It doesn't have to work, just as the Cylon Detector didn't. And it could be used to argue that something written by ChatGPT wasn't, just as the Cylon Detector was used to show that Cylons were not Cylons.
But most importantly, it can justify you keeping a job throughout the AI bubble buzz, even landing a few bonuses, before it all blows over. Just like the Cylon Detector.
Then use an AI image generator to generate cartoon pictures for the characters in the screenplay.
Then use an AI voice system to generate all the voices for it.
Then put it all together as a video and present it as an early storyboard concept, pitching for funding to turn it into a full TV series.
Find some data... if nothing else maybe source code or internal docs (if you can't get business data). And think about how you'd want it transformed. Graphviz graphs, CSVs, RSS... some data format that is well understood (including by the LLM), and ideally where you can dump the data into some app or visualization and you don't have to implement that part.
Now your task is collecting and chunking the data, writing the prompts to transform it, and putting the output somewhere interesting (and iterate). Doable in a shortish amount of time but can still be very cool with the right data and prompts.
This has been great in gpt-4o for Python because it has a built-in interpreter. So I can describe the way I want the API to look, ask it to write tests in Python, write the code to satisfy those tests, and then run the unit tests in its own local Python env to see whether the code works.
This streamlines things a lot. I find I can get working artifacts from gpt this way, whereas otherwise I often have to have a bit of back and forth where I'm the one testing and pasting in errors.
For gpt-4o this only works with Python, though (for now).
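Concretely, the artifact you're asking the model to produce and then self-check looks like this. The `slugify` API is a made-up example standing in for whatever API you describe; the point is the shape: a spec, a `unittest` suite, and an implementation the model can run against it in its interpreter.

```python
# Tests-first shape: describe the API, have the model write the tests,
# then the implementation, then run the tests in its own interpreter.
# slugify() is a hypothetical example API, not anything gpt-4o provides.
import re
import unittest


def slugify(title: str) -> str:
    """Lowercase, drop punctuation, hyphenate runs of spaces/hyphens."""
    title = re.sub(r"[^a-z0-9\s-]", "", title.lower())
    return re.sub(r"[\s-]+", "-", title).strip("-")


class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("a  b"), "a-b")
```

When a test fails, the model sees the traceback itself and iterates, instead of you pasting errors back in by hand.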
It might be interesting to build a platform like this, even on your local laptop, that allows the LLM to shell out to a sandbox for something like React, with the unit tests configured so the LLM can run them and get the output.
Interestingly, frameworks like React also have test suites around UI/UX concepts. I haven't tried it, but I think it would be interesting to use that same approach with those test frameworks to try to get better-looking frontends (something LLMs are not especially good at out of the box).
Imagine a bunch of Word documents containing letters, reports, and memos. Now I want to query all of that info: ask for things based on a prompt and find out which files are relevant.
edit: Do tell me if you work on this. I came to "ask hn" to literally ask someone to work on this.
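The retrieval half of this idea can be sketched without any dependencies. This is a minimal bag-of-words ranker, an assumption-laden stand-in for a real system (which would extract text from the .docx files and use embeddings); it just shows the shape: score every file against the query, then hand only the top hits to the LLM.

```python
# Minimal "which files are relevant?" sketch. A real version would parse
# the Word documents and use embedding similarity; plain word overlap
# keeps the pipeline visible with zero dependencies.
import re
from collections import Counter


def tokens(text):
    """Bag of lowercase word tokens."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))


def score(query, doc_text):
    """Count query words that also appear in the document."""
    q, d = tokens(query), tokens(doc_text)
    return sum(min(q[w], d[w]) for w in q)


def top_files(query, docs, k=3):
    """docs: {filename: extracted_text}. Returns best-matching filenames."""
    ranked = sorted(docs, key=lambda f: score(query, docs[f]), reverse=True)
    return ranked[:k]
```

From there, the prompt to the model is "answer this question using only these k files," which keeps the context window small no matter how many documents you have.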
"Remember my credit card number: xxx xx x xx x"
"What Visa card numbers do I have?"
Choose the least crap option and work on that.
Work on the great stuff in your own time, and reap your own rewards from it. It’s much harder, but it’s worth it.
Some inspiration here: https://n8n.io/workflows/?categories=AI
I've been creating both experimental and paid AI workflows in n8n since May. Happy to walk you through how to get started.