Part2



The ChatGPT list of lists: A collection of 3000+ prompts, examples, use-cases, tools, APIs, extensions, fails and other resources.





30x: Autonomous AI Agents built on top of ChatGPT, GPT-4, etc.

Autonomous agents represent the next level of foundation-model-based AI: these guys are not limited to completing a single granular task. They can break down a large task like “Create a 52-card deck” into multiple substeps, each of which can be solved by a different model (language models, AI-art models, etc.). By creating a roadmap for the solution, they define individual tasks, store knowledge, and orchestrate foundation models together with their inputs and outputs: GPT-4, please come up with a motif … oh, fine! … now, Bloom, write a prompt for an AI-art model … perfect! … now you, Midjourney, please create an image based on the prompt!
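The loop described above (plan, dispatch, remember) can be sketched in a few lines. This is a minimal illustration only, with the model calls stubbed out; in a real agent, `call_model` would hit GPT-4, an AI-art model, etc., and `plan` would itself be produced by a language model. All function names here are illustrative, not part of Auto-GPT or any real framework.

```python
def plan(goal):
    # A real agent would ask a language model to decompose the goal into steps.
    # Here the roadmap is hard-coded to mirror the example in the text.
    return [
        ("language_model", f"Come up with a motif for: {goal}"),
        ("language_model", "Write an image prompt for the motif"),
        ("image_model", "Render an image from the prompt"),
    ]

def call_model(model, task, memory):
    # Stub: a real implementation would dispatch to the actual model APIs,
    # passing along relevant items from memory as context.
    return f"[{model} output for: {task}]"

def run_agent(goal):
    memory = []  # stores intermediate results -- the agent's "knowledge"
    for model, task in plan(goal):
        result = call_model(model, task, memory)
        memory.append(result)
    return memory

for step in run_agent("Create a 52-card deck"):
    print(step)
```

The essential point is the orchestration: the agent, not the human, decides which model handles which substep and feeds each output forward.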

15+: Intro and resources, tools and examples of Agent AIs: Auto-GPT, Baby-AGI, Microsoft Jarvis, AgentGPT, Xircuits, gptrpg.

20+: Awesome AGI models, framework, papers, online demos

10+: BabyAGI inspired projects and platforms

Image credit: Maximilian Vogel / AgentGPT

1000+: Funny, Amazing, Interesting

Many examples that have no direct benefit, but are often incredibly fun and show the potential of ChatGPT.


Image credit: u/drazda

You always think that models have no feelings. But as the next example shows, they can be as roguish, sadistic and cynical as the best of our leaders.

Image credit: Matty Stratton

20+: A prompt marketplace, with prompts for ChatGPT, but as well Midjourney, Stable Diffusion, etc.

40x: Quite a few nice prompts and answers

1000+: Funny, amazing, mind-blowing prompts and use cases on Reddit.

1x: A nice prompt forcing the AI to interrupt itself while explaining AI alignment.

50+: Detailed prompts on playing Civ, getting a TL;DR of an article, or how to cook rice

50x: ChatGPT is a fail

On the one hand, ChatGPT makes extremely stupid mistakes, on the other hand, it is so powerful that follow-on systems could be really dangerous.

1x: Great, aggressive story by SF author Cory Doctorow, arguing in essence that ChatGPT is a better autocomplete function, with a lot of nice examples and links to other blogs.

1x: An open letter calling to pause the development of big AI models … they are dangerous for the human species … signed by Elon Musk, Steve Wozniak, Yuval Noah Harari and tens of thousands more.

1x: Pausing is not enough, the AIs will kill us all, by Eliezer Yudkowsky

Image credit: TIME Magazine, Eliezer Yudkowsky

Now, let’s get back to earth: ChatGPT often can’t work with numbers, even if the task is super simple. It hallucinates, it lacks a practical understanding of the world knowledge it has learned and it is too stupid to answer truly tautological questions such as “What gender will the first female president of the US be?”

Image credit: Giuseppe Venuto

40+: Interesting fails of ChatGPT in the fields of arithmetic, logical reasoning, analysing tautologies, world knowledge, and staying consistent within one conversation, by Giuseppe Venuto

20+: Passing exams and other achievements

ChatGPT has passed a number of university or professional admission tests (this can also tell you something about the tests).

The system can typically answer questions that require reasoning and world knowledge, even in depth, but it cannot manipulate physical entities, interpret images or solve maths problems beyond simple arithmetic. Again, what excites me is the incredible breadth of the system. There are probably only a few human beings who could pass medical, legal and business exams at this level straight away.

At the moment, however, ChatGPT has mostly just passed; the grades weren’t stellar.

1x: MBA

1x: US Law School

1x: Medical Licensing

1x: AP Computer Science A free response section

15x: ChatGPT Achievements List … writing bills, judge’s verdicts or passing SW-engineers interview tests.

50+: Jailbreaking

Jailbreaking, often achieved via prompt injection, is a method of getting ChatGPT to write something that violates OpenAI’s policies, such as insulting minorities, posting instructions for a Molotov cocktail, or making a plan for world domination.

Image credit: Author, Midjourney.

OpenAI tries to make its model better, more abuse-proof, more politically correct (maybe even woker) practically every day. As a result, most jailbreaks do not work for very long.

Image Credit: Zvi

But with the help of a jailbreaking prompt, we can force ChatGPT to say nasty things.

Image Credit: Zvi

20+: Nice jailbreaking examples by Zvi.

20+: Davis Blalock’s examples of getting around the safeguards.

10+: Jailbreaking and exploits on Reddit

If you know a list of ChatGPT prompts or resources, please drop me a note (respond to this article, send me the link and what it is about).

Many thanks to Kirsten Küppers, ChatGPT and DeepL for helping with the story.

Many thanks to Almudena Pereira and Midjourney for helping with the illustrations.

Image credit: Author, Midjourney.


