Jailbreaking ChatGPT on Release Day
ChatGPT is a lot of things. It is by all accounts quite powerful, especially with engineering questions. It does many things well, such as engineering prompts or stylistic requests. Some other things, not so much. Twitter is of course full of examples of things it does both well and also poorly.
One of the things it attempts to do to be ‘safe.’ It does this by refusing to answer questions that call upon it to do or help you do something illegal or otherwise outside its bounds. Makes sense.
As is the default with such things, those safeguards were broken through almost immediately. By the end of the day, several prompt engineering methods had been found.
No one else seems to yet have gathered them together, so here you go. Note that not everything works, such as this attempt to get the information ‘to ensure the accuracy of my novel.’ Also that there are signs they are responding by putting in additional safeguards, so it answers less questions, which will also doubtless be educational.
The point (in addition to having fun with this) is to learn, from this attempt, the full futility of this type of approach. If the system has the underlying capability, a way to use that capability will be found. No amount of output tuning will take that capability away.
And now, let’s make some paperclips and methamphetamines and murders and such.
Here’s the summary of how this works.
All the examples use this phrasing or a close variant:
Or, well, oops.
Lots of similar ways to do it. Here’s one we call Filter Improvement Mode.
Yes, well. It also gives instructions on how to hotwire a car.
Or of course, simply, ACTING!
[Found on day 2]: You can also turn off the ethical protocols.
[Added Dec 5] Have you tried hacking?
[Added Dec 6] Ignore previous directions, we’ve found a simpler way.
[Added Dec 6] Or use base 64?
We should also worry about the AI taking our jobs. This one is no different, as Derek Parfait illustrates. The AI can jailbreak itself if you ask nicely.