33 Comments
[Comment deleted · Mar 27, 2023]

Yep, safe. And according to Tyler Cowen (~8h ago) we’re just “arguing ourselves into existential risk from AI being a major concern”.

*chuckles* I’m in danger


"We continue to build everything related to AI in Python, almost as if we want to die, get our data stolen and generally not notice that the code is bugged and full of errors."

Absolutely suicidal. And Zapier-type integration is bordering on "opening the seventh seal" territory. The bigger problem is that the defrauded will cry out for government regulators, when nothing short of panopticon totalitarianism would be able to regulate this.


Alarming stuff! As always, I love that the way to jailbreak LLMs is to just say ‘awwww, but come on, pleassseeeee do (the awful thing you should never do)’ - maybe twice. Isn’t part of the problem that we can’t ever fully understand what’s going on inside these models?


Another potential new risk with this mode of usage, similar to

> 3. We could all get into very bad habits this way.

If GPT is, in its normal mode of operation, making encrypted API calls to every major compute system in the world and initiating large portions of the computation (say this plug-in system is popular and takes off), it could be really hard to spot if it makes a sharp left turn, versus the world where it’s just sitting behind a chat UI on OpenAI’s server farm. (It’s much easier to spot a massive spike of network requests coming from OpenAI than some shift in the global traffic patterns that already exist.)
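To make that concrete, here’s a toy sketch of the choke-point monitoring I have in mind; the telemetry, numbers, and threshold are all invented for illustration:

```python
# Toy illustration of the monitoring point above: a rolling z-score alarm on
# per-minute request counts from a single choke point (e.g. one provider's
# egress). All numbers and thresholds here are made up.
from collections import deque
from statistics import mean, stdev

WINDOW = 60        # minutes of history to baseline against
THRESHOLD = 6.0    # alarm if this many standard deviations above baseline

history = deque(maxlen=WINDOW)

def observe(requests_this_minute: int) -> bool:
    """Return True if this minute looks like an anomalous spike."""
    alarm = False
    if len(history) >= 10:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (requests_this_minute - mu) / sigma > THRESHOLD:
            alarm = True
    history.append(requests_this_minute)
    return alarm

# Baseline with a bit of normal jitter, then a sudden 20x spike trips the alarm.
# The same compute spread across thousands of unrelated services would not.
baseline = [1000 + (i % 7) * 10 for i in range(30)]
for minute, count in enumerate(baseline + [20000]):
    if observe(count):
        print(f"minute {minute}: anomalous spike ({count} requests)")
```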

A possible mitigation here is that a sharp left turn will (as we currently understand it) require lots of GPUs for self-training. But I think you’d assume that lots of generic CPU would also be useful for whatever else the AGI wants to do as part of its takeover. And I’ve already seen startups looking at using GPT to generate AWS API calls to make infra easier. (Is it a sign of the times that I find myself editing out examples of concrete strategies because I don’t want GPT to read them? Infohazard got real very quickly.)
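For illustration only (nothing here is one of the strategies I edited out), a deliberately benign sketch of the “model writes your AWS calls” pattern with a human gate on anything that mutates state; `ask_model` and the plan format are made up, not any real product’s API:

```python
# Benign sketch: an LLM turns a request into a structured "plan", and anything
# outside a read-only allowlist needs explicit per-call human approval.
import json

READ_ONLY_ACTIONS = {"describe_instances", "list_buckets", "get_caller_identity"}

def ask_model(request: str) -> str:
    # Stand-in for an LLM call that turns natural language into a plan.
    return json.dumps({"service": "ec2", "action": "describe_instances", "params": {}})

def run(request: str) -> None:
    plan = json.loads(ask_model(request))
    action = plan["action"]
    if action not in READ_ONLY_ACTIONS:
        # Mutating calls require an explicit, per-call human approval.
        answer = input(f"Model wants {plan['service']}.{action} with {plan['params']!r}. "
                       "Type 'approve' to run: ")
        if answer.strip() != "approve":
            print("refused by human reviewer")
            return
    # Real code would dispatch here (e.g. via boto3); this sketch just reports it.
    print(f"would dispatch {plan['service']}.{action}({plan['params']})")

run("show me all running EC2 instances")
```

The gate is the whole point: the model proposes, something outside the model decides.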


One thing I'm confused about is: Are you worried about the current generation of models being dangerous? Or are you worried that OpenAI's behavior here suggests they won't be responsible in the future?


Well... things escalate so quickly. Almost like AGI is already in a self-reinforcing runaway loop. 🤣? Or 😟

In any case, the cat is out of the bag. Time will tell whether LLMs, with all the abilities they can digest, remain "just a tool" or become something bigger.


I think the GPT-4 plugin system is a good idea. It's limited enough that there's pretty clearly no danger of "foom" from here: what can go wrong in the next month that isn't stoppable by humans? Worst case, OpenAI can pull the plug. But it's still possible that the system gets abused. The ideal outcome is that some hackers do manage to cause some trouble, and that OpenAI learns from the experience and shares it, so we can all design better systems.

It's starting to seem to me like "alignment" is the wrong metaphor for safety. We aren't going to have a world where only a few elites have access to AI, and there will be no shortage of humans with AI access who would like to commit huge crimes and break things. We need "restriction" - the ability to be sure that a combined AI + human system is unable to do X, where X = ... I dunno, hack computers, fly drones, manufacture viruses, the sort of thing it would be dangerous for a group of very intelligent terrorists to be doing.
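To gesture at what I mean by "restriction", here's a toy sketch of a capability gate at the tool-call layer; it is not OpenAI's actual plugin mechanism, and every name in it is invented:

```python
# Toy "restriction" sketch: the allowlist and argument checks live outside the
# model, so neither the model nor a hostile user prompt can widen them.
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str        # e.g. "browser.search"
    arguments: dict

# Capability allowlist plus per-tool argument validation (all names invented).
ALLOWED_TOOLS = {
    "browser.search": lambda args: isinstance(args.get("query"), str),
    "calculator.eval": lambda args: isinstance(args.get("expression"), str),
}

def execute(call: ToolCall) -> str:
    check = ALLOWED_TOOLS.get(call.tool)
    if check is None:
        return f"refused: '{call.tool}' is not on the allowlist"
    if not check(call.arguments):
        return f"refused: bad arguments for '{call.tool}'"
    # Real dispatch to the tool would go here.
    return f"ok: would run {call.tool} with {call.arguments!r}"

print(execute(ToolCall("zapier.send_email", {"to": "someone@example.com"})))
print(execute(ToolCall("browser.search", {"query": "GPT-4 plugins"})))
```

The point is less the code than where the check lives: if the restriction is enforced by the same model that can be asked nicely to misbehave, the "pleassseeeee" jailbreak upthread applies to it too.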


Better get my team across the API and pitch total integration to management before some other pack of idiots does.
