18 Comments

Jesus Zvi, that first paragraph is killing me.


The robotics aspect of Westworld is increasingly looking like the hardest part. Especially hands. Clone and the Tesla Bot are what I'm paying the most attention to, and I think if AI hits a point where it can improve designs or controls there, we might see sudden improvements. Otherwise I only expect meaningful gains if someone can get a commercial foothold with a humanoid robot that then lets them scale iterative R&D (like Tesla did with the Roadster -> Model S -> everything else).


typo: "2. In our ‘casual’ or..." I think you mean "causal" there


I really REALLY want GPT-4 API access so I can play with my own AutoGPT ideas. The real test will be whether we can get decent performance out of cheap models trained on GPT-4.

If you could economically and swiftly recurse prompts, AutoGPT style, maybe on your own laptop, I think that would really open up use cases. GPT-4 seems clever enough to project manage, break down tasks, deploy code to the cloud, etc., with one agent doing each task at a time to stay within context limits.
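Something like this toy loop is what I mean by recursing prompts (just a sketch using the 2023-era openai Python package; the prompts, the ATOMIC convention, and the depth cap are all made up):

```python
# Sketch only: recursive task breakdown, AutoGPT style.
# Assumes OPENAI_API_KEY is set; prompts and stopping rule are illustrative.
import openai

def ask(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def breakdown(task: str, depth: int = 0, max_depth: int = 2) -> list[str]:
    """Recursively split a task into subtasks until they look atomic."""
    if depth >= max_depth:
        return [task]
    answer = ask(
        "Break this task into 3-5 smaller subtasks, one per line, "
        f"or reply ATOMIC if it needs no breakdown:\n{task}"
    )
    if answer.strip() == "ATOMIC":
        return [task]
    subtasks = [line.strip("- ").strip() for line in answer.splitlines() if line.strip()]
    leaves = []
    for sub in subtasks:
        leaves.extend(breakdown(sub, depth + 1, max_depth))
    return leaves

# Each leaf could then go to a fresh agent/context to stay within limits.
print(breakdown("Build and deploy a basic todo app to Google Cloud"))
```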

Haven't tried to get it to recursively break down a complex project, since it's tedious to do in chat mode, but it can 'ship' basic apps to Google Cloud with minimal user help.

I also wonder how good plugin integration will be. I can sort of get it to write a spec for a feature, run it, implement the feature, and keep iterating until the spec is green, then move on. Something like AutoGPT specifically integrated with a web framework like Ruby on Rails, which has pretty clear structural rules for where code goes, how to test it, etc., could actually be productive.
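Roughly this kind of harness is what I'm picturing for the spec-green loop (again just a sketch; it assumes an RSpec file and a single implementation file, and the prompt and retry details are invented):

```python
# Sketch only: keep asking the model to fix the implementation until the spec passes.
# Assumes a Rails project with RSpec and OPENAI_API_KEY set; details are illustrative.
import subprocess
import openai

def ask(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def iterate_until_green(spec_path: str, impl_path: str, max_attempts: int = 5) -> bool:
    for _ in range(max_attempts):
        # Run just this spec file and capture the failure output.
        run = subprocess.run(
            ["bundle", "exec", "rspec", spec_path],
            capture_output=True, text=True,
        )
        if run.returncode == 0:
            return True  # spec is green, move on to the next feature
        # Feed the current implementation plus the failure back to the model.
        with open(impl_path) as f:
            current = f.read()
        fixed = ask(
            "This RSpec run failed. Rewrite the implementation file so the spec passes.\n\n"
            f"Spec output:\n{run.stdout}\n{run.stderr}\n\n"
            f"Current {impl_path}:\n{current}\n\n"
            "Reply with only the new file contents."
        )
        with open(impl_path, "w") as f:
            f.write(fixed)
    return False
```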


Thank you for this overview and your thoughts. Brilliantly well-covered. And so brief. : )


"Seems important to think ahead here - is this a good fire alarm? Would anyone be willing to say in advance ‘if this happens, with the following qualifications, then that is scary and I will then start to worry about existential risks from such agents in the future when they get more capable?’"

Yes, that's a good fire alarm, but the details matter. If it's just really good at noticing free dollar bills lying around on the street (arbitrage), then it's only a little alarming. But if it develops capabilities to somehow change/trigger human behaviors in a way that allows it to profit, that is a HUGE RED WAILING fire alarm to me. We've seen hints of this in the Diplomacy bot and poker bots, but nothing like "creating a false narrative that gets reported by the WSJ which causes a trading cascade where it ends up making 100 million dollars over 72 hours by taking it from actual humans and human-run institutions".


Can't wait til some wag gives it a task involving a basilisk.


Thinking about GPT's current capacities, I would be interested to know people's thoughts about its current "understanding" of the world. It seems to me that a lot of problem-solving and inventiveness depends on a mass of detailed knowledge that supplies materials and models for ideas, and also makes inventiveness possible -- that is, effective reconfigurings of things.

I have been playing around with GPT, trying to get a sense of its depth of "understanding" by giving it puzzles I've invented. One described a man stuck in a locked room 40 feet up with an open window; the room contained a pitcher of water and a chamber pot, and the man had with him the jeans he was wearing and a pocket knife. The question was how he could get out. I think almost anybody would pretty quickly come up with the idea of cutting the jeans into strips, tying them together, and making a rope. GPT did not. It had no suggestions at all. To come up with the idea, I guess you have to know that a knife can cut cloth, that jeans fabric is strong, and that a pair of men's jeans contains enough fabric to make reasonably wide, hence strong, strips.

So then I re-asked, giving a hint: can the man use his clothes? GPT then said he could make a rope from the jeans, but also threw in a bunch of stoopit stuff. It hallucinated a structure in the room he could tie the rope to (I had forgotten to mention a structure in the description of the room). It also said you could pour water into the chamber pot and lower it on the rope as a way of stretching the rope in case it does not quite reach the ground -- which is dumb, because of course the man's weight would do the stretching as he descended.

So I'm wondering what kind of process would smarten up an LLM in a way that would make it better at coming up with good subgoals (make a rope) to reach goals (escape the tower).


A really good overview of AutoGPT, thank you.

I think I agree with most of your predictions at the moment, with the possible exception of #20, where I have significant doubts that we'll see a "real" runaway agent before the end of 2023. I don't see that as a real possibility until either a) we have agents that have access to financial resources and can effectively self-propagate by purchasing services such as cloud instances where they can run copies of themselves, or b) someone weaponizes an agent as malware that is capable enough (though this would arguably be subject to termination eventually via anti-malware solutions). For point a), I don't really see this happening before the end of 2023 in a sufficiently robust way that self-replication becomes unstoppable, but I could be wrong. For point b), I think there are some inherent limitations in the nature of such malware - like the large size of the model weights that would need to be replicated - that would make it ineffective and relatively easy to stop (at present, anyway).


Skeptical it's going to revolutionize VR, just based on a history of nothing ever successfully revolutionizing/revitalizing VR. Suspect that if, say, Meta thought this was about to happen, they probably wouldn't be dropping the Metaverse the way they are.

While I'm commenting, gonna toss a lil writeup I did on trying to use ChatGPT on the job here if anyone's interested:

https://scpantera.substack.com/p/ai-and-pharmacy-1


Please, when you link to a video, could you note that it's a video link, similarly to the way you note that a linked article is paywalled? For the same reasons, but also because a video link can suddenly start making loud noises that you weren't expecting.

(I now have my phone's browser configured so that it never plays sound without first asking, but I think this is a good idea in principle, and I'm sure it would prevent annoyance for other people.)


"In the short term, AutoGPT and its ilk will remain severely limited."

What is "the short-term" when this list has 24 points and by point 20 "by the end of 2023, [we] have the first agent GPTs that have been meaningfully ‘set loose’ on the internet"? The next two weeks? Is the "short term" already over?

Also, what is GPT 4-N? Do you mean GPT 4.N, that is, some improvement on GPT-4 that doesn't make it to GPT-5 territory?
