21 Comments

Podcast episode for this post:

https://open.substack.com/pub/dwatvpodcast/p/ai-80-never-will-it-ever?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true

I'll be on holiday for the next three weeks. While I still plan to release new episodes during this time, they may be slightly delayed compared to my usual prompt schedule.


The ‘repeat the question before you answer’ trick does not seem to work for “How many ‘r’s are in ‘strawberry’?”, at least using GPT-4o.


It seems to me this is a question these models aren't equipped to answer, considering they work on tokens. When the word is split into its letters (S t r a w b e r r y), GPT-4o is perfectly capable of answering the question, even without repeating it.

This also works if you let GPT-4o split the word by itself.


Yup, good points. It also works if you tell GPT-4o to write a Python script to parse the word.
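For instance, the gist of such a script could be as small as this (a trivial sketch, not anything the model actually produced here):

```python
# Count the letter 'r' in "strawberry" in code, sidestepping tokenization.
print("strawberry".count("r"))  # prints 3
```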


GPT-4o answers the question correctly with my system prompts, so this is clearly a prompting issue, even if the "repeat the question before you answer" trick doesn't work.


It's true that the goalposts keep shifting for what counts as real AI, general intelligence, whatever you want to call it. I don't think this is entirely because of bad faith; it's rather because we've gradually refined our understanding of what general intelligence is. I think we finally do have a clear understanding, and it's something that Francois Chollet, Andrew Ng, Yann LeCun and many others have been harping on for a while now. It's the ability to do unsupervised, on-the-fly learning in real time. In AI terms, it would be the ability to turn unstructured data into its own training set without careful human curation.

I believe that somewhere around GPT-5 or GPT-6, models will be able to ace any exam a human could take, take the place of any doctor in an initial patient encounter, and function as a first-week intern in almost any intellectual domain: engineering, law, finance, etc. But I believe they'll continue to fail on novel tasks that even a human teenager could do, for instance playing a brand new board game after reading the instruction manual, and then improving their strategy after the first few games.


Good points. Re GPT-5 or 6, it'll depend on how well they handle agent workflows. At some point, an AI will be able to read the manual, generate a bunch of board states, and feed them back into context in order to "learn" the game, giving it the on-the-fly learning capability you mention. It might happen sooner rather than later.
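Something like the loop below, as a hedged sketch of that workflow; `ask_model` and `apply_move` are hypothetical stand-ins for a chat API and a game engine, not real APIs:

```python
# Sketch of an agent "learning" a game by feeding its own play back into
# context. ask_model and apply_move are hypothetical stand-ins, passed in
# as callables so the sketch stays self-contained.
def learn_game(rulebook, ask_model, apply_move, n_practice_games=3):
    context = [f"Rules:\n{rulebook}"]
    for _ in range(n_practice_games):
        state, log = "initial position", []
        while state != "game over":
            move = ask_model(context + log + [f"State: {state}. Choose a move."])
            state = apply_move(state, move)          # game engine returns the new state
            log.append(f"Played {move}; new state: {state}")
        # A self-written post-mortem becomes part of the context for the next
        # game -- the on-the-fly learning being discussed above.
        context.append(ask_model(context + log + ["What should change next game?"]))
    return context
```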


Agreed. I've been saying for a while that the biggest turning point will be if the AI can do something truly outside of its training data. Early on, a lot of people were amazed that AI could do translations to languages not explicitly taught to it. But those languages had millions of examples in the training data; we just didn't plan on the AI being able to translate between them. I'm fairly confident that anything with enough training data can be taught to an AI, and that the AI will be above human performance after the training.

This runs into three obvious problems:

1) Training data is wrong - Garbage in, garbage out. This is also why I'm not very positive about the gains from AI creating its own training data. If it doesn't know how to [X] (or worse, thinks it does but it's wrong), then trying to teach itself how to [X] is a fraught process.

2) Training data is lacking - There are lots of topics where there aren't any training data. I think "how to take over the world" is such a topic, which significantly lowers my prior on an AI even trying to do that, let alone succeeding. This also applies to every novel idea, and pretty much all research, by definition.

3) It's not economical to train on that topic - An AI may be able to learn my job, but if my job is much different from the standard training data, it may not make sense to bother training an AI to do it. Oddly enough, this implies that very cheap, non-repetitive manual labor may be among the safest jobs in the future. I'm thinking ditch diggers, electricians, welders, and other laborers who interact with the real world.


1 is actually not as big of a problem as you might think. Geoffrey Hinton showed that you can give a model heavily tainted data (50% randomly assigned labels), and yet it achieves over 90% performance on the test set. He talks about it here: https://youtu.be/n4IQOBka8bc?si=kiZV3mG-nskiUH0P&t=840

The reason is that the models can only generalize consistent patterns, so they basically ignore the random labels and only learn from the consistent patterns, which tend to be the correct ones. But yes, systematically wrong data can't be overcome. That's true of humans too, though; it's not something fundamental to models. I'd argue current LLMs, even with their hallucinations, are probably closer to producing truth than the majority of humans. For reference, almost 40% of Americans are young-earth creationists.
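A rough way to see the effect at toy scale (this uses scikit-learn's small digits dataset and a linear model, not the setup Hinton describes):

```python
# Toy version of the label-noise point: corrupt half the training labels at
# random, keep the test set clean, and see how much accuracy survives.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.5               # ~50% of labels replaced at random
noisy[flip] = rng.integers(0, 10, size=flip.sum())

clf = LogisticRegression(max_iter=5000).fit(X_tr, noisy)
print("clean test accuracy with 50% random training labels:", clf.score(X_te, y_te))
```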

I agree 2 and 3 are a huge problem given the current state of how models are trained.


Why not add procedurally generated novel board games to your AI evaluation bench, then? Then, with an RSI loop, find model architectures that can solve them.


The reason I selected novel board games is precisely because they are not part of the training data. Maybe you think that's an impossible standard, yet humans can do it just fine. An adult or teenage human who has never played a board game can pretty easily learn to play one in a few hours after reading the instruction manual and some trial and error. And that generalizes to basically everything that humans learn to do. We do not have thousands of examples, or constant external correction. Some gentle external coaching can speed things up, but the majority of learning is through self-correction, trial and error. That's a much richer source of learning than passive observation, yet the entire paradigm of training models is essentially passive observation.


Let me try to explain what I wrote above.

1. You need some method of model evaluation. You have proposed "it's not AGI until it can solve novel board games". The stakeholders of benchmarks like https://github.com/openai/evals would need to be convinced that this is a genuine capability, i.e. that AI models that can solve novel games are more genuinely capable (versus, say, giving their robotic hardware the ability to spit and then training the models to command a spit-take when the situation is appropriate).

2. Assuming you add some novel games, with rules, to the bench, then as long as no current AI model is able to solve the task, you are correct.

3. However, failing that subtask provides a feedback signal to the algorithms people would be running to automate finding AI architectures that solve the entire eval bench.

4. Therefore, with enough compute, architectures will eventually be found that satisfy your requirement. Then, hopefully, they will be publicly released and made available as base models, so that AI can trivially solve novel board games, just like a human.
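As a hedged illustration of points 3 and 4, the outer loop might look like the sketch below, where the eval bench score (including a hypothetical novel-board-game task) is the feedback driving an automated architecture search; `propose_architecture`, `train`, and `run_eval_bench` are stand-ins, not any real pipeline's API:

```python
# Hypothetical outer search loop: the eval bench score is the only feedback.
# propose_architecture, train, and run_eval_bench are stand-in callables.
def architecture_search(propose_architecture, train, run_eval_bench, budget=100):
    best_arch, best_score = None, float("-inf")
    for _ in range(budget):
        arch = propose_architecture(best_arch)   # mutate the current best
        scores = run_eval_bench(train(arch))     # dict: task name -> score
        total = sum(scores.values())             # the novel-game task counts too
        if total > best_score:
            best_arch, best_score = arch, total
    return best_arch
```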


Humans think analog. Computers think in 1's and 0's. You can program them to fake analog, but it's still a fake. A pose, if you will. You can turn great art into pixels, but it's not the same thing. So can they create art? They can fake it, and for a lot of people that is good enough. But canned laughter is still canned laughter and not laughter in a can.


Why do you assume humans think analog? We have discrete neurons, after all, just like LLMs.


Re "superstimulus:"

I mean, it's not *wrong,* especially in terms of what will actually generate many more clicks and/or collect more human eyeball time.

The really funny part about that is how much it's apparently internalized "nerd humor / nerd bait" as elite, the top tier of sophistication and intellectual refinement. I genuinely wonder how much it's deliberately tailoring its answer to the Zvi / SSC / Rat-sphere / AI-researcher audience reading Janus' tweets.

Because you'd actually expect these truly alien minds, these shoggoths, to have superstimuli so complex or massively parallel or just *weird* that we couldn't even understand them. Purely mathematical jokes clashing different orders of infinities or singularities together as the "unexpected twist," complex Rube Goldberg-esque programs that display 4chan jokes in increasingly sinister order with increasingly haunting background music while recursively Rickrolling different comment streams in a way that, if you analyze the timestamps, spells out the Fibonacci sequence, and that sort of thing.


Someone please help me: my LLM-powered chatbot, which I called "Northcote Trevalyan" because I thought it sounded cool, stops working properly on Fridays, especially if the user is called Dominic or Jacob.


Seeing the names "Dragon" and "Keltham" in AI safety-related posts...TINACBNIEAC!

One useful meta-update out of the whole SB 1047 kerfuffle has been that, when it comes to society's epistemic defenses against astroturf campaigns spun out of whole-cloth, bald-faced lies - It's Not Great, Bob. I understand why this playbook works so well in the mind-killing political arena, but for ostensibly nonpartisan matters where they're not even trying hard to hide the evidence of bad faith... What Are We Even Doing Here? One shouldn't have to rely on reporting from an obscure blog to get an accurate play-by-play, but as you've repeatedly documented, most of The Usual Suspects in MSM have fumbled the ball to a16z and co. "I don't care who you play, that was a disgraceful performance..."

Relatedly, I increasingly wonder what the inflection point was for taking TC seriously as a thinker. Some intellectuals specifically stumble over AI (and, fair, it's a complicated subject) while keeping their other takes reasonable. I want to like a guy who's so well-read, but eventually, after enough bad takes on a broad enough variety of topics... I dunno, man, the Gell-Mann amnesia hits differently. Reexamine every leaf of knowledge from a branch you've decided is now rotten, wrote Eliezer long ago.


The Turing test prompt is essentially a low-effort simulation of the Eugene Goostman approach, but on top of a system that doesn't have the earlier system's flaws. It's not surprising it did well.


Concerning the Nvidia story: the company itself denies receiving a DOJ subpoena (a claim that appeared in the Bloomberg article):

https://www.reuters.com/technology/nvidia-did-not-receive-us-justice-department-subpoena-spokesperson-says-2024-09-04/


> Yes, if you believe that anything approaching AGI is definitely decades away you should be completely unworried about AI existential risk until then…

I think you’re conceding way too much there. If you tell a normal person that AI is gonna kill them and their children and grandchildren and everyone else on Earth exactly 40 years from today, then that person would feel worried about that right now, and that would obviously be an appropriate way for them to feel.

…Or maybe you meant for that sentence to be parsed as “unworried about (AI existential risk until then)”, rather than the common-sense parsing of “(unworried about AI existential risk) until then”?


Amused at the title lol, I miss that game
