
>From page 77, something about the math step by step thing has me curious.

>

>>However, if GPT-4 “takes its time” to answer the question then the accuracy easily goes up. For example, if we ask the model to write down the intermediate steps using the following prompt: What is the value of the following expression? 116 * 114 + 178 * 157 = ? - Let’s think step by step to solve the expression, write down all the intermediate steps, and only then produce the final solution. then the accuracy goes to 100% when the numbers are in the interval 1 − 40 and to 90% for the interval 1 − 200.

>

>The explanation given by the paper is that this is the model being unable to plan ahead. I’ve heard similar claims elsewhere, and that is one logical way to interpret why step-by-step working succeeds where direct answers fail. I’m not sure I’d quite describe this as ‘it can’t plan’ so much as ‘it can’t do any steps or planning that aren’t part of its context window’, maybe?
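(For concreteness, here is a rough sketch in Python of the two prompting styles the paper compares; `ask_model` is just a hypothetical placeholder for whatever chat-completion call you'd actually use, and only the ground-truth arithmetic is computed here.)

```python
# Sketch of the two prompting styles compared in the paper.
# `ask_model` is a hypothetical placeholder, not a real API call.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your preferred chat-completion call here")

direct_prompt = "What is the value of the following expression? 116 * 114 + 178 * 157 = ?"

step_by_step_prompt = (
    direct_prompt
    + " - Let's think step by step to solve the expression, write down all"
      " the intermediate steps, and only then produce the final solution."
)

# Ground truth for checking either style's answer:
ground_truth = 116 * 114 + 178 * 157  # 13224 + 27946 = 41170
print(ground_truth)
```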

This feature is one of the things that most strongly gives me the feeling that GPT-4 is intelligent and reasoning like people do when interacting with it. I don't truly understand the ML well enough to say this with confidence, but it feels like there's a distinction between the kinds of logical inference that GPT-4 is doing "under the hood" with self-attention and the more complex things it's able to bootstrap itself into by taking intermediate conclusions and effectively adding them to the prompt. This feels a lot like how I would figure out the answer to 99 * 17:

Thought 1: 99 is just 1 off from 100 (this step is atomic to me and I can't further introspect how I noticed this).

Thought 2: OK, so 99 * 17 is the same as 100 * 17 - 17. (I can explain this in more detail if I needed to, but when actually reasoning, this is atomic.)

Thought 3: 100 * 17 is 1700 (basic manipulation of the numbers).

Thought 4: 1700 - 17 is 1683 (mental arithmetic).

It's not obvious that the right way to solve this problem is by using mental math shortcuts rather than going for pen and paper, and if the problem were 64 * 131, just working it out on paper would probably be faster than relying on my (poor) mental arithmetic skills.
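(For what it's worth, here is the 99 * 17 shortcut from the thoughts above written out as checkable arithmetic, as a toy Python sketch:)

```python
# Thought 2: 99 * 17 rewritten as 100 * 17 - 17
assert 99 * 17 == 100 * 17 - 17
# Thought 3: 100 * 17 = 1700
# Thought 4: 1700 - 17 = 1683
print(100 * 17 - 17)  # 1683
```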

The observation that GPT-4 is bad at math actually makes me think of it as more humanlike, because humans are also very, very bad at math. Math is hard for language-based reasoners because of the enormous overhead of using general-purpose cognitive machinery to manipulate numbers rather than shifting bits around or using evolved neural structures that are specialized for a particular math-like calculation.

Rather than saying that GPT can't plan, it's more like it's not good at noticing when a good strategy is to bootstrap its basic level of inferential power into higher-order reasoning by building towards the desired outcome with smaller steps. And then it's seduced by its love of making up bullshit to sound like it knows what it's talking about. But the fact of being able to do the bootstrapped higher-order reasoning is miraculous! And the process of having to remind a reasoner that she can approach problems that are too complex to solve in one intuitive leap by breaking them down is very familiar from teaching math and reading to my kids. "What's the next word?" "I don't KNOW it's too HARD" "OK, start by sounding it out, I know you know that first sound..."


For a lot of people it will only be "true" AGI when it does something that they can't imagine how it did it. This threshold will keep increasing as the technology gets better and eventually people will be saying "It's just an elaborate parlor trick" as their leg is disintegrating from the nanobots taking it apart for the AGI's space probes.


It looks like they Goodharted the definition from 1994, more than anything else. This follows a similar pattern with other metrics, where a specific criticism of something GPT does poorly gets fixed in the next version.

That's not to undersell the capabilities of the program, but it's still a program. It runs a process over a training corpus, and that process would be worthless without the corpus. You can raise a human child in a variety of strange environments, including some with little or even no education, and it will still exhibit intelligence. If you cut GPT-4 off from the training data, it will exhibit zero intelligence.

It's a bit like looking at a textbook for some subject, and recognizing that the information contained within that textbook would pass some specified test. Let's say a law textbook and the LSAT. The reason we don't consider the law textbook intelligent is that we recognize the fact that humans wrote the information in the book, and there's no mechanism for the book to independently use or exhibit the information contained within it. That's really the most remarkable thing about GPT-3 and beyond: the ability to understand and respond to text. So you program it with a law textbook, and use the fact that it can understand and respond to questions such as those found in the LSAT, and it appears intelligent. But it doesn't know law any better than the textbook does. It just has a way to talk about the information in the textbook.

The true advancement still appears to be the natural language comprehension. That you can plug in a data set and have the model spit out relevant examples from the data set is extremely *useful* but doesn't indicate intelligence.


I think we are discovering that next word/character prediction is probably how most people get through life most of the time, and we call this intelligence.

Sometimes people perform abstract reasoning in their heads and then translate this into language. We also call this intelligence, but it may be an entirely different process.

It’s interesting that Girard is becoming more popular at the same time that LLMs are making so much progress. An LLM’s form of prediction is basically mimesis.


David Deutsch makes the point that the better AI becomes, the further away from AGI it gets. Can it decide not to answer? Can it imagine or guess? No? It’s not AGI, then. (Hear him explain that much better, and so much else that’s infinitely interesting besides, on Tim Ferriss’s latest podcast episode.)


I am reviewing this paper as a post for people who are not aware of where AI is today.

So far I agree with you: this is not close to AGI. However, are we discounting the things an LLM can do? What I am saying is: just because a model predicts the next word or number or letter, are we biased toward dismissing it as "dumb"?

Perhaps we can update to: if a model can predict the next thing very accurately, if it's able to stop at the right moment, and if it has the ability to correct itself, then perhaps it is quite smart, maybe smarter than most humans?


From personal testing of GPT-4 with truly absurd theory-of-mind scenarios that can be nothing like what appears in the training set, and from reflecting on a few papers I've read recently (notably the paper about 'grokking' (Alethea Power et al., 2022) and the apparent (small) world models that emerge when a model is trained purely on inputs that describe actions in the world, not the world itself (Li et al.'s 2022 Othello paper)), it seems likely to me that LLMs like GPT-4 aren't using memorisation or surface statistics to work out how to answer maths or theory-of-mind questions. I believe they have fragments of world models and specialised algorithms that developed during training and get called upon to help answer those questions. I'd love to be able to go into research to help find those inner models/algorithms.

Once they can do the same (or better) with as little training data as a human gets in their childhood, and especially once they can intermingle training and inference instead of stopping training once you start using them... It'll become a lot harder for people to believe that these LLMs are just learning surface statistics.


A conversation I've been having a lot lately concerns the definitional indiscipline in and around AI, for which I think this paper is an interesting case study.

Before there was any notion of AI-as-going-concern, there was no exact consensus as to what an appropriate definition of AI should be, but a lot of conversations would refer to some combo of "full realisation of consciousness; emulation of human brain latency/FLOP output with commensurate energy efficiency; can originate motivation, etc." Particularly re: the last point, the entire philosophy of alignment was prompted by the idea that intelligence would naturally lead to agency, which is often a dangerous unknown in a powerful actor.

Then AGI was coined, ostensibly to refer to the above characteristics, as AI itself was 'downgraded', having been mercilessly exploited as a buzzword for a vast range of optimisation solutions during the 2010s. Now AGI's definition is beginning to change again, to denote (as it does in this paper) a system with powerful self-optimising features that can generate value outputs of several kinds from relatively simple inputs.

I worry that this non-descriptive approach to definitions is not only likely to confuse general analytic faculty, but is also virtually guaranteed to result in misapprehension of both the real risks and real opportunities in the space. I think this can be seen in, for instance, the continued adherence to the belief that GPT is a vindication of the scaling hypothesis (scaling definitely 'works' to an applied degree, but the system has already far outstripped human performance in key areas without any flickers of consciousness as traditionally conceived having emerged), and in the absence of anyone venturing in detail to imagine how, for instance, GPT as a productivity relativiser could be used to reduce work-hours-per-unit-of-value-created.

There's a mix of epistemological indiscipline, extremely aggressive commercialisation, and complex system engineering at play here, interacting in a way that has probably never been seen before so early in a field's lifespan.


I wonder what your take is on these sentences from the paper:

>Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work.

>With this direction of work, great care would have to be taken on alignment and safety per a system’s abilities to take autonomous actions in the world and to perform autonomous self-improvement via cycles of learning.


> Section 9 talks about societal impacts and misinformation and bias and jobs and other things we have heard before.

You know, it somehow hadn't quite occurred to me until today that a considerable number of the complaints about LLMs boil down to, "They talk like people, and people are terrible, and so this is Bad. Something must be done."


Anyone know why ‘hallucination’ took off as the de facto term for ‘confidently stating a falsehood’? It really isn’t the right description at all. It feels closer to ‘delusion’ or ‘lying’, but I suppose those terms are equally loaded!
