43 Comments

Read the following comment as the ML equivalent of an "in mice" reply to bio/medicine hype posting:

#15. Some context I think the explanation misses: There have been _tons_ of papers (starting right after BERT blew up) proposing ways of extending transformer context length. Just to name a few, you've got:

1. Transformer-XL

2. The Compressive Transformer

3. Reformer

4. State space models

5. Approximately a million different efficient-attention variants that reduce the cost Pete's explanation discusses (sketched below)

The thing is, none of these has replaced good old-fashioned (basically-)dense attention for large-model pretraining yet, and I don't think the experiments in this paper establish that this one will be the exception. It might be, but the question is always just "should you spend your flops on a bigger model / more data, or on longer context?"
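(For anyone who hasn't seen the cost spelled out: below is a minimal numpy sketch of plain dense attention, the thing all of these variants are trying to displace. The shapes and sequence lengths are made up purely for illustration; the point is just that the score matrix is seq_len × seq_len, so compute and memory grow quadratically with context length.)

```python
import numpy as np

def dense_attention(Q, K, V):
    """Single-head dense attention. The score matrix is (seq_len, seq_len),
    so time and memory grow quadratically with context length."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
d = 64
Q, K, V = (rng.standard_normal((512, d)) for _ in range(3))
print(dense_attention(Q, K, V).shape)  # (512, 64)

for seq_len in (1_000, 8_000, 32_000):
    # The score matrix alone holds seq_len**2 floats.
    print(f"seq_len={seq_len:>6}: score matrix has {seq_len**2:,} entries")
```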


Re: Superforecasters, I _really_ want to see the question wording, since I feel like if it included "we do AlphaFold-type stuff for biology research," even the most AI-detractor types would put it at >1%. Maybe it was about some very specific scenario?


It's Demis Hassabis, not Dennis. Possibly autocorrect malfunction.


*Demis Hassabis, not Dennis


Re: Pascal's Mugging

The primary issue is that we have no objective means of determining how likely any of the various necessary conditions are (AI can bootstrap itself, AI will be destructive, AI will be able to overcome humans, etc.), or what it would take for them to hold. So you can say 1% or 5% or 50% or 0.00001% and there's no way to debate it. That many people seem to think it's >5% doesn't make that true. Many more people believe that Jesus will return, with much greater than 5% probability, and the term Pascal's Mugging was coined in response to exactly that kind of claim (simplifying a much longer conversation, and assuming readers here are aware of it).

If you can provide a formula to determine the chances of a world-ending AI scenario that's objective and reasonable - rather than speculative and subjective as with the current models - then we can get away from the Pascal situation. Otherwise you need an objective mathematical model to determine the chances of Jesus returning, in order to differentiate between the two scenarios. If you have no objective method to differentiate, then you have done nothing to overcome the criticism.


One seemingly important thing I never see talked about in FOOM debates: What if the difficulty of increasing intelligence increases faster than the increase in intelligence?

Obviously no one knows the answer, but FOOM _requires_ that the difficulty of making yourself smarter grows more slowly than the intelligence gains, and it is not obvious to me that this _must_ be the case. It _might_ be, but I can easily imagine that it isn't: increases in intelligence _might_ be inherently self-limiting, in the sense that each further advance takes longer and more intellectual work than the last, even after accounting for the fact that you are now smarter.

Since no one can _know_ if this is true or not, I'm not arguing that we should depend on it, or that we shouldn't be concerned about existential risk, but I think that the possibility that this is true _does_ put a limit on how confident we should be in P(Doom).
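(To make the shape of the argument concrete, here is a toy recurrence, with every number made up and no claim that it describes real AI systems: research output per step scales with current intelligence I, while the difficulty of the next improvement scales as I**alpha. Whether you get acceleration or a plateau depends entirely on whether alpha is below or above 1.)

```python
# Toy model of recursive self-improvement (all numbers made up). Per step:
#   research output      ~ current intelligence I
#   difficulty of a gain ~ I**alpha
# so the gain per step is I / I**alpha = I**(1 - alpha).
#   alpha < 1  -> gains outpace difficulty: accelerating returns
#   alpha == 1 -> they cancel: roughly linear progress
#   alpha > 1  -> difficulty wins: progress slows over time

def simulate(alpha, steps=50, intelligence=1.0):
    for _ in range(steps):
        intelligence += intelligence / intelligence**alpha
    return intelligence

for alpha in (0.5, 1.0, 1.5):
    print(f"alpha={alpha}: intelligence after 50 steps ≈ {simulate(alpha):,.1f}")
```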


The EU stuff makes me sad. It seems increasingly likely that we'll regulate away anything fun or useful and live boring lives that are superficially safe but come with horrible, non-obvious downsides. Worse, given enough time, our descendants will forget this stuff was even possible, or at least regard it as wildly unsafe and immoral.


> Strangely, also D&D encounters?

> Brian: He has a point it did this to my D&D stuff too, asked it to put together 3 encounters for a party of 4 level 7 characters and it was like nope.

For what it's worth, I tried 5 times with GPT-4 and got good answers every time. (I'm using the API, so maybe it's different in the playground? Or maybe it's only 3.5 that has this restriction? Maybe a different, less-generic system prompt? I didn't bother exploring.) My prompt was:

> I'm running a D&D campaign. Can you come up with 3 encounters for a party of 4 level 7 characters?
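In case anyone wants to reproduce this, the call looked roughly like the sketch below (openai Python client as of early 2023; the system prompt is just something illustrative I chose, not whatever ChatGPT or the playground actually uses):

```python
import openai  # pre-1.0 openai client, circa April 2023

openai.api_key = "sk-..."  # your key here

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # Hypothetical, less-generic system prompt -- purely an illustration.
        {"role": "system", "content": "You are an experienced D&D 5e game master."},
        {"role": "user", "content": "I'm running a D&D campaign. Can you come up with "
                                    "3 encounters for a party of 4 level 7 characters?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```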


That Pascal’s Mugging comment, oh my God…

I feel bad for Eliezer. He has taken it upon himself to explain to everyone why they’re wrong. This is incredibly valuable work. Someone has to do it. Unfortunately that means interacting with the absolute dregs of intellectual society. I can’t blame him for being smug and arrogant after dealing with objections like this. I never would have expected anyone to make the mistake that Rohit did.


Curious about people's thoughts on the likelihood of an AI winter over the next couple of years. I'm pretty bullish on transformative AI over the next few decades, but my short-run guesses have gotten a bit more conservative: the GPT-3.5 -> GPT-4 differences feel more subtle than I would have expected, Sam Altman seems pretty confident we're done scaling for a little while, and I see some mundane utility but not much, though I'm sure that will change over a months-to-years timespan even with current capabilities (maybe not rapidly). Certainly we can all agree that the hype cycle that went "prepare for every week to get weirder from here on out" was wrong, right?


Re: LLM scaling; it's hard to see where more performance would come from, at least in my (admittedly limited) view. GPT-4 likely has around 1T to 10T parameters, but according to the new (Chinchilla-style) scaling laws it would require roughly 20T to 200T tokens for compute-efficient training. I'm not sure obtaining that much high-quality data is feasible, and the compute cost would be absurd.
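(The 20x multiplier is just the Chinchilla rule of thumb of roughly 20 training tokens per parameter; back-of-the-envelope, with the parameter counts above being pure speculation:)

```python
TOKENS_PER_PARAM = 20  # Chinchilla-style rule of thumb (Hoffmann et al., 2022)

for params in (1e12, 10e12):  # speculative 1T-10T parameter range from above
    tokens = params * TOKENS_PER_PARAM
    flops = 6 * params * tokens  # common approximation: training compute ≈ 6·N·D
    print(f"{params/1e12:.0f}T params -> ~{tokens/1e12:.0f}T tokens, ~{flops:.1e} FLOPs")
```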


Genuinely curious what an Epic AI-generated letter will be like. I'm a psychiatrist, and my patient assessment letters are often written more for my future self than for anyone else (GPs don't actually read anything but the last few lines!). I suspect AI-generated letters will lack flair for describing a patient's mental state, but we'll see. The Epic tools for writing a mental state exam are broadly terrible to begin with.


Re: AI discovers Kepler's law: symbolic regression has been able to do this for quite a while, e.g. https://www.science.org/doi/10.1126/sciadv.aav6971
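(Not the linked paper's method, just a toy illustration of the idea: even an ordinary log-log fit on planetary data recovers the T ∝ a^(3/2) exponent; a symbolic-regression system searches over whole expression forms rather than assuming a power law up front.)

```python
import numpy as np

# Semi-major axis (AU) and orbital period (years) for the six classical planets.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Fit T = c * a**k by linear regression in log space.
k, log_c = np.polyfit(np.log(a), np.log(T), deg=1)
print(f"fitted exponent k ≈ {k:.3f}  (Kepler's third law: k = 1.5)")
```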


For the business-process documentation, the implication from the text seemed pretty clear to me: the counterfactual without ChatGPT is that they would never have gotten around to doing it, due to the (non-)trivial inconvenience of having to find someone to spend a day and a half on it.


#1: Regarding telling the chatbot AIs they're experts, now I finally understand why, in the original Tron movie, Flynn was telling Clu, "You're the best program that's ever been written. You're dogged and relentless, remember?"

https://www.youtube.com/watch?v=PQwKV7lCzEI


I think you're conflating Sharp Left Turn with Treacherous Turn.

Sharp Left Turn (https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization): An AI starts out basically aligned but only having capabilities in a narrow range of domains. Then its capabilities become more general, which allows it to rapidly become more capable at many new domains. But the internal properties that made it aligned in its original domain fail to generalize to the new domains, and the AI is now misaligned.

Treacherous Turn (https://www.lesswrong.com/tag/treacherous-turn): An AI starts out already misaligned with humans, but without the resources to seize control of the world yet. It pretends to be aligned for long enough to gather more power, then betrays the humans and seizes control when it has the opportunity.

Indeed, as you note, humans do small-scale Treacherous Turns all the time. But we don't do Sharp Left Turns all the time. The only human Sharp Left Turn (according to Nate) is the evolutionary transition from chimps to humans (which Quentin disputes as an example).
