35 Comments

Not a lull. Just over-hyped by you and others.

You may have written about this elsewhere, but what are your thoughts about the argument that even if alignment were achieved, we'd still be screwed because a bad actor could just align an AI to be bad, including existential-risk-level bad? Is the strategy just: if we don't figure out alignment we're quite possibly toast, so let's figure it out, and if there are problems afterwards, we'll cross that bridge when we get to it?

> I don't see this as consistent. If you get AGI in 2-8 years, you get ASI in a lot less than 2-8 more years after that.

So this sentence (and FOOM/fast-takeoff arguments more generally) assumes a certain (low) ramp-up in the difficulty of increasing intelligence, but I have never seen anyone seriously address the possibility that the difficulty of increasing intelligence might grow exponentially or super-exponentially. I'm not arguing that it _does_, because I don't know (and I think it's probably true that _no one_ knows), but it seems to me at least possible that each increase in intelligence is harder by more than the previous step made you smarter.

Imagine an (extremely simplified) intelligence ladder with Steps A, B, and C.

Figuring out how to reach Step A when you are on the ground takes some amount of time. Once you figure it out, you are smarter, and figuring out Step B will necessarily take less time than it would have _from the ground_, so in some sense it "takes less time"; but if Step B is sufficiently harder, it still might take you longer than the original Step A did, even though it takes less time than in the counterfactual where you are still ground-level smart.

I feel like I'm not making my point very clear, but hopefully you understand what I'm trying to say.

To summarize: I do not see a reason why it is _necessarily and obviously_ true that, once you get to AGI (or any other increase in intelligence from where we are now), ASI (or any further step beyond that) will take significantly less time. It _might_, but I don't understand why it _must_.
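
To make the shape of that argument concrete, here is a minimal toy model (mine, not the commenter's; the growth rates are arbitrary assumptions chosen purely for illustration). Whether each rung takes more or less time depends entirely on whether the next step's difficulty grows faster than the capability gained from the previous one:

```python
# Toy "intelligence ladder": time to climb the next step is
# (difficulty of that step) / (current capability).
# All numbers below are made-up assumptions, not estimates.

capability = 1.0

for step in range(1, 6):
    difficulty = 10 ** (step ** 2)   # assume difficulty grows super-exponentially
    time_for_step = difficulty / capability
    print(f"Step {step}: {time_for_step:.3g} time units")
    capability *= 2                  # assume each step doubles capability
```

Under these assumptions each rung takes longer than the last even though capability keeps rising; flip the relationship (difficulty growing slower than capability) and you get the standard fast-takeoff picture instead. Which regime we are actually in is exactly the open empirical question the comment is pointing at.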

“Build the AGI that will give us everything we want” (or words to that effect)... see, this is where all my cultural programming starts jumping up and down screaming, ‘No, you never do that, never goes well, always goes super badly, don’t you know you’re supposed to THROW THE RING INTO THE CRACKS OF MOUNT DOOM?’ I mean, given the multiple stacks of impossible problems you envisage before you even reach the stage of, “we possibly survive, but the future is highly uncertain and humans are by default irrelevant,” some variation on Mount Doom has got to be the preferred outcome, no? Or do you see “the AGI that gives us everything we want”, not as a huge red flashing warning sign, but as a big enough prize to justify a different approach?

One thing that does make me optimistic is how old and unchallenged the world is. China is reduced to building islands in the South China Sea and pretending to care about the Himalayas; India is reduced to pretending that Pakistan matters. And then you've got Europe and North America. And class and religion.

If AI had been on the borders of realisability any time from the 16th to the 20th centuries then there would have been no hope whatsoever. We'd all have been eaten alive by entities with complex theories about belonging whatever anyone tried to do.

- "tabula rosa" should be "tabula rasa"

> Beware of AI-generated garbage articles

Question: are these worse than the human-generated equivalent?

Around 2005, I started noticing that it was usually a waste of time trying to learn a skill via Google. Even the "good" results at the top tended to be full of subtle misinformation. It was better to find a smart person/website and learn from them.

As the 00s went on, Google's enshittification got worse. For certain search terms, the entire first page would be ads or unreadable SEO rubbish auto-generated by what looked like a 1930s-style Markov chain. Remember eHow and wikiHow, and how terrible those were? And how they just sat at the top of Google results for years and years before Panda stomped them in 2011?

We need to remember that the "human written" vs "AI written" debate is mostly of interest to people with skin in the game: e.g., copywriters and webmasters. The average guy doesn't care how a webpage was written. He just wants his question answered. It's irrelevant to most users whether the text they're reading was written by a human or an LLM or a demon-possessed bowl of alphabet soup. They just want to know how hot to bake their damned pizza.

I don't think Tegmark and Omohundro are arguing that you can necessarily find a proof of safety if one exists; rather, they are arguing that you shouldn't trust a system you cannot prove is safe.

However, as you correctly note, any "proof of safety" only proves whatever notion of safety you can formally define, and this is unlikely to ever be sufficient in the real world. Most successful exploits of "provably secure" cryptographic systems succeed through pathways that are not captured by the notion of security used in the proof (e.g. side-channel attacks, social engineering, etc.); such attacks don't invalidate the security proof or the hardness assumption on which it rests, but they still break the system.

But even if someone with a god-like security mindset formalized a notion of safety that covered every possible base, I think this approach is doomed to failure for another reason. Gödel's second incompleteness theorem states that a sufficiently powerful consistent mathematical theory cannot prove its own consistency. A possible corollary is that any sufficiently powerful AI cannot be proved safe by those who created it (or by itself, or by any other AI of similar power). This suggests that, as I think you have pointed out several times, there are really only two possibilities: (1) powerful AI won't exist any time soon and we are fine, or (2) we are not fine.
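
For reference, the standard statement of the theorem being invoked (the formalization below is mine, just to pin down what "sufficiently powerful" means; the analogy to AI safety remains the commenter's conjecture, not a consequence of the theorem):

```latex
% Gödel's second incompleteness theorem (standard statement):
% if $T$ is a consistent, recursively axiomatizable theory that
% interprets elementary arithmetic, then $T$ cannot prove the
% arithmetized statement of its own consistency.
T \text{ consistent} \;\Longrightarrow\; T \nvdash \mathrm{Con}(T)
```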

I'm glad you're talking about the problem of simply being outcompeted. I agree almost no one is thinking seriously about this. I think the closest most AI-risk people get is in asking "Whose values will we align it with?", which elides the possibility that it might be no one's values. Perhaps an analogous question is "whose values is the military-industrial complex aligned with?"

“If you get AGI in 2-8 years, you get ASI in a lot less than 2-8 more years after that.”

Curious if you can expand on that? Does it hinge on whether you believe LLMs will continue directly through to ASI via scale, or do you think it applies even if LLMs level out at near-AGI due to data limits or the limits of next-token prediction (because presumably it's the sheer volume of cheap AI research that leads to ASI)?

> Is generative AI in violation of copyright? Perhaps it is.

Imagine that US courts find generative AI in violation. (The nuances are beyond me.) What could politicians do?

- Could the attorney general reprioritize this aspect of copyright enforcement to the lowest possible level? That leaves enforcement to civil suits.

- Could Congress pass, and the president sign, a bill restricting the scope of copyright law? That could take LLMs off the hook entirely.

Would politicians do this for nothing, or start bargaining? If the latter,

- Would they ask for something related to LLMs?

- Would they ask for something bigger/broader that Meta, Microsoft, and Alphabet could do? (Considering how expensive being found in violation would be.)

What if one party held a government trifecta at the time? How partisan might the demands be, in their likely impact if not their phrasing?

> I remind everyone that we don’t let Chinese AI (or other) talent move to America, so we cannot possibly care that much about winning this battle.

And on August 8,

> No, seriously, we could devastate China if we allowed skilled immigrants to come contribute here instead of there (or from other places as well), until we do this I do not want to hear anyone talk about having to ‘beat China’ some other way, and I will keep talking about this.

What would this mean for espionage? Surely some AI talent would remain in China (due to denial of exit permission, genuine patriotism, satisfaction, indolence, better pay, etc.). Spies in the US would try to feed that stay-behind talent with everything they learn, right?

My guess: this tradeoff is substantially worthwhile for the US - for ML in particular and engineering talent in general. Thinking broadly, relative GDP growth seems like a big deal when trying to deter a rising power from starting a war.

But my guess is poorly informed. I would like to see the espionage risk evaluated seriously. For example, where would the FBI hire people capable of hunting a twenty-fold increase in Chinese spies in US ML? Does it have enough people at present?

RE: Google searches, my sense recently has been that I’ve seen a lot of AI-generated images shitting up Image Search. Deliberate attempts to replicate the trend have failed, though.

how has no one commented on the Your Mom zinger? really caught me off guard, well done

Those aren't Roon's beats; they're quoting the final verse of ERB's Gates v Jobs rap battle. Which incidentally is excellent.

https://www.youtube.com/watch?v=njos57IJf-0
