Discussion about this post

Jeffrey Soreff:

Three broad comments:

First, the nonproliferation and competitiveness concerns really predate and are broader than AI (or ASI) concerns. They apply to any enabling technology, from iron working to CAD/CAM software. Now, a large quantitative change _is_ a qualitative change, and it may well be that AI advances produce a large speedup, or a large extension of the spectrum of possible designs. Still, the fundamental worry about an economic or military competitor acquiring an enabling technology, or an improvement to one, and using it to improve their products or their weapons has been with us for centuries.

We don't _know_ how large an advantage in design time or design effectiveness AI (or ASI) will provide in general. We've seen some striking examples (protein design sped up by orders of magnitude, some coding tasks, but not yet others, sped up by 10X).

Second, I _do_ see the loss of control concern as primarily specific to AI. There is a partial exception to this in that we have already ceded control to automated systems in situations like parts of process control and all of the detailed tiny decisions made by all of our existing computerized systems. Here the nomenclature gets fuzzy.

Roughly speaking, let me call an AI an "AGI" if it is more or less equivalent to a human in capabilities. Delegating a decision to it is very much akin to delegating it to a human subordinate. There are principal/agent problems, but, at that level, they are ones we've had for as long as we've been human.

I lied. It isn't really plausible to have an AI exactly equivalent to human capabilities, since existing LLMs already have a superhuman _breadth_ of knowledge. So if we get an "AGI" with more-or-less the capabilities of, say, a 115 IQ human in any one area, its overall capabilities are at least weakly superhuman in the sense of breadth. Another sense in which I think we get weak ASI by default is if we populate a competent organization (e.g. SpaceX) with AGIs in each of the positions humans hold today. SpaceX, as a whole, is capable of feats that no single individual is capable of, and presumably the same would be true if it were built of AGIs.

Now, neither of these scenarios results in anything incomprehensible to humans. We can understand any single action of the broad AGI with a 115 IQ, and we can understand what a SpaceX-o-matic is trying to do, and even how it is trying to do it, for any given subtask. Given success in building an AGI at all, I think these scenarios are pretty much a given.

ASI is sometimes taken to mean something that is as much smarter than humans as humans are smarter than, e.g., dogs. Now, I think this is probable, but it is an _open_ question whether it can happen. We _don't_ have an existence proof. We don't (unlike the SpaceX case) have an architecture which we are confident will work. I'm going to call this species-jump-ASI. If we get species-jump-ASI, then, at best, we hope we set up its initial preferences so that it values humans, and we rubber-stamp its decisions, with no idea how it does what it does, and barely an idea of what or why. I see loss of control as baked into any situation where we build species-jump-ASI.

Third, I'm really skeptical of the ability to detect an imminent jump to improved AI effectiveness. Really, the only parts of the AI development process which have a signature large enough to detect are the pretraining of frontier models and the chip fabs supplying the CPUs & GPUs used in that pretraining. But

a) Some of the major advances have been in areas like sophisticated prompting and fine-tuning for reasoning, neither of which has anything like the signature of the LLM pretraining phase (a rough sketch of the scale difference follows after point b).


b) There is hope for much more data-efficient (and, presumably, compute-efficient) frontier model training. See https://www.youtube.com/watch?v=Z9VovH1OWQc&t=44s at around the 3:40 mark and https://seantrott.substack.com/p/building-inductive-biases-into-llms
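
To put rough numbers on the "signature" point in (a), here is a minimal back-of-envelope sketch. The parameter and token counts are illustrative assumptions of mine, not figures from the comment; the calculation only uses the commonly cited ~6 × parameters × tokens approximation for training compute.

```python
# Back-of-envelope comparison of compute "signatures" (illustrative numbers only).
# Uses the commonly cited approximation: training FLOPs ~ 6 * parameters * tokens.

def training_flops(params: float, tokens: float) -> float:
    """Rough training compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

# Assumed, order-of-magnitude scales (hypothetical, for illustration only):
pretraining = training_flops(params=1e12, tokens=1e13)  # frontier pretraining run
finetuning = training_flops(params=1e12, tokens=1e9)    # reasoning fine-tuning pass

print(f"pretraining: ~{pretraining:.1e} FLOP")
print(f"fine-tuning: ~{finetuning:.1e} FLOP")
print(f"fine-tuning signature is ~{pretraining / finetuning:,.0f}x smaller")
```

Under these assumptions the fine-tuning run is four orders of magnitude smaller than the pretraining run, and sophisticated prompting adds only inference-time compute spread across ordinary usage, so neither leaves a concentrated training signature to monitor.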

AW:

The whole situation looks pretty grim to me. Any newly created technology quickly becomes so integrated into society that it’s impossible to stop using it or scale it back in the slightest.

Farming? Sure, we know how to stop, but we don’t have “meaningful” control over stopping, since the earth can’t support this many humans without farming.

Electricity? Yeah, we know how to turn it off, but we don’t have meaningful control over it either, since shutting it off would lead to mass death and coordinating that is impossible.

The internet? Shutting it off would lead to a 90% drawdown in the stock market, collapse of the dollar, societal devastation, etc.

AI? I think we know how this is going to go. Full reliance on it, and it won’t even be realistic not to fully embrace the latest superintelligence to the maximum extent, due to competitive pressure and how many lives, dollars, and economies will be reliant on it.

In short, seems like Moloch is going to get his biggest sacrifice yet… all of humanity!

https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
