I think the plan would work and that we're doomed; my only disagreements with EY et al. are:

1) I think there are diminishing returns to recursive self-improvement, as well as to scientific research generally. It will therefore probably take several decades for an AGI to execute its plan, rather than the few weeks or months suggested by proponents of FOOM. It's extremely unlikely that anyone alive today will witness the doom of humanity, unless the AGI solves the problem of ageing in the process of gaining power.

2) It's unclear to me that we should necessarily assume an AI will seek to maximize its power and resources. Humans do this because we're descended from animals whose survival depended on maximizing power and territory, but this wouldn't be the case for an AGI. I don't understand why hunger for power should necessarily arise without the evolutionary pressures experienced by biological creatures.

3) I don't agree that *now* is the right time to be remotely worried about AGI. LLMs likely represent a dead end in terms of AGI development, AutoGPT notwithstanding. Our doom will probably come from a different technology that's still decades away. GPT-4 is highly impressive, but I'm not worried about GPT-5/6/7.
