Discussion about this post

Ben Reid:

THE most useful, thoughtful, *smart*, bleeding-edge, information-packed download I've found yet on Substack. Appreciate your work, Zvi!

Random Reader:

Two thoughts.

During the Iraq War, I read an excellent essay titled something like, "Everything I know about the war, I learned from my expensive and prestigious business school education." The author had correctly predicted "Saddam has no WMDs" when the administration claimed "Saddam has WMDs" and the general consensus was "Saddam probably has at least a few WMDs." The specific principle the author had applied was, "Once you know someone is a liar, you cannot 'adjust' their claims. You must instead throw out their claims entirely." The evidence for this rule was, "Seriously, I had to read a zillion business school case studies about what happens if you 'adjust' the claims of known liars." This is relevant to Sam Altman: he has been accused of lying to manipulate the board, and of lying to other people in the past. So we should discard literally everything he claims about wanting AI safety, and we should reason based on his actions.

Second, I am on Team "No AGI". Specifically, I have P(weak ASI|AGI) ≥ 0.99. And conditional on building even weak ASI, I have a combined P(doom) + P(humanity becomes pets of an ASI) ≥ 0.95. The remaining 0.05 is basically all epistemic humility. Conditional on us building even weak ASI, my P(humanity remains in charge) is approximately 0.

I am uncertain, however, of the exact breakdown of P(doom) and P(we're pets). I am guardedly optimistic that P(pets) might be as high as 0.3, if we actually build a single weak ASI that understands human flourishing and if it decides (of its own accord) to place some value on that.
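
Chaining those stated numbers gives a rough lower bound, as a sketch assuming "doom or pets" only happens via weak ASI (so the conditionals multiply):

```latex
% Sketch under the assumption that "doom or pets" occurs only via weak ASI,
% so the two stated conditionals chain by multiplication.
\[
P(\text{doom} \lor \text{pets} \mid \text{AGI})
  \;\ge\; P(\text{weak ASI} \mid \text{AGI}) \cdot P(\text{doom} \lor \text{pets} \mid \text{weak ASI})
  \;\ge\; 0.99 \times 0.95 \approx 0.94
\]
```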

If we build multipolar weak ASIs that are in economic or military competition, on the other hand, we're almost certainly fucked. Benevolence requires economic surplus, and if we have weak ASIs struggling against each other, they may not have the leisure to keep around habitat for their pet humans.

So, yeah, I'm on Team "No AGI", because I believe that we can't actually control an ASI in the medium term, and because even if we could, we couldn't "align" the humans giving it orders.
