Discussion about this post

Victualis:

If EY has ended up in a place of despair, then following in those footsteps doesn't seem useful. It seems more productive to look for a different path through the facts, one that lets you see something new, or take a useful stance that isn't apparent from replaying the same movie EY is projecting. Many people I know in AI research (not alignment, but the IJCAI/AAAI/ICLR/ICML/NeurIPS communities) are really worried about the massive social upheaval their research may lead to, with few mitigations available, but not especially worried about EY's particular obsessions. Perhaps they are all deluded (and some in AI research don't even want to engage with any thoughts about the consequences of their work), but to me this shows that other paths, and other vistas, are possible.

Joshua:

I think this is a really good addition to Eliezer's post. A good plan-to-plan seems to be to have a lot of discussion like this going on at moments like this one, when AI is all over the news for being much more impressive than many people expected. I think a lot of people are suddenly much more open to taking AGI ruin seriously right after they see what PaLM can do, and that should be capitalized on.

Eliezer's post is not aimed at a super broad audience, but that need not be bad. It can be good to have a rough draft of "The Post" up, one that can be polished through responses like this. Maybe someone else will write a response to this response that polishes the idea even further, until eventually there is a version of "The Post" fit for mass consumption.
