Discussion about this post

Sarah Constantin:
Many people I respect, including you explicitly in this post, are in the camp of "wants to prevent AI x-risk as a top priority, but doesn't want to take a hammer to every economically productive present-day AI application," and... I still think I'm missing something.

Why wouldn't "shut down all AI, and thus necessarily a big chunk of the software industry" be worth it, given the belief that x-risk is a major concern? And wouldn't that be far easier politically than narrow technocratic regulations that preserve the AI industry? In other words, why NOT the terrible bills? I can understand "no terrible bills no matter what, on principle; if we die we die," and I can understand "no terrible bills because they're not worth the cost, given that we're highly unlikely to all die." But a lot of people seem to believe that terrible bills would be *ineffective* against AGI, and I don't understand that one.

[insert here] delenda est:

I've thought carefully about the contents of this and your last several posts, and I'm afraid the conclusion is clear: we need a moratorium on human thinking until we can work out how to get humans to an even halfway respectable median score.

These things are just riddled with conspicuous failure modes, and they fail in them even when given explicit knowledge of the failure mode!

It's time to call time on the human bubble.

30 more comments...