Discussion about this post

Jonathan Weil:

There is something deeply weird, to me, about an organisation working tirelessly to elicit capabilities that it then vows to delete. Plus, given the G in the stated goal of AGI, it just seems... odd? ... to think you can make something with general capabilities that enable superhuman persuasion, then train it never to use them, at all, ever. As a non-technical, recently interested bystander to it all, I can’t help wondering why not just go for something (or a set of somethings) much narrower, along the lines of AlphaFold, that could reap enormous benefits without the apparently obvious insane risks (not to mention the queasy ethics of performing selective lobotomies on an AGI worthy of the name). What is the huge differential upside of General over Narrow?

David Kasten:

I agree with functionally all of this, and would like once again to put into the world the idea that a meaningful "carrot" governments can offer frontier labs is: "we will literally send our world experts on how to build airgapped systems to help you airgap your systems." The US government should basically have an unlimited travel budget for its SCIF experts to go to SF for the next decade.

