I want to add that PauseAI has also been doing educational and demonstrative workshops on AI. So increasingly I feel that rather than just yelling the message, it helps to show the real risks to people and policymakers so they can make the best decisions.
Pausing is, I believe, the best strategy. Certainly not everyone will agree, but I want everyone at least to be aware of what is happening and to make decisions based on reality, as opposed to "lol cant count r in strawberry."
I second the recommendation of Orthogonal. One of my slightly-longer-term career goals is to someday work for them, or at least help them out more. I can vouch for the people running the org, also.
"I feel zero pressure to make my work legible […]" For a writer this is not ideal. (Tongue in cheek of course; thanks a lot for what you write!)
Nice. I mean, I try very hard to make it legible... just not to them, except insofar as they are also readers!
Podcast episode for this post, less multi-voiced for this one:
https://open.substack.com/pub/dwatvpodcast/p/the-big-nonprofits-post
BTW Zvi, Encode Justice is indeed concerned with AI x-risk. So you might be pleased with that.