16 Comments

"I feel zero pressure to make my work legible […]" For a writer this is not ideal. (Tongue in cheek of course; thanks a lot for what you write!)

Nice. I mean, I try very hard to make it legible... just not to them, except insofar as they are also readers!

I want to add that PauseAI has also been doing educational and demonstrative workshops on AI. So increasingly I feel that rather than just yelling the message, it helps to show the real risks to people and policymakers so they can make the best decisions.

Pausing is, I believe, the best strategy. Certainly not everyone will agree, but I want everyone at least to be aware of what is happening and to make decisions based on reality, as opposed to "lol cant count r in strawberry."

Podcast episode for this post, less multi-voiced this time:

https://open.substack.com/pub/dwatvpodcast/p/the-big-nonprofits-post

I second the recommendation of Orthogonal. One of my slightly-longer-term career goals is to someday work for them, or at least help them out more. I can vouch for the people running the org, also.

On the other hand, I un-second it.

I used to follow their work closely, but lost interest after I realized they were extremely confused about basic math (not knowing what expected value was, for example), which cast a shadow over all their math-sounding work.

I was interested because I couldn't understand exactly what they did, which made it sound like I was missing critical insights into agent foundations, but in retrospect I don't think I learned much from reading their work.

(It has been more than a year since I followed Orthogonal; maybe they have changed a lot since then.)

Orthogonal has changed a lot since then, for the better. I had varying amounts of interaction with them; TL;DR, most or all of their idiosyncrasies are of the "shorthand notes, Lisp macros, and custom configs" type, which makes it harder for others, at least at the current stage, to get up to speed quickly and precisely.

BTW Zvi, Encode Justice is indeed concerned with AI x-risk, so you might be pleased with that.

Charity on this scale is outside my budget, but it was useful as a who's-who glossary of Players In The Scene. There are some familiar names from the recurring posts where I now know some of what they actually do besides tweet pith. It's easy to get hung up on the individual personalities (especially loud ones like sama) and forget they're mostly all part of broader organizations with bigger aims.

I’m pretty sure Roman Yampolskiy is a tenured professor—I should hope he’s not pursuing a PhD!!

This was really helpful!

However, I am primarily commenting with an FYI: Cillian Crosson is a man.

> CLTR at Founders Pledge

Hm... CLTR is a Founders Pledge recommendation, but beyond that, they are not related?

Regarding the following:

“Daniel walked away from OpenAI, and what looked to be most of his net worth, to preserve his right to speak up.

That led to us finally allowing others at OpenAI to speak up as well.”

Did Daniel or anyone else from OpenAI actually say anything of high importance after they were released from their NDAs? There was a lot of meta-level drama about how bad the NDAs were, but now that they have been gone for six months… what exactly did anyone reveal, other than the fact that such NDAs used to exist?

I'm interested in donating to The Scenario Project from Daniel Kokotajlo, but you don't link to any website or any way to donate, and I can't find them on the web. Can you let us know how to donate to this?

Oh yeah, I forgot to put the email contacts in. Fixed now.

I've been impressed with Catalyze: they seem to have navigated their 'start-up phase' wisely, e.g. they've done well building a strong core team and securing support from competent people, AFAICT.
