Recently on Twitter, in response to seeing a contest announcement asking for criticism of EA, I offered some criticism of that contest’s announcement.
That sparked a bunch of discussion about central concepts in Effective Altruism. Those discussions ended up including Dustin Moskovitz, who showed an excellent willingness to engage and make clear how his models worked. The whole thing seems valuable enough to preserve in a form that one can navigate, hence this post.
This compiles what I consider the most important and interesting parts of that discussion into post form, so it can be more easily seen and referenced, including in the medium-to-long term.
There are a lot of offshoots and threads involved, so I’m using some editorial discretion to organize and filter.
To create as even-handed and useful a resource as possible, I am intentionally not going to interject commentary into the conversation here beyond the bare minimum.
As usual, I use screenshots for most tweets to guard against potential future deletions or suspensions, with links to key points in the threads.
(As Kevin says, I did indeed mean should there.)
At this point there are two important threads that follow, and one additional reply of note.
Thread one, which got a bit tangled at the beginning but makes sense as one thread:
Thread two, which took place the next day and went in a different direction.
Link here to Ben’s post, GiveWell and the problem of partial funding.
Link to GiveWell blog post on giving now versus later.
Dustin’s “NO WE ARE FAILING” point seemed important so I highlighted it.
There was also a reply from Eliezer.
And this on pandemics in particular.
Sarah asked about the general failure to convince Dustin’s friends.
These two notes branch off of Ben’s comment that “covers-all-of-EA” didn’t make sense.
Ben also disagreed with the math that there was lots of opportunity, linking to his post A Drowning Child is Hard to Find.
This thread responds to Dustin’s claim, further up the main thread, that you need to know details about the upgrade to the laptop. I found it worthwhile but did not include it directly for reasons of length.
This came in response to Dustin’s challenge on whether info was 10x better.
After the main part of thread two, there was a different discussion about pressures perhaps being placed on students to be performative, which I found interesting but am not including for length.
This response to the original Tweet is worth noting as well.
Again, thanks to everyone involved and sorry if I missed your contribution.
Quick note of gratitude: thanks for taking the time to highlight the Twitter discussion in a permanent and discoverable way! I generally stay off Twitter to avoid unbounded time sinks, but it comes at the cost of missing cool exchanges like these.
Forget it Zvi, it's <strike>Chinatown</strike> Man-of-System town. The man of system will never understand local knowledge, and just thinks it means "can't observe as much" because local is smaller geographically than global is. You can beat them over the head with the fact that it means, say, a hot dog vendor has an intuitive grasp of what traffic on their street looks like and is better at spotting bombers than any rules the man of system can devise (look up Duane Jackson's story, for reference). Deviant Ollam's first, second, and third bit of security advice at home is "get to know your neighbors".
I'm 100% on board with giving based on your local knowledge, with a little bit of seeking out donations that aren't tax-incentivized. One of my hobby horses is giving a car to folks who would lose their jobs without transportation - not wildly expensive cars, just cars that ran well but would only trade in for <$3,000. This is basically the white trash version of giving a laptop to a grad student.
Holding an open application process (what some people might call an All-Pay Auction...) to give money to a grad student is the Man of System version, and, uh, yeah, do the math. It's a negative-EV proposition.