11 Comments

> Ever since I read "More Dakka" (which I'm seriously considering having tattooed on my arm along with "Be Here, Now") I've been on the lookout for a concentrated, no-filler collection of your principles for living a happy, successful life. Does such a collection exist? Just the principles, minimal explanation?

Hmmm – maybe I've been reading Zvi for too long, or we've been 'swimming in the same pond' together for too long, but I have a few thoughts:

1. Just take all of the titles of Zvi's blog posts (here and then on his previous blog, and maybe his LessWrong posts too) and throw away the ones that aren't 'principles'.

2. "Living a happy, successful life" is great, but it's not _all_ of the Good.

3. "Living a happy, successful life" is really easy – in theory – and thus a list of principles would be rather short? _In practice_ it is, of course, arbitrarily complicated/difficult/impossible.

The (theoretical) principles for "living a happy, successful life":

1. Know thyself, i.e. what 'happy' and 'success' are for you.

2. Achieve happiness and success.

3. _Maintain_ happiness and success.

For a more detailed list of principles, try [1] from my first list above.


I haven't compiled such a list but it's been vaguely on my list of long-term TODOs for a while.


I look forward to it!

But I'd also be a little surprised if there were much that was new – to me, i.e. that I haven't picked up from reading/following you for as long as I have.

But – of course – I am totally excited to _test_ my own model/prediction about this! :)


> I don’t know how to write at that level in a way that I feel permission to do so, or the expectation of being able to do so effectively.

If it helps (and it probably doesn't), I give you permission to "write at that level" and share it with me privately :)

More generally, I think I'm missing some critical intuitions you seem to have that lead you to (what seems to me like) a very 'cynical' view of EA generally and this contest specifically.

I am, almost certainly, at much more of a 'remove' from EA than you perhaps. (I just joined the EA forum site maybe a month ago and, beyond that, my exposure to EA is mostly via rationality-adjacent blogs and LessWrong).

The idea of 'expertise', particularly for this contest, but also EA in general, doesn't make a lot of sense to me. What kind of claims could people make that are not weak evidence, at best, of the relevant 'cutting of the enemy'? IME, it sure seems like people just list 'credentials', e.g. job titles.

> This has been something in-between. It *gestures towards* fundamental critiques, but doesn’t focus on actually making them or justifying them. Making the actual case properly is hard and time intensive, and signals are strong it is unwelcome and would not be rewarded.

I can offer some money for a prize, to be paid to you, for making "the actual case properly", and all the LessWrong upvotes I have to give (or the one Substack heart I'm allowed) :)

I think it would be *very* useful to have "the actual case properly" made – somewhere – even if it would be unwelcome and not rewarded – by the EAs. I think there's quite a few of us that are *sympathetic* to EA but not 'EAs ourselves' (by self-identification) and I'd bet that that's because of many of the same reasons you aren't either.

Personally, I'm not happy with the "altruism" aspect, but I *am* very interested in 'effectively doing Good'. I *hope* EA is, or will be, the latter, eventually, but I'd like to know more about how to avoid the mistakes EA itself is making at it.


Ehhhhhh. You need to ask the first question.

Why does EA need to exist at all? What's it for?

1. What purpose does it serve? Charity has always existed. I don't need anyone, least of all these weirdos, to tell me how to give money away. EA is a political movement consisting of elites who attempt to convince other elites to adopt their preferred values and implement their policy preferences. It is not a popular movement and does not even try to be.

By definition altruism is a personal choice. "Effective" how? By what criteria? That's subjective!

EA is a way to push political causes under the guise of being scientific and rational. I've seen this racket before. At its best, it sounds like economics, but what's concerning is that it often pushes ideas to solve problems which are really policy preferences based on (fringe) values. Yes, you can come up with an efficient way to solve a problem, but if I don't think it's a problem (or the solution is a bigger problem) I won't ever agree.

The science of economics already exists and should inform everything EA claims it's about. When I first encountered EA, I thought it would sound like economics (which I studied). When it didn't, I began to see the truth. Most of them aren't economists, and often don't seem to understand basic ideas like supply and demand or (especially!) gains from trade. They are making essentially economic arguments but ignore centuries of economic theory and research. Sure, lots of EA people are smart and experts in their fields, but they are not experts in the field that matters!

2. I get the vibe that EA would really, really like to be able to do things with other people's money. They're so virtuous, so smart, so altruistic that you can trust them to do what's right for all for the greater good. I'll emphasize "so smart." Read EA writing and tell me that it's intended for a general audience rather than to show how smart the writer is. There are some exceptions, like Scott Alexander and the host of this blog, but geeeeeeeez what a lot of IQ-based dick measuring at EA sites. Read the comments at Less Wrong. If I cared to give advice, I'd say, "If your goal is to convince enough people to change policy, you need to speak to the general public in a way that doesn't make you sound like a bunch of smug douchebags." Certain EA proponents can do this, and experts in many fields succeed in communicating clearly with a general audience. The fact that EA cannot seem to do this consistently indicates it's not interested in doing so. Fine, we need civil society and special interest groups, but if you are trying to change policy in a democracy it might be a good idea to be able to talk to voters. At the very least it's good practice in communication skills. I find that the most talented people can communicate clearly with anyone in plain English.

3. I do not accept that any solution is obvious. In practice, everything is a trade-off. EA is bad at this.

Example: You want to put in a big affordable housing project in a nice neighborhood, huh? How virtuous. What about the consequences for everyone else? Here's a news flash: the neighborhood is nice because there are no affordable housing projects. Put one in, people move away. Have done it. Would do again. I paid a lot to get away.

Why value affordable housing over the preferences of people who already live in a nice neighborhood (and have worked hard and paid taxes to do so)? There's a bias here. It's not self-evident and not everyone wins. If you want to advocate for a group, fine, but don't act like any opposition is illegitimate. In a democracy, voters do not have to justify their preferences. That's why it's a democracy and not something else.

That's a single example from a complicated problem with no solution that will satisfy everyone. If a problem hasn't been solved already, it's because it's not easy. I'm all for cutting red tape and firing everyone at the FDA, burning down their building and sowing the ground with salt (figuratively! The Horseshoe Crab is against violence against anyone at any time. Salt is fine, though, since I live in the ocean.) However, the FDA is terrible because of many decisions made by elected officials solving perceived problems. Solving all those problems caused the problem!

Be careful about what problems you solve. A lot of EA problems aren't problems, will solve themselves if left alone, or aren't something they, or anyone else, can solve. Not every investment is a good one.

4. The sad truth is that most of the societal problems we have in the US are not problems government or anyone else can solve for other people. The best thing for me to do for the world is to make sure I'm the best person I can be for the people I know personally. I am best equipped to understand them and their problems. I'm not that important, nor am I special. I may be smart, but so what? I may have money, but that doesn't mean I am better equipped to solve problems in fields in which I have no expert knowledge.

Feel free to dismiss all of this, but EA puts off people who by temperament and training should find it appealing. That's my critique.

Lastly, read Julian Simon. He should have had a larger influence than he did.


Zvi, wonderful post. Re: Utilitarianism. Late in life (63) I've come to the conclusion that the purpose of life is grandkids. (Getting your genes into the future.) This goal narrows your focus considerably. And without being insulting, I wonder how many of the EA people have children?

On a totally different note: I just discovered that you have posts on both WordPress and Substack (and LessWrong). I think some of my comment confusion is that I commented on one forum and then read you on another without realizing they are different. Should I think of Substack as your primary platform now?

Thanks,


Grandchildren are underrated by those without grandchildren, I have no doubt. Reproduction rate of EAs as far as I can tell is low but also population is largely very young.

I consider Substack the primary copy for most posts, WP secondary, LW tertiary.

For Rationalist stuff, sometimes I will consider LW to be primary.

For EA stuff like this post, I consider EA Forum copy (which otherwise doesn't exist) to be primary.


I have no grandkids, having started a family late in my life (~40). I expect my kids, if they have kids, to also do so later in life. One way to reduce world population (besides fewer kids) is more time between generations. (But I'm probably just 'rationalizing' my late start, and I do somewhat regret not having started earlier.)

How to get the most 'good' human genes for the longest time into the future seems like a decent EA goal?


That last question would be one of the things you can't (in general) say on the internet, so I would do my best not to disagree with it on the internet either. There is some talk of 'clone von Neumann for help with the AI Alignment problem' and it seems like a relatively promising angle.

We wait too long to have kids, in general - biology doesn't care about your cultural timeline.


This comment is only about the beginning, as I haven't read the whole thing.

> within-paradigm critiques are welcomed [...] but deeper critiques are unwelcome

It sounds like you're saying within-paradigm critiques are shallow and outside-paradigm critiques are deep. I think outside-paradigm critiques (especially setting aside the contest) are usually shallow and indeed mistakes, and you'd make your case better if you gave an example of a deep critique that was not a mistake and was nonetheless generally mistreated by EAs.

> Here is my attempt to summarize the framework

Assuming "framework" refers to the same thing as "paradigm" in the previous paragraph and "set of assumptions" before that, I think your list of points is both far too specific about the what "the EA paradigm" is, and at the same time incomplete.

Now, maybe you are *just* criticizing the contest, whose rules I haven't read, but it sounds like you're trying to characterize what it means to be an EA *in general*.

I consider myself a devoted EA, but I disagree that I adhere to the following points that you are apparently saying I must adhere to in order to be within-paradigm.

> Quantification. Emphasis on that which can be seen and measured.

I think it's widely acknowledged in EA that some important things are hard to measure, and that it can be okay to work on such things even if proxy metrics are limited.

> Bureaucracy. Distribution of funds via organizational grants and applications.

This is purely an instrumental issue, and it is a conclusion or default mode, not a core belief or assumption. As such it is open to question.

> Intentionality. You should to plan your life around the impact it will have.

A few days ago I heard that the average EA gives ... okay, I forget the exact number, I think it's 3% of their income. I guess this point is technically correct that you "should" ideally plan your life around the impact you will have, but evidently the average EA is not doing the "Giving What We Can" thing, and I don't think EAs think everybody should be an EA (heck, if everyone were an EA, it would totally mess up our paradigm... but in a good way). I give over 10%, but I'm cheating a bit by donating to World Vision and Ukraine charities despite not knowing their effectiveness. The latter's effectiveness is probably depressingly low, but giving makes me feel better. And like, I think EAs kinda recognize the value of some cheating here and there in terms of e.g. mental health (purchasing warm fuzzies and utilons separately, as Yudkowsky said).

> Altruism. The best way to do good yourself is to act selflessly to do good.

Err... I'm skeptical of this.

> Obligation. We owe the future quite a lot, arguably everything.

I wouldn't frame it as "we owe the future quite a lot", nor would I argue for "everything", but I don't feel like it's worth quibbling about, since I tend to approve of the actions of people who do think that way.

> Coordination. Working together is more effective than cultivating competition.

Er, the purpose of this essay is to criticize _a competition_. Working together is valuable; competition is valuable. One or the other can be more effective in a given context.

> Selflessness. You shouldn’t value yourself, locals or family more than others.

I mean, maybe I agree in some aspirational way, but EAs are smart enough to know that humans (including EAs) simply don't work that way. EAs tend to be cosmopolitan, but they don't tend to deny the reality that we do in fact value ourselves and our families more than others. I do think that it is possible to value strangers in a foreign country as much as strangers in one's own city or country, and that it's good to try to think in that manner. However, I don't think it's important to actually achieve that aspiration — like, if an EA values strangers in their city 50% more than foreign strangers, I feel like "meh, good enough."

> Self-Recommending. Belief in the movement and methods themselves.

I believe, mostly empirically or based on a *malleable* mental model, in various specific things. It's incorrect to characterize this as an "assumption". Conclusions are not assumptions. And since we are talking about a conclusion based on reason, of course any given method is open to question. Likewise one could question whether there should be "a movement", e.g. at EAGx yesterday someone wondered whether EA would split into two movements. I don't think it should, but your framing of this as something not open to question is wrong.

I am reminded here that you left out something crucial from the list: empiricism. You do have a word that rhymes:

> Evangelicalism. Belief that it is good to convert others and add resources to EA.

In the long run, yes, but (1) this is a conclusion, not an assumption, so of course it is open to question and (2) I am very concerned about growing the movement too quickly — and this concern is certainly not original to me.

> Reputation. EA should optimize largely for EA’s reputation.

Huh? EA should optimize largely for doing good in the world. Reputation is an instrumental tool toward that end; it's easier to do good with a good reputation, therefore we want good reputations. If you want to write an essay arguing that EAs shouldn't try to safeguard the reputation of themselves or of the EA movement, you can, but EAs will be rightly skeptical that a deteriorating reputation is fine.

> Modesty. Non-neglected topics can be safely ignored, often consensus trusted.

A few years back I argued against this in the context of climate change (on an essay arguing we don't need climate change interventions because there's so much money going to climate change already) and, lo and behold, yesterday I saw a talk at EAGx from an EA org 100% devoted to climate change.

> Judgment. Not living up to this list is morally bad. Also sort of like murder.

Err... I've never thought of it that way before. It's a defensible position, but not one you need to hold to be an EA.

> Veganism. If you are not vegan many EAs treat you as non-serious (or even evil).

Okay, well, I'm not a vegetarian, let alone a vegan. Ideally I'd eat less meat but mostly I think it's fine as systemic change is more important than personal purchasing decisions. AFAIK I am not treated as non-serious or evil.

> Totalization. Things outside the framework are considered to have no value.

I did just say your list is incomplete, so.

Also I would quibble a bit about the assumption that outside-paradigm critiques are "deep" in relation to, most notably, Utilitarianism. Scott Alexander once wrote these two big FAQs, the non-libertarian FAQ and the consequentialism FAQ. I hear people talk about things like the "repugnant conclusion" all the time. From what I've seen, lots of people rush to criticize consequentialism but they don't do any work to put forth an alternative that is "better" in the sense of not having any possible "repugnant conclusions".

Like, okay, you have an intuition against utilitarianism, fine. But what's your alternative? It's like complaining in 1900 that the entire physics profession is rotten because of the result of the Michelson-Morley experiment — okay, yeah, there is an anomaly in physics, but I'm going to keep on using the same physics formulas that work perfectly well in practice, unless you can offer me a Theory of Relativity that does a better job explaining reality. Same thing here. People often criticize utilitarianism but never, so far as I have seen, offer an alternative that makes as much sense as utilitarianism. And it's a tall order. Not only do you have to do better intellectually by finding a better paradigm than consequentialist utilitarianism, but you also have to communicate very well in order to show that your paradigm is better. People like Scott Alexander and Yudkowsky have produced large volumes of well-communicated, popular essays and FAQs about, or based on, utilitarianism; this is your competition.

Expecting longtime EAs to be warm to critiques of utilitarianism seems like expecting climate scientists to be warm to "the sun causes global warming", "it's all just natural internal variability", "homogenization is fraud" or "there's been no warming since 1998". Consequentialist utilitarianism is a meaty, rich idea with a lot of thoughtful discourse around it. You can't expect to impress EAs by crashing in with a critique like the (admittedly few) critiques I've seen - critiques that either misunderstand what they are arguing against, or don't propose an alternative to utilitarianism, or both. Even if somebody proposes an alternative, that's not enough — lots of additional analysis is required to answer the objections that would inevitably arise. This is really hard to do well. Probably if there is a better paradigm, it already exists but it is being communicated poorly (example: Crary's remarkably vacuous arguments against EA in favor of virtue ethics https://dpiepgrass.medium.com/crary-avoids-explaining-her-arguments-against-effective-altruism-3e39b43bd), so perhaps the best way to approach creating a critique of it is to study all the paradigms philosophy has to offer, understand them thoroughly, and then communicate the best paradigm better than philosophers do. Just don't be surprised if the best paradigm turns out to be primarily utilitarian.


And:

> And importantly, there are also things one is not socially allowed to question or consider, not in EA in particular but fully broadly. Some key considerations are things that cannot be said on the internet, and some general assumptions that cannot be questioned are importantly wrong but cannot be questioned. This is a very hard problem but is especially worrisome when calculations and legibility are required, as this directly clashes with there being norms against making certain things legible.

Huh. I don't know what this might refer to.

> That many disagreements strongly implies the list is not doing a good job cutting reality at its joints, and a shorter list is possible

I don't get what you are doing. You wrote the list, right? Why not just publish a shorter list in the first place? Or warn the reader in advance that, like, it's not well-done?
