
“There was huge pressure exerted on holdouts to fall in line, and not so subtle warnings of what would happen to their positions and jobs if they did not sign and Altman did return.”

I was one of the ~5% who didn’t sign. I did not perceive huge pressure to sign nor did I face any repercussions. A couple of people messaged me to ask if I had seen the doc and was going to sign (quite reasonable given the lack of company-wide comms at the time). I said I agreed with the letter in spirit but not every particular point, so didn’t want to sign. My answer was accepted without pressure or judgment. So based on my actual experience, I would dispute your narrative of huge pressure and warnings. I really don’t think it’s true at all.

author

I will update based on this new information to note your experience.


Thanks!

>Ted also claims that retaliation did not take place, but overall it sure looks to me like predictions of retaliation proved accurate.

Incidentally, where did I claim that retaliation did not take place?

I’d be surprised if I claimed this. I can only share my experiences and my perceptions. I cannot speak to experiences or perceptions I’m not privy to.


It's refreshing to see someone on the Safety-ist side of the AI risk spectrum - to avoid using the "doomer" slur - acknowledge that we should accept some level of risk given the potential upside. Kudos to Zvi.

Is 1% risk appropriate? 10%? 99%? Impossible to say, because those numbers are entirely fictitious. I think EY is obviously correct that you can't nitpick individual doom scenarios and Zvi is obviously correct that this is a potentially tremendously out of distribution event.

Those sauces taste equally good served over goose and gander. You cannot say that something is so novel that we have no way to predict its effects, then assume they will be negative based off comparisons to previous incidents in human life. You cannot point out that AI can go bad in ways we cannot predict, then poke holes in reasons why things could go well and declare yourself 99% certain of doom.

It goes against rationalist mores, but people should stop offering percentages. They're useful for calibrating where a conversation partner sits, sure, but so are "big risk", "little risk", etc. Those convey just as much information without the pretense of precision that even offering a range like 10-25% engages in.

"Why AI now?"

Two reasons:

1) In the long run, we are all dead. AI developed after my lifetime does me no good.*

2) Imagine a world where events fortuitously lined up so that the Industrial and Information Revolutions happened basically simultaneously with the Agricultural Revolution.** We could skip the thousands of years of slavery, warfare over land, etc. that the earlier agricultural systems were incentivized to engage in.

There is "big risk" that we could end up in a world where AI is good enough to replace ~75% of labor. This will naturally select for a world where those ~75% of people must subsist off the largess of the 25% fortunate enough to have the "genetic gifts" (read: guanxi) to remain employed. Many, many ways this goes terribly.

Is this worth risking humanity's existence over? That's a personal decision. I can't help but note, though, that the people who would have us slow down are the ones most threatened by their most unique asset - intelligence - becoming obsolete. These aren't the people who work the vast majority of jobs that are dull, soul-sucking, or otherwise cause a grey existence. They want you to do it. They'll be busy going to fun conferences and writing long arguments about the things they are predisposed to find fascinating.

Most people are already slaves to an uncaring intelligence. AI certainly presents new and exciting risks. It's also the only thing offering real upside. Place your bets.

* Thanks to cryonics there's still that .0001% chance I'll be able to benefit. If you aren't signed up for cryonics - do it today. The more people who do, the better the chance that someone / something will find a reanimation process I can benefit from.

** If your monocle is popping with anger over the low probability, remember that superintelligence is similarly out of distribution.


If you attach value only to (1) the lives and well-being of current humans, and not to (for example) the descendants of current humans, then that would definitely change the math on AI. With that value set, the P(Doom) is already very close to 100%, since by default we expect all of those current humans to die in the next 120 years or so.

I think many people would disagree with you on this value, however; I value the lives of future humans and the continuity of my species and culture, as well as the existence of intelligent life in general. Personal immortality would be nice, but not necessary for the majority of the things I value in the world to continue.


In no particular order:

1) I am considering future humans. I would have wanted a hypothetical Sumerian to make choices, even riskier ones, that led out of the local minimum of the Agricultural Revolution. The earlier we work on developing AI, the more the benefits will accrue by start_time + x. I'm acting consistently with what I would have wanted from a past version of myself and my best projection of what a future self would desire.

2) Future humans are current humans but in the future. Their P(Doom) is individually also 100%. The Safety-ist approach assumes that Homo Sapiens will reign in power and glory forever and ever but this is the least likely scenario: the P(Doom) for humanity as a whole - either through extinction or evolution into a new form - is close enough to 100% to be indistinguishable. When I hear someone's estimate of P(Doom) from Superintelligent AI, even if it's 99.5%, all I hear is that risk is decreasing.

If you want a future with intelligent life that carries on some or all of humanity's values, your safest bet is AI.

3) Future humans do not exist and can be completely discounted. "Resources exist to be consumed. And consumed they will be, if not by this generation then by some future. By what right does this forgotten future seek to deny us our birthright? None I say! Let us take what is ours, chew and eat our fill."


When people talk about P(Doom) from ASI, I believe they're talking about in our lifetimes, not at some future point. I'm not against the advancement of science, *or* the development of AI - I just think we should do so slowly enough that we can be safe, understand the implications, and society can adapt as necessary.

I agree that individual future humans can be discounted; however, completely discounting future humanity is a dramatic difference between your apparent values and mine. I have faith in future humanity's ability to solve most of the problems you describe, but only if we allow them to exist to do so.


"I just think we should do so slowly enough that we can be safe, understand the implications, and society can adapt as necessary."

Understand this isn't meant as a gotcha. I don't expect you to have a thorough, unimpeachable answer. But what exactly does that mean compared to what has happened and is happening? What developments that have already been made would you prevent and what is anyone doing that you would stop?

No offense, but it reads as "I support good things, not bad things". I'm obviously very pro-AI but I also would not support further developing AI if it were running around strangling orphans in the street. The safety concerns, however, are purely hypothetical and even their most ardent proponents acknowledge that we're talking about completely novel and unpredictable outcomes.

We have never understood the implications before developing any technology. Society will not adapt until there is something to adapt to. And this attitude does nothing but fill the invisible graveyard while we postpone potentially universal prosperity until it meets some nebulous concept of "safety" that will never be reached.

"I have faith in future humanity's ability to solve most of the problems you describe, but only if we allow them to exist to do so."

Humanity becoming extinct or unrecognizable to our current selves isn't a problem to be solved. It is an inevitability.


For a very concrete example: slowing down AI compute scaling until we have a firm understanding of how the model weights lead to the AI's behavior. What is currently happening is a mostly unrestricted race to get the best agents out there asap on as much compute and reach as we can give them. There's a large delta between those two worlds. And unlike safety, observability is not so far out of paradigm that we can't even begin figuring it out.


Would you have stopped GPT-3 without such an understanding? GPT-4? There’s plenty of utility in those models without such an understanding.

GPT-5?

What if it never comes to your satisfaction? Jam tomorrow I suppose for everyone toiling in automatable jobs and those paying - or, more accurately, not being able to pay - for goods / services that could be multiple orders of magnitude cheaper.


Great analysis ... lots to unpack here. However, the panic argument fails. We're way too ignorant. There's SO much more to know and find out. We only figured out how the sun works barely 100 years ago (quantum tunneling enabling protons to fuse into helium). We didn't actually know what powered the sun until then. Up to the point of QM and nuclear physics, people thought the sun was powered first by burning wood, then coal, then oil. And this after many thousands of years and billions of people wondering about the sun (after they stopped ascribing the sun's power to gods). We didn't understand photosynthesis until 1945. We're essentially clueless about the dark energy and dark matter that make up most of the universe. By the way, who's talking about a random black hole coming within range to destabilize the solar system? It doesn't need to get too close. Maybe you want ASI to help us figure our way out of that challenge?

Chaitin, in "The Limits of Mathematics":

"The normal view of mathematics is that if something is true, it's true for a reason, right? In mathematics the reason that something is true is called a proof. And the job of the mathematician is to find proofs. So normally you think if something is true it’s true for a reason. Well, what Ω shows you, what I’ve discovered, is that some mathematical facts are true for no reason! They’re true by accident! And consequently they forever escape the power of mathematical reasoning."

There's a lot more to be said about this. I will be doing so later. Right now, discussion is good, panic is not!


Agreed, panic is not good. But caution is warranted, perhaps even extreme caution. The current status quo of closing our eyes and running full speed ahead is madness.


First off, I get that this is a long comment about a small point that is really not what this post is mainly about. But I've been thinking a lot about Effective Altruism in the last week (sparked by a post from Yascha Mounk in Persuasion, which triggered a couple of interesting responses from Scott Alexander and Andrew Doris), and the "Cost Benefit Analysis" section of this post, where Zvi talks about the New York subway's decision to temporarily halt the F line while searching for a lost cat, helped crystallize some of my concerns about EA and rationalism.

I say that as someone who has been a strong supporter of GiveWell for almost 15 years and who found Scott and Andrew's defenses of EA more persuasive than Yascha's critique. And also as someone who thinks that America has gone a little crazy about treating pets as people.

Having said that, I don't think the decision to temporarily shut down the F line to help find the cat was nearly as ridiculous as Zvi apparently believes.

First, I think Zvi's estimates of the costs of this decision are bogus. He gets there by calculating the cost of the "lost wages" that occurred because of the F train's delay. But I'm guessing that the vast majority of people didn't lose any wages at all or even much productivity. I'd estimate that they just stayed at work a little later or worked in a little bit more of a focused way, someone else at their workplace worked a little harder to cover for them, or one of the many other things that happen when people are unexpectedly late occurred.

One meta point here is that I believe truly estimating the costs of trade-offs is a lot more difficult than Zvi implies, and that many times rationalists are fooling themselves when they think they can do so accurately. A second meta point is that the idea that there is a set amount of productive time in the world, and that anything that cuts into that time has a cost, is not necessarily true. While there are instances when I think that accurately describes the world, there are other instances where I think productive time is more like a balloon, where if you squeeze it in one place, it expands into others.

Second, I think something else is going on in an instance like this when a group of people is asked to do something to support someone who needs help. I think providing help in this instance is not just about providing whatever help these folks can provide. It's also about expressing a sense of solidarity and community with the folks in need that positively and meaningfully affects the person asking for help, the people offering help, and society more generally. And in reverse, I think refusing these types of requests for help also coarsens everyone affected or involved. And I think both of those dynamics matter in a way that simple rationalism ignores. To put it another way, I think it matters when the Grinch's heart becomes bigger or smaller.


I like your last point there. To put rational/utilitarian terms on it, the cat's life has more value when so many people are affected by the action to save it, and the cat's death would have an additional cost for each person who had to participate in it.


That's one way to think about it. That a utilitarian with full information (including how people would be personally affected) would be able to make the right decision. As someone who tends to generally believe utilitarianism is the right way to make political decisions, I've even made arguments like that from time to time.

But what I think a critic of utilitarianism would say back to that argument, and I think they're probably right, is that the case for utilitarianism becomes tautological at that point. The reality, at least with questions like how do we decide whether to delay the train, is that we never have "full information" in this way and a philosophy that doesn't provide you with guidance without that information is of limited use.

Let me give a different example that I think a lot of people may relate to: being asked for money by homeless folks when you're walking down a street.

Now I'm sure that the right answer from a pure utilitarian perspective is not to give the money. There's a strong likelihood that it will just be used to support someone's addiction, and I'm confident it would be used better if I gave to a shelter or other types of orgs supporting the homeless. Except that:

1. I likely won't end up giving that $1-5 to a homeless org. And

2. I feel like a little piece of my humanity dies when I refuse someone in obvious extremis like that. I think about how I would feel if the situation were reversed, and I was the one desperate for help. And my guess is that being refused constantly is deadening for them in some ways as well. That's the dynamic that I think is impossible for utilitarianism to calculate.

So I end up giving sometimes and not giving other times depending on my mood that day, what's in my wallet, my perception of their state, how they approach me, and maybe other factors that I'm not even aware of.

Now, I don't really defend that crazily inconsistent approach, but I'm not sure there is a better answer either. The one thing I'm confident of is that the world would be better off if we didn't have people begging for money on the streets and that we would be better off as a society by preventing it from happening, BOTH by preventing people from getting into that state and by having norms/customs/laws discouraging panhandling.

Anyway, apologies Jonathan for a long answer to a question that you didn't really even ask. I guess I'm just thinking "aloud."


Yeah, one of the big weaknesses of strong (or even medium) utilitarianism is that it's almost impossible to confidently predict the actual expected value of even relatively mundane events like stopping the subway for a while. This seems under-appreciated.


Not directly part of this book, but Scott's contrarian take that rationalism is, in fact, mostly useless for quotidian marginal improvements continues to baffle me. What is the point of a tsuyoku naritai system that only works to cut Grand-World problems? (And should I be so confident of that ability, if the principles can't be honed mainichi mainichi on Small-World problems? The optimal amount of small-stuff-sweating is not zero!) Even for a fisher of men who's more of a Rationalist camp follower, I notice that I get a lot of utility from adapting the techniques...life success took a noticeable upswing after I tendered my resignation to The Village. Perhaps it's more correlational than causative - dropping a culture that defines one as a powerless victimized loser is maybe more load-bearing than its replacement - but it was clearly a +EV trade either way.

I feel similarly about accelerationist arguments for AI, that it's the only/most promising path left to reach a Good End instead of an inevitable Game Over for humanity*...very much feels like a Gentlemen, We Have The Technology situation, except the tech isn't AI, it's...well...what Matt Yglesias frequently calls the slow boring of hard boards. Dreadfully unsexy, anodyne work in grubby mind-killing realms like politics, the mundanest of mundane utility. So much of the Not Doing Of Things is largely voluntary own goals, shackles imposed by entirely-human choices. Yes, it's not trivial coordinating to break them - but it hardly requires a superintelligent silicon god, either. (In fact, rationalism teaches lots of useful tools for encouraging coordination!) Entirely possible we get there faster with AI...but invisible-graveyard is a Fully General (counter)Argument one can use to justify anything, with the right odds. The fact that low-stakes battles over existing mundane utility/harms from today's barely-dangerous models are going Not Great, Bob is not encouraging.

p(doom|fizzle) = 1%; p(doom|AGI) = starting at 5%, tap that ten year pause button anyway. (Contamination: I noticed a nontrivial anchoring pull to your/Leike's opening 10%, agree this doesn't make much sense on the merits, but feel reluctant to revise after noticing the flinch. Not actually smart enough to genuinely understand most of the technical or philosophical details, feels like putting far too much weight on arguments-from-authority. Don't Trust Over 30%.)

*and, again: why the inevitability, Mr. Anderson? Have we really checked the Trying At All box? Be honest! Lot of model uncertainty for thee, but not for me going around...


"This starts with Peter Singer, who is clear that he believes the virtuous man should be willing to lie their ass off."

I'm apparently not familiar enough with Singer to know what you're talking about - what claim of his are you summarizing this way?


I will represent that I have a reasonable amount of exposure to Singer's thought (96%+ percentile, plausibly 99%+ percentile) and that this is not an unreasonable gloss on Singer. [ED: mistakenly had this as "not an reasonable" before, my bad. I am saying it *is* a reasonable gloss, not an unreasonable one.]

Singer's fundamental view of ethics is purely consequentialist (utilitarianism is an attempt to reify consequentialism), so terminal goals and outcomes rather than instrumental processes are *always* the ethical lodestar. "The ends justify the means" is absolutely true.

In this view, honesty in communication is purely an instrumental rather than a terminal goal, and so should be employed iff it results in better outcomes. This may be contrasted with the oft-lampooned example of Kant suggesting that one shouldn't lie about the location of a victim to a murderer searching for them because dishonesty is per-se bad. To Singer, honesty has no intrinsic moral character -- the act itself doesn't matter, just the effect on suffering / happiness.

I can't find a citation ready to hand but I believe among the hypos he poses is whether one should keep an otherwise utility-negative promise to a deceased person. The utilitarian answer is: no, that's stupid, there's no beneficiary, to do so is to be a slave to convention (as is opposition to consensual brother-sister incest using multiple forms of birth control, which is a specific hypo from Singer used as an intuition pump to get people to grasp the distinction between utilitarian thought and deontological thought).

Singer is himself far along the act-utilitarian part of the act-utilitarian versus rule-utilitarian axis, but I would note that *in practice,* most rule-utilitarians would acknowledge that rule-utilitarianism is a heuristic concession to bounded rationality -- act-utilitarianism is obviously "true" utilitarianism but humans are bad at operationalizing it and are subject to self-interest, so they take the more practical but admittedly second-best approach.

One plausibly could attempt to make a consequentialist defense of honesty under a rule-utilitarian framework based on the second-order effects of trust creating benefit, but in an act utilitarian framework every single act of honesty would have to be evaluated on its own merits (the failure to do so being what makes Kant's example seem so silly) based on its individuated cost-benefits to a generalized system of trust versus the specific harms attributed to it.

In short, if lying produces a better outcome than honesty, Singer would say to lie.

While I don't think Singer would oppose taking second-order considerations into account as far as treating honesty as having independent value as a utility-creating norm that should not be broken with complete abandon, he would probably impose a fairly high standard of rigor in accepting such argumentation: his point of view is probably informed by the fact that that class of reasoning *in general* favors status-quo bias in a way that is also true of activities that are utilitarianly morally bankrupt like factory farming. As Zvi says, most people probably need to be more rather than less marginally utilitarian.


Caplan had a nice post about this called "Singer and the Noble Lie".

https://www.betonit.ai/p/singer-and-the-noble-lie

To save you time, Singer wrote a paper called "Secrecy in Consequentialism: A Defence of Esoteric Morality" with lines like

"Yet it does seem to be an implication of consequentialism that it is sometimes right to do in secret what it would not be right to do openly, or to advocate publicly. We defend Sidgwick on this issue, and show that accepting the possibility of esoteric morality makes it possible to explain why we should accept consequentialism, even while we may feel disapproval towards some of its implications."

The most straightforward reading of this passage is that Singer is saying: "Consequentialism implies that it's okay to lie under certain circumstances and we readily bite that bullet."


On the dystopian AI scenario, I'm surprised that people don't cite Accelerando, a 2005 novel by Charles Stross. Spoiler warning! The novel very straightforwardly goes down the autonomous AI path. The earlier, less advanced, more human-like AIs leave Earth alone and leave a gap in the Dyson sphere to preserve sunlight for the Earth. But within a couple of decades the AIs have gone through untold generations, and the newer AIs no longer really care. I forget the details, but they basically expel humans from Earth and start assimilating the entire solar system. The remaining post-humans need to head out across the galaxy to survive.

My takeaway is that this is not at all a novel or surprising scenario. If you work through the game theory, this is often where you end up. Maybe it's wrong and the future AIs won't pick up the $20 bill that is Earth's sunlight, matter, and human-friendly ecosystem. That would be wonderful, but I don't think we can just assume it'll happen like that.


I read a lot of your AI posts before I decided to comment, to make sure I'm not missing anything.

I think this AI problem resembles that posed by the invention of atomic and nuclear weapons. It's not the same, but it resembles it. Suddenly, people had these weapons that were far more powerful than any before and whose use could cause the worst cataclysm in human history. Many very smart people were convinced that, on a long enough time scale, they would be used, in large numbers, and the current civilization would end. Or, that humanity would end or life on Earth would end.

They may end up being correct, because the problem of the threat posed by nuclear weapons was never solved. We just live with it.

I understand the argument that pointing at some past problem that turned out not to be a problem doesn't mean any other problem doesn't exist. Where I think this is similar is that nukes are still a problem that is not solved. On a long enough time scale, someone will use a nuke and we'll find out what happens.

AI is the same thing: on a long enough time scale, someone is going to build a true thinking machine and we'll find out what happens.

With nuclear weapons, the problem was managed by tightly controlling who can use them and under what circumstances (which took a few terrifying near misses to figure out). The problem is only managed, it isn't solved. The current civilization, against the expectations of many of the best and brightest, hasn't obliterated itself. That doesn't mean it won't at some point in the future, but it hasn't yet. That's all that we can expect.

I suspect AI is going to go the same way. Some event will cause AI to be heavily regulated, at great cost, when it would have been easier earlier. I see the problem with letting something smarter than us off the leash for too long before this happens, but I have a realistic view of how humans make decisions.

Something that makes me shake my head is that while Silicon Valley in CA is the current locus of AI research, on a long timescale it doesn't really matter. The USA had an atomic monopoly for only 4 years. Now North Korea has atomic weapons. Yeah, it's important to do the hard thinking now, but the world is not run by one government and it would take a treaty regime to regulate AI worldwide. Given the power requirements, it shouldn't be that hard to tell who is doing it, but there would have to be enforcement. How? By whom?

It would be nice for CA to regulate AI, but China exists and I don't trust them (and neither should you.) On a long enough time scale, I wouldn't trust anyone. That's why this is like the problem of nuclear weapons.


“Some event will cause AI to be heavily regulated, at great cost, when it would have been easier earlier.”

Your succeeding paragraphs describe well why both halves of this sentence are likely incorrect. People in other U.S. states could do it, and China can and will. Given this reality, the likelihood that AI regulation that would successfully prevent existential risk can be properly crafted AND ENFORCED strikes me as exceedingly low.

I like your analogy to nuclear weapons. But keeping AI from spreading is unlikely to be possible unless the U.S. or NATO decides to bomb the energy infrastructure of a country.

All any AI regulation implemented will do is make it harder for the (relatively) good guys to keep pace with the bad guys.


My second thought is the Herbert hypothesis: that AIs will be used by men to enslave other men. I think that's certainly possible, and the degree of totalitarian control made possible by AI is what we should be worrying about in the near future.


The whole secular stagnation argument seems so weird to me. Like, am I really supposed to buy that the *one and only* way out of an otherwise inevitable slide into civilisational collapse *just happens* to be this incredibly risky and unpopular and otherwise hard-to-justify enterprise that you, roon (or whoever: people making this argument mostly seem to work in/adjacent to AI, no?) just happen to have devoted your life to?

Do some politics! Or science! Or rhetoric! Or any of the thousands of things humans have done in the past to overcome such challenges — or, for sure, something entirely new and untried! Why does it have to be, effectively, “discover the cheat codes that will let me blast through all the obstacles at will, oh and maybe that will destroy literally everything”? Am I wrong to characterise the thinking thusly? If so, how; if not, how can this pass anyone’s basic sanity check? I know some people will respond with, “Oh, but if the cheat codes work, we get x +EV,” etc etc. I don’t know how to respond to that because once you’ve got as far as doing any calculation on that basic proposition you’ve already left the place where I live, morally and intellectually.


> And for those who need to be reminded, this is not a Pascal’s Wager situation, at all.

Excuse me, this is exactly a Pascal's Wager situation. It's just that everyone has forgotten what Pascal's Wager was. Pensee #233 is right there, you can go read it if you want. It's not about 0.01%. It's about how to make decisions under fundamental uncertainty in the first place. Pascal thought the evidence did in fact point towards Christian belief. He proposed the Wager as a refutation of those who'd say "you can't know for sure" as if it ended the discussion, to motivate actual investigation.


“Alas, poverty is largely relative, and the world needs and will always find new incentives and scarce resources to fight about.”

I’m finding this whole combo review “piece” fascinating (I ain’t close to finished), and I find I agree with you on most of it - even as I’m not on the same page with you re: AI existential risk and what should or should not be done about it.

But the above quoted take is particularly bad, especially for one who is a rationalist.

Poverty is NOT “largely relative” - even if it ain’t an exact thing.

WEALTH might indeed be “largely relative”; poverty is not.

Raising up the billions of the world’s poor to the level of TODAY’s lower-middle class American (say, 80th - 85th percentile of U.S. income) would be a huge, massively wonderful thing.

Sure we can argue about exactly where to draw the line, and separately of course it is human nature to want more. But those points are irrelevant to the important part of any discussion on poverty.

Americans at the 80th percentile of income today do NOT in fact live in “poverty”.

Getting first 50% of the world, then 80% then at least 90% of the world’s population up to that level someday, preferably somewhat sooner rather than later, would be a WONDERFUL. THING. Full stop.

Poverty is NOT “largely relative”.


I think Zvi would agree that the utility gains there would be massive, just as the gains between the world of a few centuries (or even decades) ago and today are absolutely massive.

But as a society, people will talk about poverty in developed nations while pointing at people who have more than most people in history could have dreamed of. Our standards for 'enough' rise pretty fast, and anyone sufficiently below average in wealth gets called "poor" no matter how much we raise the average.


But our “relative” poor is not actual poverty. We literally have defined poverty upwards for years. Dems’ definition of poverty is such that it can NEVER be eliminated in our country! It is always the bottom xx% of the income distribution - usually even after transfers! This is absurd.

So I don’t accept someone rational claiming that poverty is largely relative.

VERY few poor American citizens live in poverty any more.

But billions around the world still do.

Absolutely.


“Our standards for 'enough' rise pretty fast”

I was making no comment re: the other side of wealth/income. What is “enough”, whether that is a fair question or the right question, that indeed is a complex topic lacking easy or obvious answers, and about which perfectly reasonable people can completely disagree.

That, however, is completely orthogonal to the (imo false, or at minimum mostly false) claim that poverty is relative.


"Yudkowsky would respond that this is not the kind of situation where model errors work in your favor. More often than not yes, but in the 90s variance and uncertainty are your friends anyway."

Can you explain this? Is it just that anything north of 50% on a binary outcome should be adjusted down for uncertainty? That doesn't sound right to me, so it's probably not what was meant?

author

I mean that if you're 90%+ to fail by default, and you can increase the amount of uncertainty and take more risks, then that's likely to be a good idea.
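A minimal worked illustration (the numbers here are purely illustrative, not from the post): suppose the default path succeeds 10% of the time, and a risky change either raises that to 25% or drops it to 0%, each with probability 1/2. Then

0.5 × 0.25 + 0.5 × 0 = 0.125 > 0.10

so the gamble is worth taking, whereas the same gamble starting from a 90% success rate would be a clear loss. When you are already very likely to fail, added variance mostly exposes upside.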
