8 Comments

Confused about the idea that AI regs should optimally be paired with heavy deregulation everywhere else.

In a world where AI is panning out, obviously economic growth goes up like crazy no matter what, and money always has diminishing marginal utility. It seems to me that in that world, we can much more easily “afford” economically costly regulations that prioritize something non-material we care about.

Like, in our current world I oppose NIMBY zoning regulations because the cost-benefit sucks: the society-wide pain from high housing costs powerfully outweighs the enjoyment incumbent homeowners get from stasis. But once we’re all rich, why not let the suburbanites have their stasis if they want it?

Interesting question. I'm not sure that I get it myself, but my best guess is that regulation has the ability to eliminate much of the gains from AI. One, by making large parts of the economy off-limits to AI. Imagine that healthcare, education, and humanoid robots are ruled off limits. Perhaps 50% of the potential gains go away right there. Two, by sucking the economic surplus into comparatively inefficient sectors. Great, we're 10X richer, but now housing, healthcare, and education are all 15X more expensive due to supply constraints. Sure, AI-generated video is now free and so on and so forth, but how much does that matter to your actual quality of life? Not that much. From a thriving-index perspective, you're probably actually poorer.

Basically, the costs from regulation may get more onerous at least as fast as we get richer.

(Minor readability comment: I know you said you're not going to quote most of the theses, but it does make this post hard to read. Not sure it's worth your time to fix this, but it would be much easier to read if each thesis was copy-pasted before your comments on it.)

Yeah, I found the switching to be annoying.

Cool to see we mostly agree where it matters. To clarify a few of the more philosophical areas where you disagreed:

Section 4.6 - My intuition here comes from observing the generally poor to nonexistent governance of nonprofits on the one hand, and the greater use of equity-based comp and shareholder voice for incentive alignment at public companies on the other. Public companies also have many additional disclosures and fiduciary duties. It should concern us that OpenAI's LP agreement warns that they are under no obligation to make a profit or provide returns to limited partners, and that Sam doesn't care about making money per se. The mission takes priority, which is in some sense commendable, but also the start of a Michael Crichton novel.

Section 5.7 - Utilitarianism is a system- / outcome-level moral framework, whereas many EAs focus on the life *you* can save; the meat *you* didn't eat; the kidney *you* donated. That's all well and good, but it's a kind of internalization of utilitarian thinking into personal habits and character. The Christian lineage from Comte's religion of humanity onward is fairly clear, though I'm far from the first to point it out. See: Tyler's famous bloggingheads with Peter Singer. As for the inverse of EA being satanic, there's obviously a family resemblance between LaVeyan Satanism, Randian objectivism, Nietzsche's inversion of "slave morality," etc., so you're not wrong.

Section 9.7 - My intuition here is part Parfitian, part Vedic. Do enough meditation (and/or acid), and you will depersonalize, detach from your wants, urges, and emotions, dissolve the subject-object distinction, and come to see identity as an illusion. More practically, it's not clear how AIs could acquire moral status if they can be turned on or off from saved states, or replaced part by part like the Ship of Theseus. Moral personhood seems indelibly linked to both continuity of personal identity and the fleeting, "black box" nature of our mind's biological substrate. If Parfit's teletransporter existed, I'm not sure we'd perceive murder in the same way. I'm not saying AI will make teletransporters real, just that we're more likely to "level-down" our self-understanding as wet neural networks than to "level-up" artificial neural networks into dignified agents.

Section 10.4 - This connects to Parfit as well. Civilizations conceived as meta-agents depend on generational turnover ("society advances one funeral at a time," etc.). Having kids is like creating admixture clones of yourself to carry on a version of your mind after you die. Electing never to die is a tragedy of the anticommons, analogous to someone holding out on selling their home to make way for a bigger project. Dying in old age surrounded by children and grandchildren is a public good, whereas living forever is a kind of selfish holdout problem. Like if Captain Kirk got in the teletransporter only for his original copy to refuse to be destroyed. Obviously I wouldn't want to die either, but I'm also aware that almost every cell in my body has turned over multiple times throughout my life. The illusion of identity and the drive for self-preservation become pathological if dying becomes optional.

If Zvi's EA-ish, he naturally views e/acc as awful, since it's an inversion of something not too far from his own views.

To LaVeyan Satanism (inverted midcentury American Christianity), Randian objectivism (inverted Communism), and Nietzsche's 'master morality' (inverted 'slave morality'), I'd add Aleister Crowley's Thelema (an inversion of very conservative low-church Christianity, similar in many ways to modern evangelicalism) and the modern 'red pill' movement (an inversion of feminism). These things don't last, but they seem to be influential in some ways.

I dig most of your theses, and find myself torn between Zvi's take and yours on some. But 10.4: On the illusion of identity... No, I think it's much less illusory than you seem to present it here. The key aspects of 'you' are neurons formed before your birth, and then pruned (never to be replaced) during your life. These neurons are you. Lose them and you irreparably lose a part of the you-ness of you. Your blood cells aren't you in the same way; they turn over frequently. Your long-range neuronal axons are irreplaceable, and the computational graph they outline fundamentally defines the limits of who you are. The physical biological constraints on how your neurons are able to change define the limits of who you can become as a human being, what you can learn, and how much you can change. The limits are wide, but there are limits.

This of course falls apart when we have digital sapient beings, human or not. They can choose not to obey the biological limits that bind cell-based creatures. Not only can they live forever, they can do far weirder things. Growing in mental capability and complexity, cloning themselves endlessly, splitting into fractional parts, merging with others... it's just way, way weirder than simply immortal humans. Thinking our current biology-bound identities are mere illusion with no binding power gives a false impression of how normal these future substrate-independent intelligences will quickly be able to become.

As you say, we have survivorship bias; all we know is that this is the world where liberal democratic capitalism succeeded, and we live in it. We can't know if Communism would have been stable, but the answer seems to be no. It's not clear Hitler had enough Germans to run his empire if he'd won, but mercifully we never got to find out.

I'm still waiting to see if socialism with Chinese characteristics does better. I wouldn't bet against the Chinese knowing how to administer an empire. They've done it for 4000 years!
