Discussion about this post

Aryeh L. Englander

(Minor readability comment: I know you said you're not going to quote most of the theses, but it does make this post hard to read. Not sure it's worth your time to fix this, but it would be much easier to read if each thesis were copy-pasted before your comments on it.)

Samuel Hammond

Cool to see we mostly agree where it matters. To clarify a few of the more philosophical areas where you disagreed:

Section 4.6 - My intuition here comes from observing the generally poor to nonexistent governance of nonprofits on the one hand, and the greater use of equity-based comp and shareholder voice for incentive alignment in public companies on the other. Public companies also have many additional disclosures and fiduciary duties. It should concern us that OpenAI's LP agreement warns that they are under no obligation to make a profit or provide returns to limited partners, and that Sam doesn't care about making money per se. The mission takes priority, which is in some sense commendable, but also the start of a Michael Crichton novel.

Section 5.7 - Utilitarianism is a system- / outcome-level moral framework, whereas many EAs focus on the life *you* can save; the meat *you* didn't eat; the kidney *you* donated. That's all well and good, but it's a kind of internalization of utilitarian thinking into personal habits and character. The Christian lineage from Comte's religion of humanity onward is fairly clear, though I'm far from the first to point it out. See Tyler's famous Bloggingheads with Peter Singer. As for the inverse of EA being satanic, there's obviously a family resemblance between LaVeyan Satanism, Randian objectivism, Nietzsche's inversion of "slave morality," etc., so you're not wrong.

Section 9.7 - My intuition here is part Parfitian, part Vedic. Do enough meditation (and/or acid), and you will depersonalize, detach from your wants, urges, and emotions, dissolve the subject-object distinction, and come to see identity as an illusion. More practically, it's not clear how AIs could acquire moral status if they can be turned on or off from saved states, or replaced part by part like the Ship of Theseus. Moral personhood seems indelibly linked to both continuity of personal identity and the fleeting, "black box" nature of our mind's biological substrate. If Parfit's teletransporter existed, I'm not sure we'd perceive murder in the same way. I'm not saying AI will make teletransporters real; I'm just saying we're more likely to "level-down" our self-understanding as wet neural networks than to "level-up" artificial neural networks into dignified agents.

Section 10.4 - This connects to Parfit as well. Civilizations conceived as meta-agents depend on generational turnover ("science advances one funeral at a time," etc.). Having kids is like creating admixture clones of yourself to carry on a version of your mind after you die. Electing to never die is a tragedy of the anticommons, analogous to someone holding out on selling their home to make way for a bigger project. Dying in old age surrounded by children and grandchildren is a public good, whereas living forever is a kind of selfish holdout problem. Like if Captain Kirk got in the teletransporter only for his original copy to refuse to be destroyed. Obviously I wouldn't want to die either, but I'm also aware that almost every cell in my body has turned over multiple times throughout my life. The illusion of identity and the drive for self-preservation become pathological if dying becomes optional.
