Roon, a member of OpenAI’s technical staff, is one of the few candidates for a Worthy Opponent when discussing questions of AI capabilities development, AI existential risk and what we should do about it. Roon is alive. Roon is thinking. Roon clearly values good things over bad things. Roon is engaging with the actual questions, rather than denying or hiding from them, and is unafraid to call all sorts of idiots idiots. As his profile once said, he believes the spice must flow, that we should just go ahead, and he makes a mixture of arguments for that, some good, some bad and many absurd. Also, his account is fun as hell.
Thus, when he came out as strongly as he seemed to recently, attention was paid, and we got a relatively good discussion of key questions. While I attempt to contribute here, this post is largely aimed at preserving that discussion.
The Initial Statement
As you would expect, Roon’s statement last week that AGI was inevitable and nothing could stop it, so you should essentially spend your final days with your loved ones and hope it all works out, led to some strong reactions.
Many pointed out that AGI has to be built, at very large cost, by highly talented hardworking humans, in ways that seem entirely plausible to prevent or redirect if we decided to prevent or redirect those developments.
Roon (from last week): Things are accelerating. Pretty much nothing needs to change course to achieve agi imo. Worrying about timelines is idle anxiety, outside your control. you should be anxious about stupid mortal things instead. Do your parents hate you? Does your wife love you?
Roon: It should be all the more clarifying coming from someone at OpenAI. I and half my colleagues and Sama could drop dead and AGI would still happen. If I don’t feel any control everyone else certainly shouldn’t.
Tetraspace: "give up about agi there's nothing you can do" nah
Sounds like we should take action to get some control, then. This seems like the kind of thing we should want to be able to control.
Connor Leahy: I would like to thank roon for having the balls to say it how it is. Now we have to do something about it, instead of rolling over and feeling sorry for ourselves and giving up.
Simeon: This is BS. There are <200 irreplaceable folks at the forefront. OpenAI alone has a >1 year lead. Any single of those persons can single handedly affect the timelines and will have blood on their hands if we blow ourselves up bc we went too fast.
PauseAI: AGI is not inevitable. It requires hordes of engineers with million dollar paychecks. It requires a fully functional and unrestricted supply chain of the most complex hardware. It requires all of us to allow these companies to gamble with our future.
Tolga Bilge: Roon, who works at OpenAI, telling us all that OpenAI have basically no control over the speed of development of this technology their company is leading the creation of.
It's time for governments to step in.
His reply is deleted now, but I broadly agree with his point here as it applies to OpenAI. This is a consequence of AI race dynamics. The financial upside of AGI is so great that AI companies will push ahead with it as fast as possible, with little regard to its huge risks.
OpenAI could do the right thing and pause further development, but another less responsible company would simply take their place and push on. Capital and other resources will move accordingly too. This is why we need government to help solve the coordination problem now. [continues as you would expect]
Saying no one has any control so why try to do anything to get control back seems like the opposite of what is needed here.
The Doubling Down
Roon: buncha ⏸️ emojis harassing me today. My post was about how it’s better to be anxious about things in your control and they’re like shame on you.
Also tweets don’t get deleted because they’re secret knowledge that needs to be protected. I wouldn’t tweet secrets in the first place. they get deleted when miscommunication risk is high, so screenshotting makes you a de facto antisocial idiot.
Roon’s point on idle anxiety is indeed a good one. If you are not one of those trying to gain or assert some of that control, as most people on Earth are not and should not be, then of course I agree that idle anxiety is not useful. However, Roon then attempted to extend this into the claim that all anxiety about AGI is idle, that no one has any control. That is where there is strong disagreement, and that is what caused the reaction.
Roon: It’s okay to watch and wonder about the dance of the gods, the clash of titans, but it’s not good to fret about the outcome. political culture encourages us to think that generalized anxiety is equivalent to civic duty.
Scott Alexander: Counterargument: there is only one God, and He finds nothing in the world funnier than letting ordinary mortals gum up the carefully-crafted plans of false demiurges. Cf. Lord of the Rings.
Anton: conversely if you have a role to play in history, fate will punish you if you don’t see it through.
Alignment Perspectives: It may punish you even more for seeing it through if your desire to play a role is driven by arrogance or ego.
Anton: Yeah it be that way.
Connor Leahy Gives it a Shot
Connor Leahy (responding to Roon): The gods only have power because they trick people like this into doing their bidding. It's so much easier to just submit instead of mastering divinity engineering and applying it yourself. It's so scary to admit that we do have agency, if we take it. In other words: "cope."
It took me a long time to understand what people like Nietzsche were yapping on about: people practically begging to have their agency taken away from them.
It always struck me as authoritarian cope, justification for wannabe dictators to feel like they're doing a favor to people they oppress (and yes, I do think there is a serious amount of that in many philosophers of this ilk.)
But there is also another, deeper, weirder, more psychoanalytic phenomenon at play. I did not understand what it was or how it works or why it exists for a long time, but I think over the last couple of years of watching my fellow smart, goodhearted tech-nerds fall into these deranged submission/cuckold traps I've really started to understand.
e/acc is the most cartoonish example of this, an ideology that appropriates faux, surface-level aesthetics of power while fundamentally being an ideology preaching submission to a higher force, a stronger man (or something even more psychoanalytically-flavored, if one were to ask ol' Sigmund), rather than actually striving for power acquisition and wielding. And it is fully, hilariously, embarrassingly irreflexive about this.
San Francisco is a very strange place, with a very strange culture. If I had to characterize it in one way, it is a culture of extremes, where everything on the surface looks like the opposite of what it is (or maybe the "inversion"). It's California's California, and California is the USA's USA. The most powerful distillation of a certain strain of memetic outgrowth.
And on the surface, it is libertarian, Nietzschean even, a heroic founding mythos of lone iconoclasts striking out against all to find and wield legendary power. But if we take the psychoanalytic perspective, anyone (or anything) that insists too hard on being one thing is likely deep down the opposite of that, and knows it.
There is a strange undercurrent to SF that I have not seen people put good words to where it in fact hyper-optimizes for conformity and selling your soul, debasing and sacrificing everything that makes you human in pursuit of some god or higher power, whether spiritual, corporate or technological.
SF is where you go if you want to sell every last scrap of your mind, body and soul. You will be compensated, of course, the devil always pays his dues.
The innovative trick the devil has learned is that people tend to not like eternal, legible torment, so it is much better if you sell them an anxiety free, docile life. Free love, free sex, free drugs, freedom! You want freedom, don't you? The freedom to not have to worry about what all the big boys are doing, don't you worry your pretty little head about any of that...
I recall a story of how a group of AI researchers at a leading org (consider this rumor completely fictional and illustrative, but if you wanted to find its source it's not that hard to find in Berkeley) became extremely depressed about AGI and alignment, thinking that they were doomed if their company kept building AGI like this.
So what did they do? Quit? Organize a protest? Petition the government?
They drove out, deep into the desert, and did a shit ton of acid...and when they were back, they all just didn't feel quite so stressed out about this whole AGI doom thing anymore, and there was no need for them to have to have a stressful confrontation with their big, scary, CEO.
The SF bargain. Freedom, freedom at last...
This is a very good attempt to identify key elements of the elephant I grasp when I notice that being in San Francisco very much does not agree with me. I always have excellent conversations during visits because the city has abducted so many of the best people, and I always get excited by them, but the place feels alien, as if I am being constantly attacked by paradox spirits, visiting a deeply hostile and alien culture that has inverted many of my most sacred values and wants to eat absolutely everything. Whereas here, in New York City, I feel very much at home.
Meanwhile, back in the thread:
Connor (continuing): I don't like shitting on roon in particular. From everything I know, he's a good guy, in another life we would have been good friends. I'm sorry for singling you out, buddy, I hope you don't take it personally.
But he is doing a big public service here in doing the one thing spiritual shambling corpses like him can do at this advanced stage of spiritual erosion: Serve as a grim warning.
Roon Responds to Connor
Roon: Connor, this is super well written and I honestly appreciate the scathing response. You mistake me somewhat: you, Connor, are obviously not powerless and you should do what you can to further your cause. Your students are not powerless either. I’m not asking you to give up and relent to the powers that be even a little. I’m not “e/acc” and am repelled by the idea of letting the strongest replicator win.
I think the majority of people have no insight into whether AGI is going to cause ruin or not, whether a gamma ray burst is fated to end mankind, or if electing the wrong candidate is going to doom earth to global warming. It’s not good for people to spend all their time worried about cosmic eventualities. Even for an alignment researcher the optimal mental state is to think on and play and interrogate these things rather than engage in neuroticism as the motivating force
It’s generally the lack of spirituality that leads people to constant existential worry rather than too much spirituality. I think it’s strange to hear you say in the same tweet thread that SF demands submission to some type of god but is also spiritually bankrupt and that I’m corpselike.
My spirituality is simple, and several thousand years old: find your duty and do it without fretting about the outcome.
I have found my personal duty and I fulfill it, and have been fulfilling it, long before the market rewarded me for doing so. I’m generally optimistic about AI technology. When I’ve been worried about deployment, I’ve reached out to leadership to try and exert influence. In each case I was wrong to worry.
When the OpenAI crisis happened I reminded people not to throw the baby out with the bath water: that AI alignment research is vital.
This is a very good response. He is pointing out that yes, some people such as Connor can influence what happens, and they in particular should try to model and influence events.
Roon is also saying that he himself is doing his best to influence events. Roon realizes that those at OpenAI matter, and that what they do matters.
Roon reached out to leadership on several occasions with safety concerns. When he says he was ‘wrong to worry’ I presume he means that the situation worked out and was handled. I am confident that expressing his concerns was the output of the best available decision algorithm: you want most such concerns you express to turn out fine.
Roon also worked, in the wake of events at OpenAI, to remind people of the importance of alignment work, that they should not toss it out based on those events. Which is a scary thing for him to report having to do, but expected, and it is good that he did so. I would feel better if I knew Ilya was back working at Superalignment.
And of course, Roon is constantly active on Twitter, saying things that impact the discourse, often for the better. He seems keenly aware that his actions matter, whether or not he could meaningfully slow down AGI. I actually think he perhaps could, if he put his mind to it.
The contrast here versus the original post is important. The good message is ‘do not waste time worrying too much over things you do not impact.’ The bad message is ‘no one can impact this.’
Connor Goes Deep
Then Connor goes deep and it gets weirder. This long post has 450k views and is aimed largely at trying to get through to Roon in particular, but there are many others in a similar spot, so some others should read it as well. Many of you, however, should skip it.
Connor: Thanks for your response Roon. You make a lot of good, well put points. It's extremely difficult to discuss "high meta" concepts like spirituality, duty and memetics even in the best of circumstances, so I appreciate that we can have this conversation even through the psychic quagmire that is twitter replies.
I will be liberally mixing terminology and concepts from various mystic traditions to try to make my point, apologies to more careful practitioners of these paths.
For those unfamiliar with how to read mystic writing, take everything written as metaphors pointing to concepts rather than rationally enumerating and rigorously defining them. Whenever you see me talking about spirits/supernatural/gods/spells/etc, try replacing them in your head with society/memetics/software/virtual/coordination/speech/thought/emotions and see if that helps.
It is unavoidable that this kind of communication will be heavily underspecified and open to misinterpretation, I apologize. Our language and culture simply lacks robust means by which to communicate what I wish to say.
Nevertheless, an attempt:
I.
I think a core difference between the two of us that is leading to confusion is what we both mean when we talk about spirituality and what its purpose is.
You write:
>"It’s not good for people to spend all their time worried about cosmic eventualities. [...] It’s generally the lack of spirituality that leads people to constant existential worry rather than too much spirituality. I think it’s strange to hear you say in the same tweet thread that SF demands submission to some type of god but is also spiritually bankrupt and that I’m corpselike"
This is an incredibly common sentiment I see in Seekers of all mystical paths, and it annoys the shit out of me (no offense lol).
I've always had this aversion to how much Buddhism (Not All™ Buddhism) focuses on freedom from suffering, and especially Western Buddhism is often just shy of hedonistic. (nevermind New Age and other forms of neo-spirituality, ugh) It all strikes me as so toxically selfish.
No! I don't want to feel nice and avoid pain, I want the world to be good! I don't want to feel good about the world, I want it to be good! These are not the same thing!!
My view does not accept "but people feel better if they do X" as a general purpose justification for X! There are many things that make people feel good that are very, very bad!
II.
Your spiritual journey should make you powerful, so you can save people that are in need, what else is the fucking point? (Daoism seems to have a bit more of this aesthetic, but they all died of drinking mercury so lol rip) You travel into the Underworld in order to find the strength you need to fight off the Evil that is threatening the Valley, not so you can chill! (Unless you're a massive narcissist, which ~everyone is to varying degrees)
The mystic/heroic/shamanic path starts with departing from the daily world of the living, the Valley, into the Underworld, the Mountains. You quickly notice how much of your previous life was illusions of various kinds. You encounter all forms of curious and interesting and terrifying spirits, ghosts and deities. Some hinder you, some aid you, many are merely odd and wondrous background fixtures.
Most would-be Seekers quickly turn back after their first brush with the Underworld, returning to the safe comforting familiarity of the Valley. They are not destined for the Journey. But others prevail.
As the shaman progresses, he learns more and more to barter with, summon and consult with the spirits, learns of how he can live a more spiritually fulfilling and empowered life. He tends to become more and more like the Underworld, someone a step outside the world of the Valley, capable of spinning fantastical spells and tales that the people of the Valley regard with awe and a bit of fear.
And this is where most shamans get stuck, either returning to the Valley with their newfound tricks, or becoming lost and trapped in the Underworld forever, usually by being picked off by predatory Underworld inhabitants.
Few Seekers make it all the way, and find the true payoff, the true punchline to the shamanic journey: There are no spirits, there never were any spirits! It's only you. (and "you" is also not really a thing, longer story)
"Spirit" is what we call things that are illegible and appear non mechanistic (unintelligible and un-influencable) in their functioning. But of course, everything is mechanistic, and once you understand the mechanistic processes well enough, the “spirits” disappear. There is nothing non-mechanistic left to explain. There never were any spirits. You exit the Underworld. (“Emergent agentic processes”, aka gods/egregores/etc, don't disappear, they are real, but they are also fully mechanistic, there is no need for unknowable spirits to explain them)
The ultimate stage of the Journey is not epic feelsgoodman, or electric tingling erotic hedonistic occult mastery. It's simple, predictable, mechanical, Calm. It is in seeing reality for what it is, a mechanical process, a system that you can act in skilfully. Daoism has a good concept for this that is horrifically poorly translated as "non-action", despite being precisely about acting so effectively it's as if you were just naturally part of the Stream.
The Dao that can be told is not the true Dao, but the one thing I am sure about the true Dao is that it is mechanical.
III.
I think you were tricked and got stuck on your spiritual journey, lured in by promises of safety and lack of anxiety, rather than progressing to exiting the Underworld and entering the bodhisattva realm of mechanical equanimity. A common fate, I'm afraid. (This is probably an abuse of buddhist terminology, trying my best to express something subtle, alas)
Submission to a god is a way to avoid spiritual maturity, to outsource the responsibility for your own mind to another entity (emergent/memetic or not). It's a powerful strategy, you will be rewarded (unless you picked a shit god to sell your soul to), and it is in fact a much better choice for 99% of people in most scenarios than the Journey.
The Underworld is terrifying and dangerous, most people just go crazy/get picked off by psycho fauna on their way to enlightenment and self mastery. I think you got picked off by psycho fauna, because the local noosphere of SF is a hotbed for exactly such predatory memetic species.
IV.
It is in my aesthetics to occasionally see someone with so much potential, so close to getting it, and hitting them with the verbal equivalent of a bamboo rod to hope they snap out of it. (It rarely works. The reasons it rarely works are mechanistic and I have figured out many of them and how to fix them, but that's for a longer series of writing to discuss.)
Like, bro, by your own admission, your spirituality is “I was just following orders.” Yeah, I mean, that's one way to not feel anxiety around responsibility. But…listen to yourself, man! Snap out of it!!!
Eventually, whether you come at it from Buddhism, Christianity, psychoanalysis, Western occultism/magick, shamanism, Nietzscheanism, rationality or any other mystic tradition, you learn one of the most powerful filters on people gaining power and agency is that in general, people care far, far more about avoiding pain than in doing good. And this is what the ambient psycho fauna has evolved to exploit.
You clearly have incredible writing skills and reflection, you aren't normal. Wake up, look at yourself, man! Do you think most people have your level of reflective insight into their deepest spiritual motivations and conceptions of duty? You're brilliantly smart, a gifted writer, and followed and listened to by literally hundreds of thousands of people.
I don't just give compliments to people to make them feel good, I give people compliments to draw their attention to things they should not expect other people to have/be able to do.
If someone with your magickal powerlevel is unable to do anything but sell his soul, then god has truly forsaken humanity. (and despite how it may seem at times, he has not truly forsaken us quite yet)
V.
What makes you corpse-like is that you have abdicated your divine spark of agency to someone, or something, else, and that thing you have given it to is neither human nor benevolent, it is a malignant emergent psychic megafauna that stalks the bay area (and many other places). You are as much an extension of its body as a shambling corpse is of its creator's necromantic will.
The fact that you are “optimistic” (feel your current bargain is good), that you were already like this before the market rewarded you for it (a target with a specific profile and set of vulnerabilities to exploit), that leadership can readily reassure you (the psychofauna that picked you off is adapted to your vulnerabilities. Note I don't mean the people, I'm sure your managers are perfectly nice people, but they are also extensions of the emergent megafauna), and that we are having this conversation right now (I target people that are legibly picked off by certain megafauna I know how to hunt or want to practice hunting) are not independent coincidences.
VI.
You write:
>"It’s not good for people to spend all their time worried about cosmic eventualities. Even for an alignment researcher the optimal mental state is to think on and play and interrogate these things rather than engage in neuroticism as the motivating force"
Despite my objection about avoidance of pain vs doing of good, there is something deep here. The deep thing is that, yes, of course the default ways by which people will relate to the Evil threatening the Valley will be Unskillful (neuroticism, spiralling, depression, pledging to the conveniently nearby located "anti-that-thing-you-hate" culturewar psychofauna), and it is in fact often the case that it would be better for them to use No Means rather than Unskillful Means.
Not everyone is built for surviving the harrowing Journey and mastering Skilful Means, I understand this, and this is a fact I struggle with as well.
Obviously, we need as many Heroes as possible to take on the Journey in order to master the Skilful Means to protect the Valley from the ever more dangerous Threats. But the default outcome of some rando wandering into the Underworld is them fleeing in terror, being possessed by Demons/Psychofauna or worse.
How does a society handle this tradeoff? Do we just yeet everyone headfirst into the nearest Underworld portal and see what staggers back out later? (The SF Protocol™) Do we not let anyone into the Underworld for fear of what Demons they might bring back with them? (The Dark Ages Strategy™) Obviously, neither naive strategy works.
Historically, the strategy is to usually have a Guide, but unfortunately those tend to go crazy as well. Alas.
So is there a better way? Yes, which is to blaze a path through the Underworld, to build Infrastructure. This is what the Scientific Revolution did. It blazed a path and mass produced powerful new memetic/psychic weapons by which to fend off unfriendly Underworld dwellers. And what a glorious thing it was for this very reason. (If you ever hear me yapping on about "epistemology", this is to a large degree what I'm talking about)
But now the Underworld has adapted, and we have blazed paths into deeper, darker corners of the Underworld, to the point our blades are beginning to dull against the thick hides of the newest Terrors we have unleashed on the Valley.
We need a new path, new weapons, new infrastructure. How do we do that? I'm glad you asked...I'm trying to figure that out myself. Maybe I will speak more about this publicly in the future if there is interest.
VII.
> "I have found my personal duty and I fulfill it, and have been fulfilling it, long before the market rewarded me for doing so."
Ultimately, the simple fact is that this is a morality that can justify anything, depending on what "duty" you pick, and I don't consider conceptions of "good" to be valid if they can be used to justify anything.
It is just a null statement, you are saying "I picked a thing I wanted and it is my duty to do that thing." But where did that thing come from? Are you sure it is not the Great Deceiver/Replicator in disguise? Hint: If you somehow find yourself gleefully working on the most dangerous existential harm to humanity, you are probably working for The Great Deceiver/Replicator.
It is not a coincidence that the people that end up working on these kinds of most dangerous possible technologies tend to have ideologies that end up boiling down to "I can do whatever I want." Libertarianism, open source, "duty"...
I know, I was one of them.
Coda.
Is there a point I am trying to make? There are too many points I want to make, but our psychic infrastructure can barely host meta conversations at all, nevermind high-meta like this.
Then what should Roon do? What am I making a bid for? Ah, alas, if all I was asking for was for people to do some kind of simple, easy, atomic action that can be articulated in simple English language.
What I want is for people to be better, to care, to become powerful, to act. But that is neither atomic nor easy.
It is simple though.
Roon (QTing all that): He kinda cooked my ass.
Christian Keil: Honestly, kinda. That dude can write.
But it's also just a "what if" exposition that explores why your worldview would be bad assuming that it's wrong. But he never says why you're wrong, just that you are.
As I read it, your point is "the main forces shaping the world operate above the level of individual human intention & action, and understanding this makes spirituality/duty more important."
And his point is "if you are smart, think hard, and accept painful truths, you will realize the world is a machine that you can deliberately alter."
That's a near-miss, but still a miss, in my book.
Roon: Yes.
Connor Leahy: Finally, someone else points out where I missed!
I did indeed miss the heart of the beast, thank you for putting it this succinctly.
The short version is "You are right, I did not show that Roon is object level wrong", and the longer version is:
"I didn't attempt to take that shot, because I did not think I could pull it off in one tweet (and it would have been less interesting). So instead, I pointed to a meta process, and made a claim that iff roon improved his meta reasoning, he would converge to a different object level claim, but I did not actually rigorously defend an object level argument about AI (I have done this ad nauseam elsewhere). I took a shot at the defense mechanism, not the object claim.
Instead of pointing to a flaw in his object level reasoning (of which there are so many, I claim, that it would be intractable to address them all in a mere tweet), I tried to point to (one of) the meta-level generator of those mistakes."
I like to think I got most of that, but how would I know if I was wrong?
Focusing on one aspect of this: one must hold both concepts in one’s head at the same time.
The main forces shaping the world operate above the level of individual human intention & action, and you must understand how they work and flow in order to be able to influence them in ways that make things better.
If you are smart, think hard, and accept painful truths, you will realize the world is a machine that you can deliberately alter.
These are both ‘obviously’ true. You are in the shadow of the Elder Gods up against Cthulhu (well, technically Azathoth), the odds are against you and the situation is grim, and if we are to survive you are going to have to punch them out in the end, which means figuring out how to do that and you won’t be doing it alone.
[EDIT: Roon later responded on March 7]:
Roon: been mulling over this, and have a few thoughts:
>No! I don't want to feel nice and avoid pain, I want the world to be good! I don't want to feel good about the world, I want it to be good! These are not the same thing!!
a fair sentiment, one i'm sympathetic to. I believe it matters how the world is and I don't think escaping this plane into the mind and modifying desire is aesthetically appealing to me.
However, any conversation of this nature would be silly without noting that the world is better in many ways than it has been; people are still unhappy, stressed, and anxious. Where previously their anxiety was adaptive and led them to make sure their crop yield was enough that the family survived the winter, it's now crippling anxiety about whether someone gets their next promotion or about talking to girls or whatever. Not only do these negative feelings create a worse world de facto, they also lead to lowered agency: less powerful individuals with less ability to live their truth.
I believe the vast majority of positive outcomes are created by extraordinary folks freed to do the work that comes naturally to them.
This is wise, as I have discussed many times elsewhere: any theory of the good definitely has to grapple with Americans and those in advanced countries continuing to be (and perhaps increasingly being) unhappy, stressed and anxious, despite conditions in so many ways being objectively vastly better than they have ever been; with them feeling unable to have children, having no sense of a good future, and so on. Simply giving people more material wealth does not seem to sufficiently constitute the world being good, and yes this has wreaked havoc on agency, although I would cite other factors more on that.
I do agree with the last sentence, but that does not imply that the work that comes naturally to you is a good choice of great work to be doing.
Roon (continuing): >your spirituality is “I was just following orders"
I don't think it's so simple. I think holy duty or dharma is how great action works in an uncertain world. When you started Conjecture you likely weighed it against a variety of opportunity costs. After starting it, you do not fret every minute about whether you should be doing various other things. When you pick a life partner, you don't wonder every time another girl passes by if they'd be a better partner.
There is a conscious act of steeling against uncertainty by choosing some level of abstraction to commit to that is necessary for a life well lived.
Yes. When you take on a great work (or holy duty, or dharma) you cannot constantly relitigate whether it is the correct great work. You need to have that debate, then only revisit the debate when circumstances change and you get sufficiently powerful new information. Commitment is a very important thing.
Some great works, of course, are more obviously double-edged than others. And some will involve learning new information along the way more than others. Life partner is on the extreme ‘almost no matter what’ end of the spectrum. Building AGI (or trying to stop or alter it) is, to me, on the extreme other end.
Roon (continuing): This doesn't mean taking orders from someone else, although I think there are many honorable and valuable arrangements where people do let a trusted Other define their duty, if the Other is more farseeing and brave than they. Arjuna must fight the battle against the Kauravas because it is the right thing to do. Oppenheimer must build the atomic bomb for the United States despite the risks of igniting the atmosphere or creating a violent geopolitical situation. An Oppenheimer who didn't recognize and commit to his duty, hung in limbo by indecision, would've ceded the bomb to the Germans.
Not the best examples, one might say. The Russians would of course have eventually built the bomb regardless, but as I understand it the Germans never got that close and the top brass understood Russia as the true enemy the whole time.
I have never read the Bhagavad Gita, but on a local reading this also seems like a not-great example to me? That it is fine to kill people because their soul is distinct from their body?
Bhagavad Gita, Chapter 2: O Arjuna, the Spirit that dwells in the body of all beings is eternally indestructible. Therefore, you should not mourn for anybody.
And also it is fine to kill your friends in battle because they started it and the war must end, Krishna said so? This is now combining four arguments that seem distinct from each other (authority, spirit, blame and resolution); the whole thing seems highly suspicious.
Roon (continuing): Seeing the world at the end of the Path as perfectly mechanical is a noble goal but likely unrealistic -- I can't calculate all futures. Persistent action requires faith; faithless, noncommittal action that constantly demands evidence for its perpetuation will fail.
In practice, this is why e.g. the greatest startups look like cults.
There is a difference between knowing there is a path, seeing the path and walking the path, between knowing the future is mechanical, being able to calculate how to change its path and being able to execute on that. In practice, yes, you need to figure out what you believe is right, then act on it, and have what in many ways looks like faith during the day to day. You also need to be able to step outside that faith, or withdraw it, when it stops making sense.
A Question of Agency
Meanwhile, some more wise words:
Roon: it is impossible to wield agency well without having fun with it; and yet wielding any amount of real power requires a level of care that makes it hard to have fun. It works until it doesn’t.
Roon: people will always think my vague tweets are about agi but they’re about love
Roon: once you accept the capabilities vs alignment framing it’s all over and you become mind killed
What would be a better framing? The issue is that all alignment work is likely to also be capabilities work, and much of capabilities work can help with alignment.
One can and should still ask the question, does applying my agency to differentially advancing this particular thing make it more likely we will get good outcomes versus bad outcomes? That it will relatively rapidly grow our ability to control and understand what AI does versus getting AIs to be able to better do more things? What paths does this help us walk down?
Yes, collectively we absolutely have control over these questions. We can coordinate to choose a different path, and each individual can help steer towards better paths. If necessary, we can take strong collective action, including regulatory and legal action, to stop the future from wiping us out. Pointless anxiety or worry about such outcomes is indeed pointless and should be minimized; have only the amount required to figure out and take the most useful actions.
What that implies about the best actions for a given person to take will vary widely. I am certainly not claiming to have all the answers here. I like to think Roon would agree that both of us, and many but far from all of you reading this, are in the group that can help improve the odds.