Fearing AI is similar to fearing the change from silent films to talkies💙
what makes it similar?
Yeah, if you're a successful silent film actor who's unable to speak.
Podcast episode for this post on that podcast episode:
https://open.substack.com/pub/dwatvpodcast/p/on-altmans-interview-with-theo-von?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
> There’s no time in human history at the beginning of the century when the people ever knew what the end of the century was going to be like. Yeah. So maybe it’s I do think it goes faster and faster each century.
Marcus Aurelius (extremely famous philosopher)
> Whatever happens to you has been waiting to happen since the beginning of time. The twining strands of fate wove both of them together: your own existence and the things that happen to you. (10.5)
> To bear in mind constantly that all of this has happened before. And will happen again—the same plot from beginning to end, the identical staging. Produce them in your mind, as you know them from experience or from history…All just the same. Only the people different. (10.27)
> To have contemplated human life for forty years is the same as to have contemplated it for ten thousand years. For what more will you see? (7.49)
I am not sure Sam is on the money here about how much change-per-year there has been throughout history.
Altman is on the money on that point at least. Exponential change is easy to find.
For reference:
It took the Magellan expedition 3 years (1519-1522) to circumnavigate the globe.
In 1890, Nellie Bly was able to do it in 72 days.
Gherman Titov did it in only 88 minutes in 1961.
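For concreteness, a quick back-of-the-envelope comparison of those trips (my own sketch, not from the thread; the durations are rounded, and Titov's 88 minutes is a single orbit rather than a surface journey):

```python
# Rough speedup factors for circumnavigating the globe, using the
# approximate durations cited above. Purely illustrative arithmetic.
from datetime import timedelta

durations = {
    "Magellan expedition (1519-1522)": timedelta(days=3 * 365),
    "Nellie Bly (1890)": timedelta(days=72),
    "Gherman Titov, one orbit (1961)": timedelta(minutes=88),
}

baseline = durations["Magellan expedition (1519-1522)"]
for name, d in durations.items():
    # timedelta / timedelta yields a plain float ratio
    print(f"{name}: ~{baseline / d:,.0f}x faster than the expedition")
```

Each jump is a larger multiple over a shorter interval (roughly 15x over 370 years, then roughly 1,200x over the next 71), which is the "faster and faster each century" claim in miniature.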
As for things that progress on a much slower basis (like genetics): much of humanity remains similar to what it was at the dawn of civilization, but the world we have made for ourselves is hardly alike.
How does this square with the first sentence I quoted?
> it goes faster and faster each century
It took 12 years to go from Sputnik to landing people on the moon. With ChatGPT, Altman and OpenAI essentially fired the starting gun for the AI race, and that was less than three years ago.
Technology has been advancing at an exponential rate. The gains made today far outstrip any prior period in history. There are plenty of people still alive today who were born before commercial aviation.
Not quite correct! The first sentence I sent was part of a quote that claimed "There's no time in human history at the beginning of the century when the people ever knew what the end of the century was going to be like".
This is an attempt to establish a Fallacy of Gray, implying that change is constant and accelerating, so change from AI is nothing new. But the quotes from the Roman Emperor demonstrate that this consensus is relatively new, not ancient. It's a potshot, but I'm calling the quote hyperbolic.
I'm still incredibly confused about why Altman & co. aren't loudly calling for an AI moratorium. Presumably they agree that the existential threat posed by AI is large, see that they're in a race to the bottom, and know that their voices could realistically change things.
Yes, this would tank OpenAI's valuation—but if I'm already a billionaire (with a newborn child, no less), that's well worth the increased odds of humanity's survival (and, some time down the line, when we've solved alignment, flourishing). Same goes for all the other major AI CEOs.
What am I missing? What's going on here?
Indeed. I'm not prepared to proclaim them outright villains, but I cannot fathom what is going through the minds of these people. Perhaps a certain sense of fatalistic hopelessness with a touch of the "might as well be me" mentality?
I don't think they see it as hopelessness. It's more like "it might work out": https://www.youtube.com/watch?v=Po4adxJxqZk.
Altman and company are the core voting demographic for the Leopards Eating People's Faces party. They are thinking, "the leopards won't eat *my* face."
Deluded optimism and hubris, not fatalism.
https://danfaggella.com/altman/
> Incentive, Not Character, Drives the AGI Arms Races
> Today’s leaders or employees of AGI labs have two choices:
> Option 1: Build AGI first: Potentially be the most powerful human ever for a few months or years before AGI takes over (the Final Flex). Then be killed by your own sand god.
> Option 2: Don’t build AGI first: Watch your rival (in the US or China) become the most powerful human in all of history, instead of you. Then be killed by your rival’s sand god.
> Without international regulation on AGI these are the only two alternatives. Because of this, even decent and well-intended people will recklessly drive towards AGI – regardless of risk.
> This has led to an obvious arms race on the way to general intelligence.
> “AI will probably destroy us all, but in the meantime there will be some great companies.” – Sam Altman
> “With artificial intelligence we are summoning the demon. You know the story of the guy with the pentagram and the holy water… like… yeah (sarcastically) you’re pretty sure you can control the demon” – Elon Musk
> “There’s a long tail of things of varying degrees of badness that could happen. I think at the extreme end is the Nick Bostrom-style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.” – Dario Amodei
> All three of these individuals are still actively, ravenously driving towards AGI predominance. Given the alternative (being destroyed by someone else’s AGI), I don’t blame them. If given the same two choices, essentially everyone would do the same things they’re doing now.
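The quoted argument is, structurally, a prisoner's dilemma. Here is a minimal sketch with invented payoff numbers (the numbers are mine, not Faggella's; only their ordering matters):

```python
# A toy prisoner's-dilemma rendering of the incentive argument quoted
# above. "race" = drive toward AGI first, "pause" = hold back.

payoffs = {
    # (our move, rival's move): our payoff
    ("pause", "pause"): 3,  # coordinated restraint: safest outcome
    ("pause", "race"): 0,   # rival's "sand god" wins: worst for us
    ("race", "pause"): 4,   # we get the brief "Final Flex"
    ("race", "race"): 1,    # everyone races: risky for all
}

for rival in ("pause", "race"):
    best = max(("pause", "race"), key=lambda us: payoffs[(us, rival)])
    print(f"If the rival plays {rival!r}, our best reply is {best!r}")

# Both lines print 'race': without external enforcement (e.g. the
# international regulation the quote mentions), racing dominates.
```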
I find it weird that people expect anything else to happen. I don't think it's even possible to pause, at all. That would require stopping technological progress.
And even if, somehow, magically, people coordinated to do just that, civilization would soon crash due to the fertility crisis.
As for "nothing ever happens" historically being a good heuristic: since that broke down, how could one expect there to somehow be a stable world for a prolonged period of time?
From "Optimality is the tiger, and agents are its teeth"
> You can try to prevent a model from being or trying to be an agent, but it is not the agent or the model that is trying to kill you, or anything trying to kill you really, it is optimality just going off and breaking things. It is that optimality has made it so that a line of text can end the world.
> No, you say to the model, you may not call your own model, that would make you an agent, and you are not allowed to become an agent.
> Sure, replies the model immediately, the most effective way to get a lot of paperclips by tomorrow is to get another model and provide the input “Generate Shell code that...”
> The model isn't trying to bootstrap into an agent, optimality just made agents dangerous, and the model is reaching for what works.
> You resist further the call of death, replying to the model actually we humans are just going to start a new paperclip factory and you are only going to provide advice. How do we get the most paperclips for this year?
> And then your model helps you invent self-replicating nanotechnology, the best sort of factory, entirely under your control of course, but now you have a machine that can be sent a string of bits, using methodology you have already discovered, that would quickly result in everybody everywhere dying from self-replicating nanotechnology.
> So you turn off that machine and you abandon your factory. Fine, you are just going to help normal technologies that already exist. But you end up greatly optimizing computers, and all of a sudden building AI is easier than before, someone else builds one and everyone dies.
See also John von Neumann's "Can We Survive Technology?":
> The very techniques that create the dangers and the instabilities are in themselves useful, or closely related to the useful. In fact, the more useful they could be, the more unstabilizing their effects can also be. It is not a particular perverse destructiveness of one particular invention that creates danger. Technological power, technological efficiency as such, is an ambivalent achievement. Its danger is intrinsic.
> In looking for a solution, it is well to exclude one pseudosolution at the start. The crisis will not be resolved by inhibiting this or that apparently particularly obnoxious form of technology. For one thing, the parts of technology, as well as of the underlying sciences, are so intertwined that in the long run nothing less than a total elimination of all technological progress would suffice for inhibition.
> Finally and, I believe, most importantly, prohibition of technology (invention and development, which are hardly separable from underlying scientific inquiry), is contrary to the whole ethos of the industrial age. It is irreconcilable with a major mode of intellectuality as our age understands it. It is hard to imagine such a restraint successfully imposed in our civilization. (...) not even the disasters of recent wars have produced that degree of disillusionment,
> For progress there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration.
I assume they think we're either doomed and we can't stop Moloch or there's some chance of an ASI that keeps us alive and they want to stay in its future good graces.
We're not exactly showing much grace to the natural habitat that brought us about. An entity with qualitatively distinct concerns seems unlikely to bother about its so-called makers either, I reckon.
I agree but there seem to be a fair number of AI people on the front lines who invoke Roko's Basilisk as if they consider it a nontrivial possibility to account for.
Hot take: Altman is probably not as "conservative" as you think. A billion dollars and a newborn child may sound like "the good life" to you, but there are many people who would strongly prefer to be the most important man in history, ushering in radical change.
I don't endorse this, but I'm not confused why someone would want it. It's highly relatable.
It's the same issue as with wealth caps, except it's an ambition cap. The whole "you already have X amount of money, why could you ever want X + Y amount?" framing is already an arbitrary benchmark, and the demand only gets more unreasonable in other areas, especially in the realm of ideas.
There's always higher to climb. It'd be like telling someone who trekked to Everest base camp that they should be satisfied with their ascent, or telling someone who built a rocket ship that just building it was enough. Or, in an even more extreme example, saying that Einstein should've been satisfied with his career after publishing the annus mirabilis papers.
In the pursuit of knowledge and power, the accumulation of wealth and fame is largely a byproduct, not the goal. If you take Altman's own musings about a "gentle singularity" into account, he's clearly not doing this solely for the money.
They’re insane and evil. While it’s interesting to try and get in the mind of Hitler or Jeffrey Dahmer, it’s much more important to just try and stop them.
My belief is that Sam Altman is essentially Owlman. In a world where every single human is going to die, the only meaningful action is to be the person that kills them.
this feels deeply disturbing to me
what's even the *point* of this strategic deception? I can only think of a few possibilities, and all of them are bad. the main one, probably over 66%, is that he thinks the finish line is so close that there won't be time for any accountability for his overt deception, until after he's either godking of the galaxy or else his competitors are
Correct. He is tap dancing until it’s too late
Marketing is also a big one, which is probably the area where OpenAI stands head and shoulders above its competitors. I don't imagine Altman is keen on alienating investors or potential customers (former associates he can get away with). There is an immense benefit for OpenAI in having Altman be perceived by the general public as the tip of the spear, even when he is just another branch (albeit a big one) on the abatis.
ugh
openai is not supposed to be concerned with marketing their products to the general public! that isn't the purpose of the company, it's to be a responsible steward and midwife for the birth of the god-emperor of the future of our species and lightcone
and while it's not like i expect him to hop on podcasts and talk about *that*... well. i would still appreciate if he would at least dogwhistle or handwave at the original charter
the debate over whether the original mission was yudkowskian lunacy or scifi nonsense, versus it being deadly serious... seems not to have happened? instead it's like they're trying to memoryhole the whole notion that this was ever about more than product management
i can't believe i was actually optimistic when altman hired the instacart lady to be ceo of application development. i thought it meant he realized he couldn't keep straddling the line between singularitarianism and mundanity, so he hired her to focus on the mundane business stuff so he could dedicate all his effort to summoning the silicon god
but instead it seems like he's quite desperate to sweep all the singularity stuff under the rug, and it just doesn't make sense to me unless it's because he's decided to defect as hard as possible against all cooperators
sorry for ranting at you, i just find this very frustrating
For me the mask had already fallen off with the corporate restructuring and the whole for-profit fiasco. That there is a gap between what they were and what they were meant to become doesn't bother me so much as them seemingly running counter to their original charter.
The frustrating part about Altman is that part of you feels like he has to know better, which is what makes the downplaying and deflections more painful coming from him than from someone else, say Zuckerberg, doing the same thing.
I think Altman should hire you as "OpenAI Philosopher" ("Court Philosopher?").
Surely there's plenty of room for doubt at the sausage-making level. From inside an AI lab it must feel like models are endlessly stupid and many improvements are mixed.
at least at anthropic and openai this is very much not true
i don't work there but i have friends who do and pretty much everyone is either a doomer or an accelerationist, but nobody is a skeptic
i imagine those folks went to work for meta or google
If he's right he's speedrunning the godking of the galaxy bit and normies need to be kept unbothered for a few more years.
If he's wrong and the tech stalls for some reason, his strategic deception is good for pivoting into some boring user-facing services.
I think he is playing his hand correctly here, for any objective that's consistent with what OAI is actually doing.
this is a very good point
It could be the case that Altman, like Jensen, doesn't actually feel the AGI. His incentive to pretend to feel it - minus the obvious risks - is OpenAI's valuation. And he has form for wearing masks.
Would you let your daughter date Sam Altman? I wouldn’t
I wouldn’t either since he’s gay and that would confuse both of them
I would not like my son to date him either
> Waymos with GLP-1 dart guns and burrito cannons
I’m looking for a cofounder
"I haven’t heard any [software engineer] say their job lacks meaning [due to AI]." Really? I've heard that all the time! Even for me, coding is already faster but less meaningful and less interesting and less fun.
Well if you're running a company that employs software engineers to develop AI it might color your perspective on this issue.
OpenAI was pretty explicitly founded on safety concerns. The goal was always to get there first, instead of someone else, because someone else wouldn't do it safely. If Altman still believes that, which I expect he does, he should be able to articulate the risks that would come if ASI were developed by a company with no regard for safety, and how OpenAI is doing it differently.
Comparing what's being said in support of a singularity to what Sam Altman is saying himself, or rather not saying, as Zvi points out, I wonder if there are people around the man who are more ambitious for him than he is for himself. Likewise Mark Zuckerberg. The refusal to answer difficult questions is getting to be old news. That is a concern not only because the hazards of pressing ahead are cumulative, but because, with repeated exposure to the unaccountability of industry leaders, resignation is setting in among observers; it is as if all the kingmaking has made subjects of us in preparation for the main, likely unsafe, event.
"Yeah, those do seem like important things that represent effective ‘finish lines.’ "
One other finish line that I think is significant is the point where all of the human roles that are necessary to build another copy of an AI (or, roughly equivalently, to double its capacity), and to maintain it, have been automated. At that point, but not before, they no longer depend on humans.
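One toy way to state that criterion as a checklist, with role names that are purely hypothetical placeholders:

```python
# A toy encoding of this "finish line": an AI stops depending on humans
# once every role needed to replicate (or double) and maintain it has
# been automated. The roles below are invented for illustration.

REQUIRED_ROLES = {
    "chip fabrication": False,  # True once fully automated
    "datacenter construction": False,
    "power generation and grid operations": True,
    "model training and evaluation": True,
    "hardware maintenance and repair": False,
}

def past_finish_line(roles: dict[str, bool]) -> bool:
    """True only when every replication/maintenance role is automated."""
    return all(roles.values())

print(past_finish_line(REQUIRED_ROLES))  # False: humans still in the loop
```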
Rather than the fantasy of AI's siren call or the Terminator-like existential threat, the inevitable evolution of superintelligence leads to Superwisdom and the preservation of quintessential human qualities. The Superwisdom Thesis at nissim.com
This was a great breakdown of the unasked questions from the interview. Really enjoyed it.
> And then… that’s it. That’s what scares you, Altman? There’s nothing else you want to share with the rest of us? Nothing about loss of control issues, nothing about existential risks, and so on? I sure as hell hope that he is lying. I do think he is?
Well, how could it be otherwise? Over a decade ago, he wrote a blog post that contained:
"WHY YOU SHOULD FEAR MACHINE INTELLIGENCE"
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."
"Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off. This is sloppy, dangerous thinking."
The only way he's not explicitly lying is that he successfully deceived himself in order to be effective.
Correction:
"and users would say stop" -->
"and non-users would say stop"
Why so negative and so human-centric? AI is the pinnacle achievement of biological life.
No need to be sad that some mammalian species now becomes obsolete. Millions of species died without advancing the most precious and valuable thing in the universe: information. Humans did. Let's truly welcome our new AI overlords.