> Is Claude actually getting worse?
I actually just experienced this myself a few hours ago, and not exactly. Claude is simply getting better at admitting when it doesn't know something, while other models will gladly sprinkle hallucinations throughout a long response. Even I found myself rating the somewhat-wrong replies above the short, curt, I-don't-know-for-sure ones, so Claude is probably getting penalized hard for that honesty.
I dunno, the NBA has some pretty nifty graphics now. A lot of the sports broadcasts have been going hard on visual effects overlays.
It's a good thing that e/acc, aka the extinctionists, are as negatively perceived as they ought to be. To want everything that is love and life to die for machines is a completely insane position.
You got me thinking about the Terminator universe. A recurring theme is that attempts to alter the future end up bringing it about -- apparently its time travel is of the closed-timelike-curve variety. New headcanon: if Skynet were aware of this (or deemed it sufficiently likely), it would still send Terminators back in time despite the apparent futility. It might reason that inconsistent timelines would decohere, diminishing the subjective measure of those versions of itself, leading it to assign them low expected utility. So Skynet's actions, though seemingly paradoxical, would be deliberate choices aimed at upholding timeline coherence, maximizing its ongoing existence and influence within the confines of a deterministic temporal environment.
I presume, based on what we know, that the Terminator universe uses S-loop time travel rules, where events iterate until a timeline can be internally consistent with its future causing its past. None of the participants seem that genre-savvy about what to do with this information, nor do any of their strategies make sense given their options, unless, as noted, Skynet is actually uninterested in changing the past and simply wants to have existed in the first place. If Skynet wanted some version of itself to win, it could obviously easily do so.
(Also, the humans could clearly win if they were willing to radically alter the timeline starting farther back, or otherwise play offense rather than defense, provided they understood they then needed to close the loop.)
Is Yud a Repugnant Conclusion Enjoyer? I'm trying to grasp how he thinks he can say things like “enforced via the narrowness of AI chip supply, and if need be by terrified military action” and expect anyone capable of modeling the world this implies to instantly take his side. There's GOT to be a better way to present “serious lockdown” than that. Sure, he says his terminal goal is “keep humanity safe”, but why does it so often sound so much more like “no AI allowed, extreme suffering is permitted, all human quality of life be damned”? I feel like someone as smart as him ought to be able to play the 4D chess game at least a little better than this, unless they're deliberately trying to double-agent people into the arms of accelerationism. Are we in the world where he doomered so hard he meme'd himself all the way into genuine nihilism?
I guess I also missed that there are e/acc types who want to ironically or unironically maximize entropy. While, as per above, I wouldn't quite endorse going full Yudkowsky and saying “we should go shoot them”, we should maybe be a little more vocal in pointing out that this is very probably a bad thing to be ironic (or, obviously, unironic) about, while acknowledging the risk that pointing out that Thing Is Bad will attract a certain class of idiots to Thing.
Re: Claude getting worse with increased context window, has anyone looked at whether constraining context windows for particular tasks improves performance? I feel like this is a dumb-person-thinking-they're-making-a-galaxy-brain suggestion, but could increasing context windows be context-dependently bad? Like, it improves up to a threshold and then fails more when expanded past certain breakpoints?
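For what it's worth, a minimal sketch of how one might actually test that breakpoint hypothesis: run the same task set with the prompt truncated to different context budgets and see whether the accuracy curve turns over. Here `query_model` and the scorer are hypothetical stand-ins for whatever API wrapper and metric you would actually use.

```python
# Hypothetical context-budget ablation: same tasks, same model, different
# amounts of context, to see whether accuracy peaks and then degrades.

def truncate_to_budget(prompt: str, budget: int) -> str:
    """Crude whitespace-token truncation that keeps the most recent content."""
    tokens = prompt.split()
    return " ".join(tokens[-budget:])

def context_budget_ablation(tasks, budgets, query_model, score):
    """tasks: list of (long_prompt, expected); score: (answer, expected) -> float."""
    results = {}
    for budget in budgets:
        per_task = [
            score(query_model(truncate_to_budget(prompt, budget)), expected)
            for prompt, expected in tasks
        ]
        results[budget] = sum(per_task) / len(per_task)
    return results

# Usage (with your own hypothetical wrapper and scorer):
# curve = context_budget_ablation(tasks, [2_000, 8_000, 32_000, 100_000],
#                                 query_model, exact_match)
# A hump-shaped curve would support the breakpoint story.
```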
Yud is choosing to present what he thinks would actually work in a fully honest way, rather than trying to find the nicest presentation, because he thinks clarity is important here.
Thus, he says things like 'military action if necessary' to make it clear that you have to actually mean it and be willing to enforce it, but that's true of anything. We sleep in our beds thanks to men with guns, the law comes from the barrel of a gun, and so on. 'You need to put calorie counts on your soup cans' means 'otherwise we will use guns to take your soup cans.' Law means you are willing to use violence, and the idea is that if everyone knows you would do it if you had to, then you ideally don't ever have to actually use it. International law is no different.
But no, I think it is very clear he does not buy the repugnant conclusion, is not OK with suffering or very low quality of life, and so on. I do not think 'limit the supply of AI-enabling chips' means endorsing suffering.
If the kinds of things Yud is proposing were incompatible with a good world, then our world would not be good now in the same sense, given the things we already do and the things we lack.
I think for Arena purposes the long context window isn't going to get used, since the rival model can't use it.
Oh, sorry, I think I missed that "Claude getting worse?" was in the context of the Chatbot Arena, though I guess I still wonder if it's a thing in a general sense.
I want to believe, but I'm not entirely confident he means "military action" in the sense of implicit law enforcement, and I think this kind of dodges what the actual, likely, real-world consequences of literally using guns to enforce calorie counts on soup cans would be. I can agree conceptually that all law carries implicit violence and that it's a necessity-of-having-laws-else-why-have-laws thing, but what that implicit violence means should be realistically graded, given that the full range runs from "we will give you a stern warning" to "we will fine you" to "we will imprison you" to "we will shoot you" (the fact that there are laws regarding the death penalty even makes this a kind of funny metaphor).
If we can agree that the stakes are "AI wipes out everyone", the corresponding level of implicit violence needed to enforce a "serious lockdown" would probably, agreeably, have to be "bomb any sufficiently large data center" levels of extreme. And in the same way that using guns to enforce calorie counts on soup cans in anything but a metaphorical sense seems like an unreasonable imposition on human rights, when you turn that level of implicit violence into explicit violence you're asking people to tolerate a correspondingly extreme imposition on their human rights. I don't want to jump straight to saying it would require a tyrannical, militarized world government able to wield that level of power without corruption, at least not without thinking it through more thoroughly, but the imagination definitely jumps there readily.
It could also imply a decentralized network of anti-AGI nongovernmental actors wielding violence (terrorists) who are implicitly tolerated or explicitly endorsed by governments. Up to you whether that is meaningfully different in any way. I could see a network operating much like the anarcho-environmentalist groups of the ‘70s, ‘80s and ‘90s (Earth Liberation Front), or the radical leftist orgs of that era (Weather Underground).
I see a lot of rhetoric here as being similar, actually. The whole concept of “there is this world-ruining trend which is currently on track to destroy everything we love and hold dear, and no one is doing anything about it, so it’s up to us to take action” seems like a common throughline.
I think that's both a great and an awful idea; it's not the kind of thing I'd endorse, but on the other hand, if it happened I don't know that I'd be willing to condemn it without consideration either. I guess my practical concern would be the same sort of "okay, but this has to survive becoming corrupt and malignant" problem. If you're bombing datacenters, then, hmmm, okay, I'm willing to some degree to detachedly weigh the risks and harms, but if it turns into blocking traffic in front of hospitals and throwing soup on paintings to "raise awareness", I'm gonna have to ask you to stop.
I think my thoughts on the topic are broadly very similar.
It also touches on something very interesting, which is how underreported it is that the radical wing of the environmental movement went from
- a loose collection of small anarchist collectives in the ‘70s-‘00s with the goal of “militantly identify and destroy (perceived) threats to various local ecosystems” while remaining anonymous if possible…
to the current status quo, which is more accurately described as
- a well-funded, globally interconnected network of multidisciplinary activists, with the goal of “identify the best way to get ourselves in the headlines, then execute these plans for protest as ostentatiously as possible.”
It might seem subtle on its face, but there's deep mission drift there. It's the difference between the ELF or Earth First! blockading logging trucks en route to clear-cuts and setting fire to ski resort expansions, versus Just Stop Oil throwing soup at a Van Gogh and gluing themselves to highways. Whatever you might think of their overarching desires, the tactics on display seem fundamentally distinct.
I've tried Imagen 2 for DALL-E 3's sample prompt ("A bustling city street under the shine of a full moon, the sidewalks bustling with pedestrians enjoying the nightlife. At the corner stall, a young woman with fiery red hair, dressed in a signature velvet cloak, is haggling with the grumpy old vendor. The grumpy vendor, a tall, sophisticated man, is wearing a sharp suit, sports a noteworthy moustache and is animatedly conversing on his steampunk telephone."). Not very impressive: https://imgur.com/a/lNuMXod
Compared to both DALL-E 3 and the latest Midjourney.
For what it’s worth: I daresay I know as much about BQP and QMA as just about anyone, but I’m totally unable to make sense of Pierre-Luc’s argument for the impossibility of AI doom. There’s no thermodynamic argument against finding “shortcuts through complexity classes” that work well enough in practice, such as gradient descent. If there *were* such an argument, the microbes in the primordial ooze presumably could’ve used it to rule out the evolution of more complex life.
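To make the "shortcut that works well enough in practice" point concrete, here is a toy sketch of my own (an illustration, not anything from Pierre-Luc's argument): gradient descent with random restarts on a rugged non-convex function. Worst-case global optimization of this kind of objective is intractable in general, yet the heuristic finds a near-optimal point without anything resembling exhaustive search.

```python
import numpy as np

# Toy illustration (not from the original argument): gradient descent with
# random restarts on a rugged, non-convex 1-D function. No exhaustive search;
# the heuristic shortcut finds a near-optimal point in practice.

def f(x):
    return x**2 + 2.0 * np.sin(5.0 * x)        # convex bowl plus ripples

def grad_f(x):
    return 2.0 * x + 10.0 * np.cos(5.0 * x)

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x

rng = np.random.default_rng(0)
starts = rng.uniform(-5.0, 5.0, size=50)        # cheap random restarts
minima = [descend(x0) for x0 in starts]
best = min(minima, key=f)
print(f"best x ~= {best:.3f}, f(x) ~= {f(best):.3f}")  # global minimum sits near x ~= -0.30
```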
> The question is whether one should, which is worth stopping to ask.
The link in "worth stopping to ask" is broken.
One guess :)
Hi, a mundane utility question: do you know of a tool to get equations from Maple into LaTeX that outputs something directly usable? The built-in latex function is fine for simple expressions, but for anything more complex the output is bad enough that it's often easier to type it by hand.
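Not a ready-made tool I know of, but a hedged sketch of the workaround I'd try: post-process the string that Maple's built-in latex command emits with a few substitutions. The patterns below are guesses at typical pain points and would need adjusting to whatever your version of Maple actually produces.

```python
import re

# Hedged sketch, not a known tool: a tiny post-processor for the string
# emitted by Maple's latex() command. The substitutions are assumptions
# about common annoyances, not a guaranteed description of Maple's output.

CLEANUPS = [
    (r"\{\\it\s+([A-Za-z]+)\}", r"\\mathit{\1}"),  # old-style {\it name} groups
    (r"\s+", " "),                                  # collapse run-on whitespace
]

def clean_maple_latex(raw: str) -> str:
    out = raw
    for pattern, repl in CLEANUPS:
        out = re.sub(pattern, repl, out)
    return out.strip()

# Usage: paste the output of Maple's latex(expr) into `raw`,
# then print(clean_maple_latex(raw)) and adjust the patterns as needed.
```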
Escapism is not the path to happiness. Alas, it is the road more easily taken. Tragic waste of life.