I dunno. Maybe OpenAI is right here. Have you seen the recent Eliezer Yudkowsky vs. Mark Miller debate? https://www.youtube.com/watch?v=s-Eknqaksfg
What's the probability that ANY imaginable regulation passed by states would actually reduce the probability of ASI killing us all?
All the regulations I can imagine would just do as much damage as most needless regulations do: preventing people from doing what's useful to them and harmless to others. Those who don't want to use AI as a tool or chat companion are free not to, without regulations.
From my point of view OpenAI still has quite a lot of "safety people". It's true that the EA safety people and the OpenAI safety people don't seem to get along any more, but it's like arguments between Bolsheviks and Mensheviks. Two groups can be bitterly angry at each other over specific personal disputes, but still have roughly the same philosophy that is not shared by most of the world.
A more common mental model in the AI world nowadays is "AI as normal technology" where you think of AI the same way you would iPhones or HTTP. OpenAI has a lot of "safety people" in the sense that they sometimes publicly worry about AGI.
I mean, if the Mensheviks had won in 1917, I suspect we would be living in an extremely different world, with far fewer conflicts, wars, deaths, and starvation over the past century.
Sometimes those details matter.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/openai-14-openai-descends-into-paranoia
I like ChatGPT (especially its Robot personality, good practice for dealing with future Terminators) and I pay for Plus. But given the questionable behavior of OpenAI, I wonder if I should be using something else. And, to avoid feeding their paranoia: I definitely don't like Musk and don't want to pay for Grok, and will never pay for Grok. Which company is the least evil but also has decent models? Anthropic?
That's my reasoning for using Claude.
I use Claude instead, partially because I prefer to give my money to Anthropic, but mostly because I get more value out of it due to its more grounded communication style and less sycophantic personality. At least, that's what it feels like.
The Robot personality of ChatGPT seems less sycophantic. That was a big problem before I switched to it. Maybe they have just prompted Robot so that it's more disagreeable, because that is what people expect from a robot. I had a half-hour "argument" with it yesterday on a topic and it stuck by its viewpoint, even after I pointed out some circles in its reasoning. It also doesn't pointlessly compliment me all the time.
"OpenAI’s CEO Sam Altman has over time used increasingly jingoistic language throughout his talks, has used steadily less talk about"
typo worth fixing^
Also, I would love to hear where your modicum-of-trust-in-Sam-Altman comes from. So far as I can tell, he's a 99.99th percentile Machiavel. And that's just the picture his *publicly known* actions paint.
Yeah, no, give an inch and OSA-type legislation (or even just the fear of such lawsuits) will take an ell... I used to have a more cavalier attitude about Having Nothing To Hide, but really, that's backwards. There's a reason we get extra upset about those wrongly accused in other legal arenas! Better that X guilty men go free than Y innocent men get convicted... and, sure, one can talk price about the values of X and Y. Still, it's precisely those with Actual Nothing To Hide who stand to lose the most by getting false-positived, maliciously or otherwise. Which is a real shame, because to the extent AI had any mundane appeal to me, it would have mainly been for just such "privileged" conversations. I don't know how much longer Signal's E2E will hold out... I'm already nervous about such chats with UK friends. (In some ways the upfront non-security of basic phone texting and emailing is honestly refreshing, because one doesn't use those channels with any particular expectation of privacy.)
Still mad at Newsom for SB1047. Genuinely unsure what he'll do this time. Recent non-AI maneuverings for obvious 2028 ambitions have been...weird.
Re: the paranoia and the EA conspiracy. Perhaps they believe that we, right here and now, discussing AI safety in the open, are the conspiracy?
maybe the real conspiracy is the friends we made along the way
For the record, I am a member of a vast conspiracy of people who care about their children.
I don't think this is correct:
> The talk about burden on ‘small developers’ when to be covered by SB 53 at all you now have to spend a full $500 million in training compute, and the only substantive expense (the outside audits) are entirely gone.
When I look at what I believe is the bill, it says (https://legiscan.com/CA/text/SB53/id/3268028):
> (j) “Large frontier developer” means a frontier developer that together with its affiliates collectively had annual gross revenues in excess of five hundred million dollars ($500,000,000) in the preceding calendar year.
So my reading is that what you have is incorrect. Beyond that, though, I don't really understand the point of this threshold, given that a model must be trained with over 10^26 FLOPs to even be relevant (LLMs tell me that many FLOPs costs about $100M, given competitive H100 pricing of roughly $1/hour). Does this encourage companies to train but not actually produce revenue, or possibly to spin off LLM training into a separate corporate entity?
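For what it's worth, that ~$100M figure roughly checks out. Here's a minimal back-of-envelope sketch; the peak throughput, utilization, and rental price are my own assumptions, not figures from the bill:

```python
# Back-of-envelope: cost to train a model at the SB 53 relevance
# threshold of 1e26 FLOPs. All inputs below are assumptions.

TOTAL_FLOPS = 1e26          # SB 53 compute threshold
H100_PEAK_FLOPS = 1e15      # ~1 PFLOP/s per H100 (BF16 dense, approximate)
UTILIZATION = 0.35          # assumed fraction of peak actually achieved
PRICE_PER_GPU_HOUR = 1.00   # assumed competitive H100 rental, $/hour

effective_flops = H100_PEAK_FLOPS * UTILIZATION   # FLOP/s actually delivered
gpu_seconds = TOTAL_FLOPS / effective_flops
gpu_hours = gpu_seconds / 3600
cost = gpu_hours * PRICE_PER_GPU_HOUR

print(f"GPU-hours: {gpu_hours:.2e}")    # ~7.9e7 GPU-hours
print(f"Cost: ${cost / 1e6:.0f}M")      # ~$79M
```

Nudge utilization down or the hourly rate up and you land right around $100M, so the LLMs' estimate seems plausible, and the compute threshold alone already implies roughly nine-figure training runs before the revenue test even enters the picture.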