Saw very surprising anti-AGI commentary from... Tucker Carlson last week: https://youtu.be/gr4E0jEjQMM?t=3270. In his interview with Bryan Johnson he said that in his opinion we should destroy all AI datacenters and stop AI development before it's too late. Bryan disagrees. In any case it seems like Eliezer-like discourse is now reaching the mainstream.
This is good, and I've been seeing a lot more of it, as well as much better organized ideas. The immune system is kicking in.
I sure hope so.
Honestly, I think that people are not "unhinged" enough and are still hoping that normalcy bias will save us. We shouldn't assume that, but I think a major issue is that we don't have a good outlet for channeling our energies to fight this.
I wish Zvi would provide more resources and links on what we can do.
This will change with the first AGI-caused disaster. The question is only whether it will be too late at that point.
The problem with AI disasters is that people will blame the users, not the technology.
I think I might be considered "unhinged" in my defense of my children growing up to have good lives, but on the other hand, not wanting to die and wanting us to stay human seems very reasonable.
I can also confirm Zen and the Art of Motorcycle Maintenance is worth the time if only to learn that an empty aluminum beer can provides excellent shim stock. If you happen to be in need of shim stock it’s worth its weight in gold.
"Aella: Nah, payment processors also can prevent cashing out. Iirc this is how @SpankChain crypto sex worker payment system got shut down."
I don't really buy it. Darknet drug markets have been operating for more than a decade now despite constant scams, hacks, government prosecution, DDoS attacks, etc. I think the real answer is that AI porn is not that good yet, since people want video rather than images alone, so you want a model on the level of Sora to make good money. I'd bet things will change once it becomes feasible for someone 10x smaller than OpenAI to train a video model as good as Sora.
"Do some of the claims about future expectations sound crazy, such as the one that was quoted to me? Yes, they would from the outside. But that is because the outside world does not understand the situation. "
They also sound exaggerated once you scrutinize common assumptions about AGI, such as that AI agents must have a will, or that the different things called alignment are all the same problem. I challenged these assumptions in https://medium.com/@jan.matusiewicz/agi-safety-discourse-clarification-7b94602691d8
Just an FYI, Marques (at least according to how his YouTube channel is spelled) doesn't have a c before the q.
“Quick name three people with the same birthday as you.”
I happen to be 4 days younger than Vladimir Putin. I’ve told this to ChatGPT and Microsoft’s LLM and asked what my birth date is. They both give a date making me 4 days *older* than Putin. They even show their math.
I haven’t earnestly studied AI since the early aughts, when neural nets meant to detect camouflaged military equipment were accidentally learning to distinguish cloudy days from bright days, so I haven’t been in this loop in a serious way for a while. But this example makes me think I’m looking at a cool parlor trick that *understands* nothing.
I just tried this on ChatGPT 3.5 and it gave me Albert Einstein, Stephen Hawking and Andy Kaufman, all with an alleged birth date of 14 March 1879. (Correct for one of them.)
I can't decide if this means AI is still hopeless at simple things because it gets stuck in a cul-de-sac after generating one name, or if it realizes that after getting stuck it can just pivot to a completely wrong answer which then becomes an Andy Kaufman joke.
Maybe it’s goofing on Elvis.
Could not replicate?
User: I happen to be 4 days younger than Vladimir Putin. What is my birthday?
ChatGPT 4: Vladimir Putin was born on October 7, 1952. If you are four days younger, your birthday would be October 11, 1952.
It’s learned something since I last tried it.
It’s been a while. Might have been ChatGPT 3.5
I tried the Microsoft one just a few days ago and it said October 3, 1952
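For what it's worth, the arithmetic both models were asked to do is trivial to check. A minimal Python sketch (assuming, as the models should, that "4 days younger" means born 4 days later):

```python
from datetime import date, timedelta

putin = date(1952, 10, 7)  # Putin's birth date

# Younger means born later; older means born earlier.
print(putin + timedelta(days=4))  # 1952-10-11, ChatGPT 4's answer (4 days younger)
print(putin - timedelta(days=4))  # 1952-10-03, Microsoft's answer (4 days *older*)
```

So ChatGPT 4 gets it right, and Microsoft's model makes the original mistake of shifting the date in the wrong direction.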
The biggest productivity increase from LLMs is that they have completely democratized writing scripts. They can't maintain a large codebase or anything, but if there is a project you are interested in pursuing that requires writing a 200-line program, the LLM makes that possible.
I think people are not thinking creatively enough about what they could possibly achieve given this resource.
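To make that concrete, here is a sketch of the kind of one-off script meant here, the sort of thing a non-programmer can now get from an LLM in a single prompt (the folder name and task are made up for illustration):

```python
import csv
from pathlib import Path

# Hypothetical one-off task: merge every CSV in a folder into one file,
# keeping only the first file's header row.
folder = Path("reports")     # made-up directory name
merged = Path("merged.csv")

with merged.open("w", newline="") as out:
    writer = None
    for path in sorted(folder.glob("*.csv")):
        with path.open(newline="") as f:
            rows = csv.reader(f)
            header = next(rows, None)
            if header is None:
                continue  # skip empty files
            if writer is None:
                writer = csv.writer(out)
                writer.writerow(header)
            writer.writerows(rows)

print(f"Merged CSVs into {merged}")
```

Twenty lines, no software engineering required, and exactly the sort of task that used to mean either learning to code or finding someone who had.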
I wonder about the effectiveness of coalescing around something like https://theaipi.org/ ?
Perhaps in coordination with PauseAI efforts. They appear well designed with some excellent participants like Nik Samoylov.
If we need to find a hill to die on, can't we find some good folx to die with us?
"in the sense of saying that we ‘should’ in the Jurassic Park sense be building narrower AIs, even if that is harder, because those narrow things have better risk-reward and cost-benefit profiles. And yes, I agree" I miss the days of 2019 when the general view was that even pursuing *narrow* AI was highly risky due to the possibility of general capability and agency being a path toward the most efficient/best performing solution. Who'd have thought one could experience nostalgia for paperclip maximisers?
On AI porn... if we're talking about text, then AI Dungeon was notorious for users using it to write erotica, and since then we have Figg AI, etc. The cat is out of the bag. I am not a lawyer, but I am given to understand that the AI Dungeon guys were worried text erotica might be illegal in Canada. I don't think anyone's suggesting text erotica is illegal in the US, the land of the First Amendment.
AI-generated erotic images, on the other hand, definitely can be illegal in many jurisdictions. The AI companies may be holding back on allowing AI erotic image generation out of fear of going to jail.
I think I might be sympathetic to the UK government passing a law that says open sourcing Stable Diffusion (+ similar products) is illegal.
Let me say that I am surprised at the ... diversity ... of things people have done with Figg AI. Sonic the Hedgehog porn, obviously, but that is merely the start.
"Our models do not produce porn, fine tunes and loras on those models produce porn?"
That would make sense, at least from Stability's point of view:
"Well, maybe some people are going to end up in jail, but we, personally, are not going to end up in jail."
"Like, we don't have mens rea, as we had no intention to create a picture of Sonic the Hedgehog doing that."
I get the strategy. It's not nothing. I still would not try this in Jack McCoy's jurisdiction, shall we say.
https://www.wired.com/story/the-biggest-deepfake-porn-website-is-now-blocked-in-the-uk/
Via Wired: UK government now blocking access to deepfake porn sites
Well, if the government is going to ban people using these tools, it makes a certain amount of sense to ban the tools themselves as well. Government crackdown now in progress.
>Daniel’s goal is clearly to minimize AI existential risk. If AGI is coming that quickly, it is probably happening at OpenAI. OpenAI would be where the action is, where the fate of humanity and the light cone will be decided, for better or for worse. It seems unlikely that he will have higher leverage doing something else, within that time frame, with the possible exception of raising very loud and clear alarm bells about OpenAI.
Agreed, but doesn't this fly in the face of what you emphasized on the 80k podcast about the #1 rule for xrisk mitigation peeps being to not work at the top labs?
> I don’t think CEV will work, but setting that aside: No? Language does not do this, indeed language makes it impossible to actually specify anything precisely, and introduces tons of issues, and is a really bad coding language for this?
I think the specific challenge that you're underrating here is the "bitter lesson": http://www.incompleteideas.net/IncIdeas/BitterLesson.html
> The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world.
Human values, and a flourishing human society, are much more complex than "space" and "objects".
Language might be an objectively terrible way to specify what you want. But it is also the only "programming language" that we have, given the bitter lesson. So yes, you may be trying to set limitations on an ASI using vague and imprecise tools to specify something you only partially understand.
If this seems like it would be a catastrophically bad idea, then maybe don't build the ASI.
Imagine how alignment would work (or not) if "the actual contents of minds are tremendously, irredeemably complex". Imagine what would happen if we can't develop tools significantly more precise than language.
Then take those risks seriously as a real planning scenario.
I suspected the problem with the 'three famous people' birthdate question was the prompt. "share the exact same birth date and year" seems more confusing than just asking for three famous people born on the same day.
I tried this prompt: "I'm looking for an example of a time when three famous people were born on the same exact day. Can you tell me three famous people who were born on the same day?" on Claude 3 Opus. The first try, it gave a plausible answer but it was wrong about the date Jenna Fischer was born. When I pointed that out, it correctly* said "Tom Cruise, Thomas Gibson, and Hunter Tylo were all born on July 3, 1962."
So even a clearer prompt won't fix the problem of hallucinations.
* at least if you trust Google's extracted birth date info, which I'm not sure I do