
Saw very surprising anti-AGI commentary from... Tucker Carlson last week: https://youtu.be/gr4E0jEjQMM?t=3270. In his interview with Bryan Johnson he said that in his opinion we should destroy all AI datacenters and stop AI development before it's too late. Bryan disagrees. In any case it seems like Eliezer-like discourse is now reaching the mainstream.


I think I might be considered "unhinged" in my defense of my children growing up to have good lives, but on the other hand, not wanting to die and wanting us to stay human seems very reasonable.


I can also confirm Zen and the Art of Motorcycle Maintenance is worth the time if only to learn that an empty aluminum beer can provides excellent shim stock. If you happen to be in need of shim stock it’s worth its weight in gold.


"Aella: Nah, payment processors also can prevent cashing out. Iirc this is how @SpankChain crypto sex worker payment system got shut down."

I don't really buy it. Darknet drug markets have been operating for more than a decade now despite constant scams, hacks, government prosecution, DDoS attacks, etc. I think the real answer is that AI porn is not that good yet, since people want video rather than images alone, so you want a model on the level of Sora to make good money. I'd bet things will change once it becomes feasible for someone 10x smaller than OpenAI to train a video model as good as Sora.


"Do some of the claims about future expectations sound crazy, such as the one that was quoted to me? Yes, they would from the outside. But that is because the outside world does not understand the situation. "

They also sound exaggerated once you examine common assumptions about AGI, such as that AI agents must have a will, or that the different things called "alignment" are all the same problem. I challenged these assumptions in https://medium.com/@jan.matusiewicz/agi-safety-discourse-clarification-7b94602691d8


Just an FYI, Marques (at least according to how his YouTube channel is spelled) doesn't have a c before the q.


“Quick name three people with the same birthday as you.”

I happen to be 4 days younger than Vladimir Putin. I’ve told this to ChatGPT and Microsoft’s LLM and asked what my birth date is. They both give a date making me 4 days *older* than Putin. They even show their math.
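For what it's worth, the arithmetic they botch is a one-liner (a sketch in Python; Putin's birth date, 7 October 1952, is public record, and the specific wrong date shown is the one implied by "4 days older"):

```python
from datetime import date, timedelta

# Putin's birth date is public record: 7 October 1952.
putin = date(1952, 10, 7)

# "4 days younger" means born 4 days *later* than Putin:
correct = putin + timedelta(days=4)  # 1952-10-11
# The models instead return a date 4 days *earlier*, i.e. 4 days older:
wrong = putin - timedelta(days=4)    # 1952-10-03

print(f"correct: {correct}, what the models gave: {wrong}")
```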

I haven’t earnestly studied AI since the early aughts, when neural nets meant to detect camouflaged military equipment were accidentally learning to distinguish cloudy days from bright days, so I haven’t been in the loop in a serious way for a while. But this example makes me think I’m looking at a cool parlor trick that *understands* nothing.


The biggest productivity increase due to LLMs is that they have completely democratized writing scripts. An LLM can't maintain a large codebase or anything, but if there is a project you are interested in pursuing that requires writing a 200-line program, the LLM makes that possible.
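For instance, the kind of one-off utility now in reach looks something like this (a hypothetical example in Python, not anything specific from the comment; the point is that an LLM can typically draft something of this shape from a single prompt):

```python
import hashlib
import sys
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def main(root: str) -> None:
    """Report duplicate files under a directory by content hash."""
    seen: dict[str, Path] = {}
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            digest = file_hash(p)
            if digest in seen:
                print(f"duplicate: {p} == {seen[digest]}")
            else:
                seen[digest] = p

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")
```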

I think people are not thinking creatively enough about what they could possibly achieve given this resource.


I wonder about the effectiveness of coalescing around something like https://theaipi.org/ ?

Perhaps in coordination with PauseAI efforts. They appear well designed with some excellent participants like Nik Samoylov.

If we need to find a hill to die on, can't we find some good folx to die with us?


"in the sense of saying that we ‘should’ in the Jurassic Park sense be building narrower AIs, even if that is harder, because those narrow things have better risk-reward and cost-benefit profiles. And yes, I agree" I miss the days of 2019 when the general view was that even pursuing *narrow* AI was highly risky due to the possibility of general capability and agency being a path toward the most efficient/best performing solution. Who'd have thought one could experience nostalgia for paperclip maximisers?


On AI porn... if we're talking about text, then AI Dungeon was notorious for users using it to write erotica, and since then we have figg ai, etc. The cat is out of the bag. I am not a lawyer, but I am given to understand that the AI Dungeon guys were worried text erotica might be illegal in Canada. Don't think anyone's suggesting text erotica is illegal in the US, the land of the First Amendment.

AI-generated erotic images, on the other hand, definitely can be illegal in many jurisdictions. The AI companies may be holding back on allowing AI erotic image generation out of fear of going to jail.


"Our models do not produce porn, fine tunes and loras on those models produce porn?"

That would make sense, at least from Stability's point of view:

"Well, maybe some people are going to end up in jail, but we, personally, are not going to end up in jail."


>Daniel’s goal is clearly to minimize AI existential risk. If AGI is coming that quickly, it is probably happening at OpenAI. OpenAI would be where the action is, where the fate of humanity and the light cone will be decided, for better or for worse. It seems unlikely that he will have higher leverage doing something else, within that time frame, with the possible exception of raising very loud and clear alarm bells about OpenAI.

Agreed, but doesn't this fly in the face of what you emphasized on the 80k podcast about the #1 rule for xrisk mitigation peeps being to not work at the top labs?


> I don’t think CEV will work, but setting that aside: No? Language does not do this, indeed language makes it impossible to actually specify anything precisely, and introduces tons of issues, and is a really bad coding language for this?

I think the specific challenge that you're underrating here is the "bitter lesson": http://www.incompleteideas.net/IncIdeas/BitterLesson.html

> The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world.

Human values, and a flourishing human society, are much more complex than "space" and "objects".

Language might be an objectively terrible way to specify what you want. But it is also the only "programming language" that we have, given the bitter lesson. So yes, you may be trying to set limitations on an ASI using vague and imprecise tools to specify something you only partially understand.
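To make that concrete: today, "programming" a model's values really does mean handing it prose (a minimal sketch assuming the OpenAI Python client's chat-completions interface; the model name and instruction wording are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "specification" of values is just English, with all its vagueness:
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Act only in ways a flourishing human society would endorse."},
        {"role": "user", "content": "Plan my week."},
    ],
)
print(response.choices[0].message.content)
```

Every word of that system message hides exactly the "tremendously, irredeemably complex" content the bitter lesson warns about.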

If this seems like it would be a catastrophically bad idea, then maybe don't build the ASI.

Imagine how alignment would work (or not) if "the actual contents of minds are tremendously, irredeemably complex". Imagine what would happen if we can't develop tools significantly more precise than language.

Then take those risks seriously as a real planning scenario.


I suspected the problem with the 'three famous people' birthdate question was the prompt. "share the exact same birth date and year" seems more confusing than just asking for three famous people born on the same day.

I tried this prompt: "I'm looking for an example of a time when three famous people were born on the same exact day. Can you tell me three famous people who were born on the same day?" on Claude 3 Opus. On the first try, it gave a plausible answer, but it was wrong about the date Jenna Fischer was born. When I pointed that out, it correctly* said "Tom Cruise, Thomas Gibson, and Hunter Tylo were all born on July 3, 1962."

So even a clearer prompt won't fix the problem of hallucinations.

* at least if you trust Google's extracted birth date info, which I'm not sure I do
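If anyone wants to re-run this sort of check mechanically rather than eyeballing search results, a trivial sketch (the reference dates below are just the ones cited above, so they inherit whatever errors the extraction has):

```python
from datetime import date

# Reference birth dates, per the search results cited above (trust accordingly).
reference = {
    "Tom Cruise": date(1962, 7, 3),
    "Thomas Gibson": date(1962, 7, 3),
    "Hunter Tylo": date(1962, 7, 3),
}

# The model's claim: all three share this birthday.
claimed = date(1962, 7, 3)

mismatches = {name: d for name, d in reference.items() if d != claimed}
print("claim checks out" if not mismatches else f"mismatches: {mismatches}")
```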
