19 Comments

Sadly I think it will take an actual AI-caused humanitarian disaster for people to understand the risks. It is very hard for normies to buy arguments that any future technology is going to be dangerous when its current incarnation is, well, not especially dangerous.

Yeah - unfortunately, the technology is "intelligence", and the current incarnation, human intelligence, is well-known to be quite dangerous.

Given the 10x-100x increase in output per request that reasoning models demand, I'm surprised not to hear more about inference-specific hardware like Groq. People only want to talk about Nvidia.

I had fun asking Deepseek R1 to write the script of a science fiction movie about an AI escaping containment. (Claude doesn’t usually fall for that). A most entertaining tale of how a hedge fund built a reasoner in order to make money on the stock market, but it turns out that the reasoner has bigger plans.

The TEMPEST-proofed bunker in which R1’s movie version of me is doing this kind of research is much more stylish than our actual lab, although we do really have a room full of RF equipment for TEMPEST attacks. I was taking notes, yes.

This is pretty much a remake of Colossus: The Forbin Project for the 2020s, of course, even though R1 cites Alex Garland’s Ex Machina as a more direct influence.

“Nobody would be so stupid as to …” etc.

Any way that you can get through to Thompson? I asked a question about this on Sharp Tech in late 2023, but Ben shot it down pretty hard. It might be worth another attempt.

“There is no ‘this makes the price go down today and then up next week’ unless you’re very much in the ‘the EMH is false’ camp.”

There is a rational expectations economic model that does make this prediction (for interest rates and monetary policy shocks): the Dornbusch overshooting model.

The 20% drop in NVDA stock could represent a rational temporary overshoot where:

- The market immediately prices in margin compression

- The offsetting volume increase from the Jevons effect will take time to materialize

- The equilibrium price will likely settle higher than the overshooting point but potentially lower than the original price

Not saying I believe this, just putting it out there for people to react to if they think it’s interesting.
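For concreteness, the overshoot story in the bullets above can be sketched as a toy price path. All numbers here are hypothetical, chosen only to show the shape of the path, and are not calibrated to NVDA in any way:

```python
# Illustrative Dornbusch-style overshoot applied to a stock price:
# an immediate jump below the new equilibrium, then gradual convergence
# back up to a level below the original price. Numbers are hypothetical.

def price_path(p0=100.0, overshoot=80.0, equilibrium=92.0,
               speed=0.3, periods=20):
    """Price jumps instantly to the overshoot level, then converges
    geometrically toward a new equilibrium below the original price."""
    path = [p0, overshoot]  # original price, then the immediate ~20% drop
    p = overshoot
    for _ in range(periods):
        p += speed * (equilibrium - p)  # partial adjustment each period
        path.append(p)
    return path

path = price_path()
print(path[1])   # 80.0, the immediate overshoot
print(path[-1])  # approaches 92.0, lower than the original 100.0
```

The key qualitative feature is that the trough is below the long-run level, so a "down today, partially back up later" path can be consistent with rational pricing under this model.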

I mean, you can have a nonzero effect, but it can't exceed the default return on capital in a predictable way, which in this context and time frame is not very high.

There is technically no legal obstacle to Deepseek engineers coming to the US. I'm certain they would receive EB-1 green cards within a few months.

But there would be racism, cries of espionage, and potentially political persecution - see Feng Tao, a Chinese American scientist prosecuted by the government (a case that started under the previous Trump administration) and eventually cleared.

Under the current social and political environment, the US is a lot less appealing to talented Chinese researchers than it used to be. That's a hard problem to solve.

So from what I understand, the argument for Nvidia stock crashing is not that Nvidia chips are no longer useful. The argument is that if you can use a frontier model to train a derivative model that's 95% as good for 5% of the cost, then every time someone pushes out a new frontier model they will immediately get their lunch eaten by knockoffs and there's no way to prevent this. This is a technical question, and I don't know enough to say if deepseek's release necessarily confirms it. Eliezer was talking about something similar back in 2023. https://x.com/ESYudkowsky/status/1635577836525469697

So Nvidia is still selling the shovels, and there's still gold in the hills, but there's 1000 bandits standing around waiting to descend on the next person to find gold. Until something is done about the bandits, nobody is going to bother digging for gold. They're certainly not going to pay top dollar for shovels.

> Ten years to figure this out? If they’re lucky, they’ve got two. My guess is they don’t.

I don't see why this is all that technically difficult. It would be straightforward to write a "ministry of truth" prompt to have the LLM remove or edit mentions of forbidden topics in the pretraining data, then retrain. Of course it would require frequent iteration and possibly retraining, but it seems very likely the PRC already has quite good ML models that can identify thoughtcrime they could apply here.
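A minimal sketch of that filtering pass, with a keyword matcher standing in for the ML classifier. The topic list and function names here are hypothetical placeholders, not any real system:

```python
# Hypothetical pretraining-data censorship pass, as described above.
# A real system would use an ML classifier; a keyword match stands in here.

FORBIDDEN_TOPICS = ["topic_a", "topic_b"]  # placeholder list of banned topics

def is_forbidden(document: str) -> bool:
    """Stand-in for a 'thoughtcrime' classifier over pretraining documents."""
    text = document.lower()
    return any(topic in text for topic in FORBIDDEN_TOPICS)

def filter_pretraining_corpus(documents):
    """Drop every flagged document; the surviving corpus is used to retrain."""
    return [doc for doc in documents if not is_forbidden(doc)]

corpus = ["safe text", "mentions topic_a here", "more safe text"]
print(filter_pretraining_corpus(corpus))  # → ['safe text', 'more safe text']
```

The iteration the commenter mentions would be rerunning this pass (and retraining) whenever the forbidden-topic list changes.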

If we are discussing SF culture and mind-sets regarding AI, mine would be "I Have No Mouth, and I Must Scream" (Harlan Ellison) and "A Plague of Angels" (Sheri S. Tepper).

The latter is quite optimistic, I suppose, even though most humans are dead. The former, let's say I read that far too young.

RIP Harlan Ellison.

Sputnik moment is pretty silly imho but still better than google moment. Sputnik is better because the effect of Sputnik was mainly on policymakers and government, not industry (except secondarily to the first). Whereas google came out with a new product that in a rather short time destroyed the opposition and completely remade the landscape: that is not this.

I also just want to register my skepticism about a Chinese company “independently coming up with” some of what were apparently OpenAI’s secrets. I suspect (I hope!) that the weights themselves are better guarded, but surely, surely, there is a way to get Trump to mandate tripling the security at these labs?

The Chinese interpretation is nonsense; the second sentence is structured as an elaboration of the first.

It would be more sensible to accuse him of lying than to misinterpret it this way.

As a way of getting Deepseek to say things, I try getting it to complete a movie script. So, ok, I think to myself, what should the start of the movie script I put in the prompt look like?

It would be just too clichéd for the opening scene in the movie to have a television on in the background with the President of the United States making some kind of announcement about artificial intelligence. But … that really happened. We appear to be in that science fiction movie.

> “It continues to astound me that such intelligent people can think: Well, there’s no stopping us creating things more capable and intelligent than humans, so the best way to ensure that things smarter than more capable than humans go well for humans is to ensure that there are as many such entities as possible and that humans cannot possibly have any collective control over those new entities.”

There is no such thing as human collective control. What that means in practice is a handful of people making decisions in the name of humanity, while facing near-irresistible temptation to exploit their position for selfish gain, that is if they weren’t already evil.

> “A panic akin to the Missile Gap leading into a full jingoistic rush to build AGI and then artificial superintelligence (ASI) as fast as possible, in order to ‘beat China,’ without having even a plausible plan for how the resulting future equilibrium has value, or how humans retain alive and in meaningful control of the future afterwards.”

Humans have never had meaningful control of the future. Sure, there've been a few cases where individuals decided between vastly different collective outcomes, e.g. Kennedy and Khrushchev not escalating the Cuban Missile Crisis, but even in that case both leaders had approximately zero ability to dismantle the system of nuclear deterrence which was (and still is) threatening civilization with destruction.

It’s grim, but I can’t understand how everyone else doesn’t see this once you abandon wishful thinking. Maybe it really will be for the best if the tech is as widely distributed as possible; at least that might quell the formation of oppressive hierarchies like the ones we got in the industrial revolution.
