🩵 thx Zvi. Have you thought about using ChatGPT voice instead of the built-in? Might be better.

If you are looking for a high-quality audio conversion, I produce a podcast of Zvi's posts through ElevenLabs, with multiple voices to differentiate the different quoted sources:

https://open.substack.com/pub/dwatvpodcast/p/ai-88-thanks-for-the-memos

Zvi, I have some involvement in the Texas bill. Can you message me? I may be able to help.

It's a laboratory-of-democracy case, right? I feel like even the most draconian AI-blocking bill doesn't matter much, because there are 49 other US states and Texas is not a center for AI startups. There are some startups in Texas, but they can all leave for the Bay Area or NYC, or just not deploy their products in Texas.

SB 1047 was important because it takes time for startups to leave California, and that delays the potential singularity (or fizzle) by the amount of time required to move.

For the laboratory-of-democracy model to work we need control groups: states like Texas that make using AI illegal, so we can compare the consequences head to head. Will AI-positive states see extreme economic growth, leaving AI-banning states to decline faster and harder than Detroit? Or will they see mass unemployment from using AI in everything, and a steady stream of incidents where AI fails and kills workers and customers?

I don't know, but it seems like valuable information, so let's try it out and find out.

An AI genocide of humanity knows no borders.

If that's your threat model, Texas is completely irrelevant. In that case you should probably be for some group, most likely the government, pushing forward as fast as possible to develop AI strong enough to be dangerous in an isolated lab, proving to everyone, especially China, that the threat exists. Texas is a waste of your time if this is what you genuinely believe.

California was a different case. SB 1047, or some other meaningful legislation that leads to a slowdown as AI labs leave the state or lets China pull ahead, buys us a few extra months to live under your threat model.

I'm not sure your model of AI risk matches Zvi's model of AI risk.

An anecdote relevant to your comment about how something so good for productivity can be so slow to be adopted. I'm an avid AI user and work in a very quantitative, tech-forward, and young department. Almost everyone here who's adopted AI has done it because I've walked them through using it for this or that problem. After, I dunno, a week of this kind of collaboration, something suddenly clicks and they realize they love it. At that point they're off to the races and are way more than 25% more productive.

I mention the demographics because my department's profile is basically optimal for AI adoption. But even here, adoption still moves at the speed of one person convincing another. When I talk to other departments about this, I often have conversations like: "I will gladly and patiently teach you to use this tool, and I can very credibly say that it will save you hours per week." "Eh." People just be like that, I guess.

I was recently in hospital, and basically everyone I interacted with checked that I was the right patient and asked me to confirm that the relevant parts of my medical record in the computer were correct. I was asked to confirm what medications I am on at least five times. I entirely understand why that's the protocol, and I agree with doing it. The best one was being asked to confirm things just before the intravenous sedation went into my arm (that being the last possible moment they could ask me anything at all).

The whole system probably has a certain amount of redundancy against AI generated garbage in the online medical record.

(And I can merely guess at the medical negligence lawsuits that led to current protocols).

Sure, that might catch some of the clerical errors it makes in the paperwork, but it won't have those kinds of safeguards on the operating table when it gets there.

Ultimately what matters is not whether risks are possible but what the odds are. As Zvi notes, Whisper hallucinations may not actually be worse than a physician dictating later and getting two patients confused.

“If AI is not a huge deal over the next 20 years, I presume either we collectively got together and banned it somehow, or else civilization collapsed for other reasons.”

I feel like it’s common for tech to have a lot of hype and even a lot of investment, and then to end up not being a huge deal over the short-to-medium term.

Crypto, nuclear power, everything involving space, nanotech, VR, cloning/genetic engineering (until COVID vaccines), AI (for the entire history of AI research up to now)

Other stuff comes out and is genuinely really big for a while but then the world changes and in the end it’s a historical footnote. Mosquitoes develop resistance to DDT. Google Search gets overrun with SEO garbage. People get bored with the core gameplay loop for MMORPGs.

I can imagine AI going like that over the next 20 years. Maybe it needs more to keep scaling than what we can provide. Or maybe AI systems get kinda popular for a bit, but then fade when we end up being better at subverting them than securing them.

Praxis does take time, for sure: VR's infancy was in the late 1970s, and it has suffered repeated waves of being overhyped (fairly so). AI is going to be different. Even if capabilities don't develop any further beyond where they are today, it will have an outsized impact on society beyond any other technology before it. Luckily, implementation is still bound by human adoption, which takes time, but of course another big fear is that this rate-limiting step might soon be obviated.

Great stuff as always, Zvi. That YouGov poll from the Center for Youth and AI does smell a bit like opt-in online polling to me, though. Many of your points still stand, however.

A 1% probability of your device being taken over has been a thing since the days of the earliest browsers. Sometimes this goes down a lot (a critical image-library vulnerability is patched); sometimes it goes up (a malicious actor uses a zero-day while paying for ad coverage on a high-profile keyword, or successfully commits a backdoor to a low-level utility library and isn't discovered quickly). I don't think LLM prompt injection by itself massively changes the attack surface over existing exploits. Running a system in autonomous agent mode does change the risk, because that allows multiple shots at the target.

Unfortunately, most people who use the Internet seem to have accepted that clicking on the occasional random link will let evildoers take over their machine and steal money from their bank account. This makes it less likely that the more flexible, adaptable kind of malware enabled by agents with a world model will get special consideration, even as over time the additional capabilities and potential for persistence make these systems harder to detect and defend against. Antivirus software that relies on pattern matching to trigger an alarm isn't going to help when the malware can adapt to the detector, dynamically work around it, or redirect the alarm.

That box meme? That's where we are headed, unless friction is added to the iterative agent loop.
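
For illustration, here is a minimal Python sketch of what that friction could look like: a human approval gate on any side-effecting tool call inside the agent loop. The tool names and the planner/executor callbacks here are hypothetical, not any particular framework's API.

```python
# Sketch: read-only tools run freely; anything side-effecting needs human sign-off.

SAFE_TOOLS = {"search", "read_page"}   # known read-only tools, no side effects

def approved(action: dict) -> bool:
    """Ask the human operator before any side-effecting step."""
    answer = input(f"Agent wants {action['tool']}({action['args']}). Allow? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent(task: str, plan_next_action, execute):
    """plan_next_action proposes the next step; execute runs it and returns an observation."""
    history = [task]
    while True:
        action = plan_next_action(history)          # model picks the next tool call
        if action["tool"] == "finish":
            return action["args"]
        # Default-deny: anything not known to be read-only requires approval.
        if action["tool"] not in SAFE_TOOLS and not approved(action):
            history.append("Action denied by operator.")
            continue
        history.append(execute(action))             # observation feeds the next iteration
```

The point is just that every side-effecting pass through the loop costs a human decision, which caps how fast an injected instruction can compound.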

One way to make this specific use case work is reference sources and a whitelist of trusted websites. Ironically, this is de facto a rollback to a prior era: your containerized AI agent isn't going to use anything but nonfiction with positive peer reviews, drawn from a set of websites hosting reference books, journals, and major news sources, to do its assigned work. No random websites.

Not fundamentally different from going to the library and only reading the nonfiction books and major newspapers they have there. Just thousands of times faster.
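
As a minimal sketch of that allowlist idea (the domains here are just examples standing in for a curated set):

```python
from urllib.parse import urlparse

# Example allowlist; a real deployment would curate this carefully.
ALLOWED_DOMAINS = {"en.wikipedia.org", "arxiv.org", "reuters.com"}

def fetch_allowed(url: str) -> bool:
    """Allow a URL only if its host is an allowlisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

assert fetch_allowed("https://arxiv.org/abs/2310.12345")
assert not fetch_allowed("https://evil.example/arxiv.org")  # allowlisted name in the path, not the host
```

Checking the parsed hostname rather than the raw URL string matters, since a hostile page can otherwise smuggle an allowlisted name into its path or query string.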

The timing of the AI memo is super weird and, frankly, extremely stupid. What the Biden administration should do is say loud and clear that they are interested in the complete and total acceleration of all AI research. That way, if Trump wins, he will immediately declare himself the AI safety president. If Harris wins, they can just change course immediately.

The memo reads to me like "acceleration and fuck China, but we are going to do this with so much bureaucracy that in practice nothing will be accomplished".

Basically "no-op, cost hundreds of millions".


Google's larger models are fantastic for one use case: video input! https://simonw.substack.com/p/video-scraping-using-google-gemini
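
For anyone curious what that looks like in code, a rough sketch using the google-generativeai Python SDK; the model name and the upload/polling details follow my reading of Google's docs and may vary by version.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the video, then poll until server-side processing finishes.
video = genai.upload_file("screen_recording.mp4")
while video.state.name == "PROCESSING":
    time.sleep(2)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [video, "Transcribe every figure shown on screen into a CSV table."]
)
print(response.text)
```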

>People toss off plausible-sounding stories about how, if X happens later, markets ought to behave like Y later; and then X happens; and then Y just doesn't happen. This happens ALL THE TIME. It happens to professional traders.

Eliezer is doing the thing where he speaks hyperconfidently with little to no supporting evidence.

A quick Google surfaced this report from professional traders:

"The US stock market has experienced its share of crisis events—from wars to political upsets, to many unforeseen human tragedies. The table below highlights 25 international crises that have occurred since 1940. During these events, the Dow Jones Industrial Average (DJIA) dropped by an average of about 6%. In all but four cases, the market returned to positive territory within six months of the end of each decline."

https://www.amundi.com/usinvestors/Resources/Classic-Concepts/Crisis-Events-and-the-US-Stock-Market

Here's another report: https://www.lpl.com/research/blog/middle-east-conflict-how-stocks-react-to-geopolitical-shock.html

I suspect Eliezer is falling prey to reporting bias. "Man bites dog" is more likely to make the news. "Catastrophe looms; stock prices rise" is more likely to make the annals of financial history.

[Note: I also disagree with Cowen for various reasons. I'm writing this because I'm much more worried about people reading these comments placing excessive trust in Eliezer.]
