22 Comments

The hope for sane regulation remains. At least the strike appears to be working

FYI The link to "Fine tune Llama-2 on anyone’s text and see what happens" doesn't seem to work for me.

Can they make an AI that writes your term paper for you and then generates a photorealistic six-hour video of you writing it yourself?

Aug 17, 2023·edited Aug 17, 2023

This link: "From MR’s links, American states as real people generated by MidJourney" sends you to

https://www.dailybee.com/en/bloopers-sports-time?ly=native_one&abtv=072b0948-63ec-4ed5-b226-982a92b04918

Also: "Any investor not concerned about increasing existential risk would kill to invest in OpenAI."

And might end up killing everyone to invest in OpenAI, well played.

Jack Clark had great strike zone judgement. Nice to see he found a new job.

Regarding the Epoch timeline for transformative AI, my reading (which could be just me) is not that they're saying it's all just a matter of ramping up compute, but rather that the other necessary improvements come as part of the overall scaling process and timeline. Thus compute is a good predictive metric to use even if the model doesn't explicitly account for architecture and whatnot. Predicting the future is hard, of course, but this doesn't seem inherently unreasonable to me. I wouldn't be surprised if architecture innovations have basically scaled with compute applied to AI.

Re bogus AI, of course there is some. We've had bogus AI for the last 5-10 years. Many of the older "AI" companies out there are built on chewing gum and if statements, or pick your metaphor. I could imagine that the generative AI wave actually reduces the percentage of fraudulent startups, per Graham's comment about it making problems finally tractable and thus reducing the incentive to obfuscate your solution to an intractable problem. But yeah, lots of money will lead to BS.

100 points for the System Shock joke.

I love AI Town, and want it to get bigger and have many clones. For testing purposes.

Re: governmental cooperation -> AI regulation. I'm a crazy libertarian, but I admit that international cooperation around nukes has actually been kinda successful... at least later in the 20th century, once everyone got convinced of the existential risk. Early on, it was terrible: Russia stole/raced to get nukes and then raced to radically increase their power, and so did the US. They were not convinced, in 1945-1950, that "us trying hard to get nukes might have existential negative impacts for us," because no one thought that yet. They just thought that if the US could blow up 10-15 of their cities or armies, they would lose a war, so they wanted the ability to blow up 20-25 of ours. No one was thinking of nuclear winter or MAD yet.

There has to be successful, comprehensive persuasion about that existential risk first; otherwise governmental regulation will be informed by state competition -> arms race, because that's the default context. Right now we're at the 1946 stage: one country has the (still relatively weak) Thing, and everyone else thinks they want it (in many cases for totally positive reasons) but doesn't know that they need to be careful or slow down, or even why. We need to skip to the ~1970s era as quickly as possible. I still don't think theoretical arguments will convince enough people, any more than explaining in 1946 how nukes might become thermonuclear, continent-irradiating, nuclear-winter/extinction-level events would have slowed down Russian scientists. What did convince people of that? Showing video of large swathes of Siberian tundra and Bikini Atoll being turned into radioactive wastelands by tests of a single weapon, and pointing out that nations had thousands of those weapons. Sorry for always saying the same thing, but this is what AI Doom people need to win the argument, and to convince people and governments to cooperate, and why.

GPT-4 performing better than mixed or no use reminds me of when I discovered in high school that I could get the best outcomes in English class by deliberately not reading the assigned material (even in the rare cases it was something I was interested in and wanted to read) and relying solely on SparkNotes.

Also, on the topic of LLMs in video games, pre-registering here a completely random out-of-the-blue totally unfounded prediction that Starfield is going to be found to have stuff written by an LLM and get completely roasted for it.

Suspect Bethesda is just smart/lame enough to try this, but either not smart enough to have read the room and see what a disaster it will be if people figure it out, or reckless enough to want the attention they'll get for it without caring that it's all negative.

I just downloaded the app because the Gmail newsletter always cuts off, and even when you open it in a new window, clicking any link is fatal.

Great content as always

"How awful? Existentially awful, you see, there are no words such folks will let be."

Even the word doom is under threat:

https://pca.st/hofrhhyy

I wrote about the less interesting piece of the Generative Red Team thing that Zvi cites -- the basic LLM red-teaming lessons I learned at DEF CON -- in case having a citable piece about it is of use to anyone: https://davekasten.substack.com/p/ai-needs-wizards

"When used properly, I continue to strongly believe LLMs strongly contribute to human learning, as they have to my own."

You will be an outlier; Grimes is right to worry.
