22 Comments

The hope for sane regulation remains. At least the strike appears to be working.

FYI, the link to "Fine tune Llama-2 on anyone’s text and see what happens" doesn't seem to work for me.

It's a link to TweetDeck rather than Twitter - maybe it works if you have the TweetDeck app installed? Anyway, just removing "tweetdeck" from the URL fixes things. I think this was the intended link: https://twitter.com/alyssamvance/status/1690507889587200000
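
For anyone hitting the same thing, here's a minimal sketch of that fix in Python. I'm assuming the broken link used the tweetdeck.twitter.com host (my guess from the symptom; the exact original URL isn't shown above):

```python
# Minimal sketch: rewrite a TweetDeck URL into its plain Twitter equivalent
# by dropping the "tweetdeck." host prefix. The input URL is an assumed,
# illustrative example, not the exact broken link from the post.
def fix_tweetdeck_url(url: str) -> str:
    return url.replace("tweetdeck.twitter.com", "twitter.com", 1)

print(fix_tweetdeck_url(
    "https://tweetdeck.twitter.com/alyssamvance/status/1690507889587200000"
))
# -> https://twitter.com/alyssamvance/status/1690507889587200000
```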

author

Oh huh. I've never seen that before. Thanks for figuring that out!

Can they make an AI that writes your term paper for you and then generates a photorealistic six-hour video of you writing it yourself?

This link: "From MR’s links, American states as real people generated by MidJourney" sends you to

https://www.dailybee.com/en/bloopers-sports-time?ly=native_one&abtv=072b0948-63ec-4ed5-b226-982a92b04918

Also: "Any investor not concerned about increasing existential risk would kill to invest in OpenAI."

And might end up killing everyone to invest in OpenAI. Well played.

Jack Clark had great strike zone judgement. Nice to see he found a new job.

Regarding the Epoch timeline for transformative AI, my reading (which could be just me) is not that they're saying it's all just based on ramping up compute, but rather that other necessary improvements come as part of the overall scaling process and timeline. Thus compute is a good predictive metric even if it doesn't explicitly model architecture and whatnot. Predicting the future is hard, of course, but this doesn't seem inherently unreasonable to me. I wouldn't be surprised if architecture innovations have basically scaled with compute applied to AI.

Re bogus AI: of course there is some. We've had bogus AI for the last 5-10 years. Many of the older "AI" companies out there are built on chewing gum and if statements, or pick your own metaphor. I could imagine that the generative AI wave actually reduces the percentage of fraudulent startups, per Graham's comment about it making problems finally tractable and thus reducing the incentive to obfuscate your solution to an intractable problem. But yeah, lots of money will lead to BS.

100 points for the System Shock joke.

I love AI Town, and want it to get bigger and have many clones. For testing purposes.

Re: governmental cooperation -> AI regulation. I'm a crazy libertarian, but I admit that international cooperation around nukes has actually been kinda successful... at least later in the 20th century, once everyone got convinced of the existential risk. Early on, it was terrible - Russia stole/raced to get nukes and then raced to radically increase their power, and so did the US. They were not convinced, in 1945-1950, that "us trying hard to get nukes might have existential negative impacts for us," because no one thought that yet - they just thought that if the US could blow up 10-15 of their cities or armies, they would lose a war, so they wanted the ability to blow up 20-25 of ours. No one was thinking of nuclear winter or MAD yet.

There has to be successful, comprehensive persuasion of that existential risk first; otherwise governmental regulation will be informed by state competition -> arms race, because that's the default context. Right now we're at the 1946 stage: one country has the (still relatively weak) Thing, and everyone else thinks they want it (in many cases for totally positive reasons) but doesn't know that they need to be careful or slow down, or even why. We need to skip to the ~1970s era as quickly as possible. I still don't think theoretical arguments will convince enough people, any more than explaining in 1946 how nukes might become thermonuclear, continent-irradiating, nuclear-winter/extinction-level events would have slowed down Russian scientists. What did convince people of that? Showing video of large swathes of Siberian tundra/Bikini Atoll being turned into radioactive wastelands by tests of a single weapon, and pointing out that nations had thousands of those weapons. Sorry for always saying the same thing, but: this is what AI Doom people need to win the argument, and to convince people and governments to cooperate, and why.

I know I'm replying 3 weeks late, but I note that nuclear non-proliferation, while it has *delayed* things, has not been sufficient to stop the proliferation of nukes. When *North Korea* can get nukes, you have utterly failed; they're about as bad as a state actor can be.

Certainly it's not a great result for the non-proliferation plan, and I am as critical of these kinds of organizations as anyone, but compared to what people are asserting for it as an analogy to AI doom, I don't think that's "utterly failed." NK might be the worst actor, but it also hasn't used its nukes! There are way worse outcomes than what has happened.

GPT-4 performing better than mixed or no use reminds me of when I discovered in high school that I could get the best outcomes in English class by deliberately not reading the assigned material (even in the rare cases it was something I was interested in and wanted to read) and relying solely on SparkNotes.

Also, on the topic of LLMs in video games, pre-registering here a completely random out-of-the-blue totally unfounded prediction that Starfield is going to be found to have stuff written by an LLM and get completely roasted for it.

I suspect Bethesda is just smart/lame enough to try this, but either not smart enough to have read the room and see what a disaster it will be if people figure it out, or reckless enough to know how much attention they'll get for it without caring that it's all negative.

Counterpoint: if Starfield has LLM-written stuff in it, it will have no effect on sales. Gamers and writers have no unusual overlap. (And if they were put off by rote/basic writing, people wouldn't be playing a Bethesda game to begin with.)

author

The backlash to this would be immense if people found out. If it was successfully kept secret and they did a good job checking it, probably fine.

Iunno, I really don't think so. We're talking about the fandom that modded LLMs into Skyrim in the first place.

It'll mostly lead to a lot of memes, probably.

"Scripted by AI" / "Finally, a bug-free launch"

And I mean, again, this is the studio that made millions with quest design so repetitive that it's the most memorable thing about their game. "Another settlement needs our help!" Bethesda have been blatantly, overtly trying to achieve AI writing for several generations of games, and it doesn't seem to have hurt sales much, if at all. So if you've already accepted radiant quests, LLM quests could only be a step up.

edit: To clarify, I agree there would be "backlash", I just doubt it'd be from players.

Oh yeah, I absolutely wouldn't expect it to have any impact on sales, especially long term, but people are looking for reasons to be mad at any large company that uses AI tech, and also looking for reasons to be mad at Bethesda specifically for traditionally having overhyped, atrocious launches that they get to lean on the modding community to fix.

Plus, yeah, as you mention, I could definitely see them trying to pass it off as the next step in their Radiant scripting with, again, no sense of having read the room on it.

I just downloaded the app because the Gmail newsletter always cuts off, and even when you open it in a new window, any link-clicking is fatal.

Great content, as always.

"How awful? Existentially awful, you see, there are no words such folks will let be."

Even the word doom is under threat:

https://pca.st/hofrhhyy

I wrote about the less interesting piece of the Generative Red Team thing that Zvi cites (the basic LLM red-teaming lessons I learned from DEF CON), in case having a citable piece about it is of use to anyone: https://davekasten.substack.com/p/ai-needs-wizards

"When used properly, I continue to strongly believe LLMs strongly contribute to human learning, as they have to my own."

You will be an outlier; Grimes is right to worry.
