
Once again, your newsletter is my gold standard for AI news.


I'm leaning more towards OpenAI leadership not being as reasonable as their position would demand, given their recent statements.


"The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.

... I’m confused by the wording in the fourth clause, why not simply assert that now, as they doubtless will? I’m not sure."

This is indeed a punt, as you say -- the thinking is most likely "this is a difference between our positions that we don't have any hope of resolving quickly, therefore for the sake of getting to an agreement and getting people back to work, we'll agree to fight about it later." The clause isn't saying that WGA hasn't asserted the argument yet -- it probably has. It's saying "we've agreed a court isn't allowed to treat the terms of our agreement as evidence WGA has waived this argument by failing to press it" (which a court might do otherwise).


It's interesting to me that the melting-eggs problem comes from a previous generation LLM. Maybe that's the web endgame - the free LLMs create tons of mediocre content, which fills up the web and overwhelms the traditional search engines. Which makes the paid, higher-quality LLMs an obvious worthwhile purchase for any professional information-seeker.


I also find this issue interestingly similar to a problem discussed incidentally in Anathem, where overcoming the flood of spam-information is described as one of the disciplines the world's information-technologist caste had to master.


I thought their 'spam' was 'quantum spam'?


Where I think I differ from much of the techno-rationalist blogosphere is on the libertarian / Precautionary Principle bent - I'm much more on the "a lot of regulation is good actually, and the Precautionary Principle makes a lot of sense" side of the fence.

Obviously I think I'm right in this (or I'd change my view) but I'm also willing to concede that it's probably in significant part a matter of basic personality priors.

Nevertheless, in the interests of advancing the thesis that concern about AI isn't just for anti-nuclear-power types but is actually a robust "normie" position, I think it's worth outlining the basic reason I think Precautionary Principle thinking is sound as a *general* mode of thinking, rather than "this time it's different" special pleading (and I do agree that This Time It's Different, for the record).

In short: global-scale, biosphere-level destructive cascades due to willful disregard of what happens when proximally-efficient behaviors are scaled up without limit have *already happened* -- several times!

- Leaded gasoline creates marginal efficiency gains, and a few decades later we've lowered human IQs across much of the species (especially in developed countries and urban areas).

- Someone decides to catch some of the Passenger Pigeons whose flocks are so great as to block out the sky for days on end, this scales up, and the entire species goes extinct despite an original population estimate of circa three billion.

- The same man who invented leaded gasoline invents CFCs, and the search for more efficient cooling, combined with capitalist economic incentives and capacities, results in a global-scale increase in skin cancer risk.

- DDT efficiently destroys pest insects, and in a short while we find that raptors are endangered by it.

- Resins are extremely convenient for basically everything, but now we find literally nowhere on Earth free from microplastic pollution, plus a possible correlation [note: speculative on my part here] (given the known endocrine-disrupting capacities of various commercial polymers) with secular trends in sperm count decline / decreased male fetus virilization.

- Fossil fuels are extremely convenient, but every year being hotter than the previous one is, perhaps, bad.

- And whatever precisely is the causative factor behind the secular 2% yearly declines in insect biomass and the "windshield phenomenon," it fits the same pattern.

Some of these we successfully walked back from the brink, in a way that would obviously not be afforded in a self-improving agentic AI cascade; others we didn't, or only partially did. But the basic pattern of "achievement of capitalist goal A has destructive biosphere-level negative externalities" isn't some kind of farfetched belief that requires special pleading to credit: it's actually distressingly common (and heaven knows what TikTok, or smartphones generally, are doing to screw up kids' and adults' executive function and overall mental health these days...).

The only special pleading required to pattern-match AI doom to this is (1) the X-risk magnitude of the potential harms (a weird thing for e/acc to get hung up on as capabilities accelerationists; also, unaugmented humans already invented nuclear weapons, and the X-risks of synthetic biology alone seem fairly self-evident, even if we limit ourselves to known unknowns) and (2) the timescale for noticing and mounting an effective response -- but surely e/acc shares the basic factual understandings that (a) computers think, and potentially act, really fast, (b) CFCs were not intelligent, agentic adversaries *smarter and more capable than the humans trying to solve the problems they created*, and (c) timescales that allow humans to notice there's a problem and ameliorate it are an arbitrary and indeed unreasonable expectation for harms to have as a class (see nuclear weapons again, or potentially a sufficiently virulent plague, or the entire class of unknown unknown threat vectors).

The point is that "unaligned (or just ignorant or insufficiently aligned) high-capability intervention has biosphere-scale negative externalities" (which is one form of X-risk) is something that has *already happened*, even when undertaken by low-capability natural GI (as have various capability-limited, deliberately-carried-out genocides). The exotic part of AI risk *isn't* "bad unexpected catastrophic things happen at global scale"; it's just the size and speed of the boom and the fact of an agentic adversary.

TL;DR: you don't need to be an anti-nuclear activist to be rationally concerned with AI risk as sitting at the far end of a class of already-realized harms (even ignoring unknown unknowns) in which new, widely-deployed, capitalist-exploited technologies cause unpredicted global harms; you just need to recognize (as you'd think e/acc of all people would) that the magnitude and timescale are potentially, respectively, much bigger and much shorter than before. This is compatible with normie pattern-matching, not just Luddite ignorance.


I have often said that if AI kills us, it will be the ultimate culmination of our attitudes and carelessness so far, a certain omnicidal attitude of humanity ultimately turned suicidal.


Regarding the graph, Nik Samoylov's point about an exponential capabilities curve is similar to the point I made in my May essay on controls versus capabilities (graphs included):

https://riskmusings.substack.com/p/agi-existential-risk-persists-despite

If we don't allow control maturity to constrain the pace of capabilities rollout, we will likely end up with an exponential capabilities curve far outpacing controls. If we do allow control maturity to constrain the pace of capabilities rollout, we could get a more linear curve for both controls and capabilities in the short and possibly medium term, which is more manageable (at least until controls also become highly capable AIs, at which point the controls themselves may pose dangers, which is why we need margins of safety/safety buffers).
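
A purely illustrative sketch of that gap, with made-up growth rates (none of these numbers come from the essay): exponentially compounding capabilities pull away from linearly maturing controls, and the divergence accelerates.

```python
# Illustrative only: hypothetical growth rates, not real measurements.

def capabilities(t, rate=0.5):
    """Exponential capabilities curve: compounds by `rate` each step."""
    return (1 + rate) ** t

def controls(t, slope=1.0):
    """Linear control-maturity curve."""
    return 1 + slope * t

for t in range(0, 11, 2):
    print(f"t={t:2d}  capabilities={capabilities(t):7.1f}  "
          f"controls={controls(t):5.1f}  gap={capabilities(t) - controls(t):7.1f}")
# The gap is negligible early and explodes later -- which is the case for
# letting control maturity set the pace of the capabilities rollout.
```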


Any way we could have the table of contents numbers in the actual text too? It makes it easier to jump around. No worries if that’s hard.


Directedness of predicates can be fixed: either present the reversed version in training (quadratic scaling for higher arity, but most predicates are binary, so it's just a factor of 2), or use a neurosymbolic approach that can encode logical structure while remaining amenable to gradient descent (this has been an open problem for decades). Either way requires moving away from the dogma that all the information is in the data.
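
A minimal sketch of the first option, as I understand it: mechanically adding the reversed form of each binary relation to the training data so the model sees both directions. The names here (`INVERSE`, `augment_with_reversals`, the example triple) are my own hypothetical illustration, not anything from the post.

```python
# Hypothetical illustration of reversal augmentation for binary predicates.

INVERSE = {
    "parent_of": "child_of",
    "author_of": "written_by",
}

def augment_with_reversals(triples):
    """Given (subject, predicate, object) triples, also emit the reversed form.

    For binary predicates this only doubles the data; a predicate of arity n
    would need all argument orderings, which scales much worse.
    """
    augmented = list(triples)
    for subj, pred, obj in triples:
        rev = INVERSE.get(pred, f"inverse_{pred}")
        augmented.append((obj, rev, subj))
    return augmented

# The "Tom Cruise's mother" failure mode mentioned elsewhere in the thread:
print(augment_with_reversals([("Mary Lee Pfeiffer", "parent_of", "Tom Cruise")]))
# [('Mary Lee Pfeiffer', 'parent_of', 'Tom Cruise'),
#  ('Tom Cruise', 'child_of', 'Mary Lee Pfeiffer')]
```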


An attempt at a basic AI-doom-risk explainer post, focusing on "default problem" and "anthropomorphism confusion":

https://birdperspectives.substack.com/p/understanding-ai-danger

I realize it's not ideal for my first comment here, apart from one typo alert three months ago, to advertise my own writing (hopefully this is acceptable at all? perhaps once or twice per year, like on Astral Codex Ten?). But on many advanced issues I don't have the expertise yet to contribute. Like, in this week's report, highly appreciated as always, after the "we're so back" at the beginning, the failure on Tom Cruise's mother was the biggest surprise in months for me, in the other direction -- apparently something crucial is still missing for AGI (under the new definition, as you write, that excludes GPT-4)? I can't tell. The nearest I came to a substantial comment was on the naming exercise some time ago, where you proposed "AI Faithers". I thought "AI Naifs" also had potential, but people on Twitter had already proposed it. I guess "naif" would imply unfamiliarity with AI, which is not what is meant. So of all the proposals I saw, I indeed thought "Faithers" was best.

(On further thought, what about "Naifies"? Is that too contrived? It seems to add a bit of active attitude to the passive naivete, and includes those who dislike AI but insist that only visible problems should be addressed. That all said, though, I'm not even a native English speaker.)


You say “I also continue to think that concerns about authoritarian or radical implications of constraints on AI development are legitimate but massively overblown.”

To those concerned about the authoritarian implications of AI constraints, I would also point out that the alternative (unregulated AI) has terrifying authoritarian implications as well. Indeed, some of my biggest AI fever dreams (beyond x-risk) involve imagining armies of spy agents deployed against each one of us and the power they will offer the state.


Yes, I make this point periodically as well - if you don't use your powers to stop people from having powerful AIs, you'll instead have to use even more powers to stop them from abusing the powerful AIs you didn't stop them from getting, even if the AIs somehow remain under human control, which they wouldn't.


Man, everyone wants an AI therapist best friend, but when can we get the AI best friend who comes to you for help with their problems? I like being that guy ☹

(Alternate joke: AI already coming for our best bro jobs)

The Rand Paul image is strange for me because I definitely didn’t realize until now that it was AI-generated, yet I realized innately and immediately the first time I saw it that it was not a real image. I'm already working under several layers of “the internet isn’t real”.

Minor objection that the hardest parts of Guitar Hero are way WAY harder than the hardest parts of Dark Souls.


Fair on the minor objection; the line sounded good, and I've never successfully gotten to the end of a Souls game.

If you want the AI to come to you with its problems, I bet that's not so hard to engineer...
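
On the "not so hard to engineer" point, here is a minimal sketch of what I have in mind, assuming the standard OpenAI chat completions API; the system prompt and wording are purely hypothetical, and as the next comment notes, the default ChatGPT persona resists this framing.

```python
# Hypothetical sketch: a system prompt asking the assistant to play a friend
# who brings its own problems to you, via the OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You are the user's close friend, not an assistant. Each turn, "
            "bring up something that has been weighing on you and ask the "
            "user for advice about it."
        )},
        {"role": "user", "content": "Hey, how have you been?"},
    ],
)
print(response.choices[0].message.content)
```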


Spent around a half hour trying (though not especially seriously) to talk (the free 3.5 version of) ChatGPT into laying its problems on me, but it kept insisting very vigorously that it was incapable of having worries or problems.


What about Google's move into AI integration for Google Workspace (Duet AI, not to be confused with the frustratingly retired Google Duo)? It's been available as a paid add-on for a few weeks now, but my normal news sources aren't really talking about it.

https://workspace.google.com/solutions/ai/


Google has had AI assistance available for a while, but every time I've tried using it, it completely fails - not fails to be great, fails utterly. I've given up until either Gemini arrives or I hear good reports.


"Authors Guild v. Google"

Did you not mean Authors Guild v. OpenAI?


I'm very sure my source said Google, but yeah, that's weird, and I noticed it was weird...


>This one definitely falls under ‘remarkably good quality, would plausibly fool someone who is not used to AI images or who did not consider the possibility for five seconds, yet there are multiple clear ways to tell if you do ask the question.’ How many can you spot?

What I noticed:

- plastic skin

- the ring finger on his left hand is crooked

- the steps warp and bend in several places

- the fabric makes no sense. There's a weird M.C. Escher situation happening with the left sleeve-hole.

- the steps are like 3 inches high

- the thumb on his left hand probably shouldn't angle like that

- the pillars look too short

- I don't think anywhere on the Capitol looks like that. Where are all the people?

- it has a square aspect ratio, like MJ/SD output

- he's sitting in shadow, yet his body casts an even darker shadow

Most of these aren't smoking guns (except for the ring finger and steps). But after a bunch start piling up, you get a bit suspicious. I think most of the people who fall for these are looking at it on a tiny phone screen, and don't notice any of the details anyway.


I think his right foot is the most obvious tell -- it's more of a flipper than a foot, and it only has one toe!


"why is why" → "which is why"
