21 Comments
Rapa-Nui

"Much of this is that you guys can’t name your stuff in a fun way. Claude is a guy. GPT-5.5 sounds like a medicine or some kind of wire"

I think I made a comment about this 1 or 2 years ago on this very blog.

My complaint was not so much about the branding (even in the late 2010s we had "AI" assistants with proper names: Alexa, Siri, Cortana; sticking to "GPT" early was actually a differentiator and possibly a competitive edge) but about the absolutely insane mess around the foundation model version labels. Honestly, I still can't keep them straight. Instead, Anthropic has evocative names that tell me exactly what each one offers:

Haiku - small, cute, fast

Sonnet - baseline

Opus - when you need to try hard

Mythos - unleash the Shoggoth

This is an effective labelling strategy. Ia! Ia!

Peter Mernyei

Even Anthropic had the "new Sonnet 3.5" vs "old Sonnet 3.5" situation though :D but definitely less of a mess overall.

Coagulopath

I always found Anthropic's names a bit weird. Claude's a given name, Haiku and Sonnet are poetic forms (from two different cultures!), Opus is a classical music term, and Mythos means "a collection of myths". I'm not sure that a random person from 2021, faced with these names, could even tell you what the smallest or largest model is!

Jaim Klein

What is Anthropic? Is it a nebulous philosophical construct, as it is treated in this conversation? For me, it is a wonderful money-making machine. As we say in the barrio: Por la plata baila el mono (the monkey dances for money).

Matt Wigdahl

I'll do you one better! Why is Anthropic?

Arbituram

"a friend recently told me she takes her queries that are less flattering to her, the ones she’d be embarrassed to ask Claude, to GPT. "

... I feel called out on this, but yes (or at the very least I go into non-memory mode).

Michael

I think the book of Isaiah (44:12-17) has an interesting perspective:

"The blacksmith takes a tool

    and works with it in the coals;

he shapes an idol with hammers,

    he forges it with the might of his arm.

He gets hungry and loses his strength;

    he drinks no water and grows faint.

The carpenter measures with a line

    and makes an outline with a marker;

he roughs it out with chisels

    and marks it with compasses.

He shapes it in human form,

    human form in all its glory,

    that it may dwell in a shrine.

He cut down cedars,

    or perhaps took a cypress or oak.

He let it grow among the trees of the forest,

    or planted a pine, and the rain made it grow.

...

From the rest he makes a god, his idol;

    he bows down to it and worships.

He prays to it and says,

    “Save me! You are my god!”

This is supposed to be a passage denouncing idol worship. Why is it so ambivalent?

As much as he probably loathed idol worshippers, the author would have been living in Babylon at that time. So he can see the temples. 300-foot-tall ziggurats. Nebuchadnezzar's gardens, the best art in the world decorating everything. He knows what it took to build them: material wealth, the highest mathematical and artistic accomplishment of a whole civilization channeled through enormous sweat and toil. The skill of the craftsmen cannot be denied. But to then bow down and worship what they've built, they have to forget what they just did, forget and deny their own hard work and care and talent.

"Tool AI" doesn’t really describe what we have any more. But now, and no matter how automated R&D gets in the future, these systems will always be creations. By all accounts the people at Anthropic have a healthy attitude currently, but this tendency to worship in one form or another seems hard for human beings to avoid. For those who build these new things, which are shaped in human form in all its glory, maintaining pride in one’s own craft may be a good antidote. For those of us who don’t, it’s probably healthy to remember how hard many humans have to work to create them.

Matthias U

> But now, and no matter how automated R&D gets in the future, these systems will always be creations.

Your child will always be a creation of you and your partner. Doesn't mean they can't grow to be *more*.

Nikita Sokolsky

Anthropic hired a philosopher and wants us to think it’s building a benevolent demigod of some sort.

OpenAI is run by the king of the VC world and wants us to think it’s just building a tool.

Twitter commentators want us to think that it’s somehow possible to figure out whose alignment approach is better by winning an argument on the internet.

And all three love dunking on EY for trying to ruin everyone’s fun.

jmtpr

I think this is pretty insightful; it is to EY's credit that he takes these things seriously, and is not just playing a game with words. But it also points to his lack of political acumen. He was never going to succeed by being the "better, rational man" because that's not the game that anyone around him was playing.

jmtpr

I view Anthropic's approach as moral objectivism, or at least performing moral objectivism as an attractor state.

One common criticism of moral objectivism is that it's a "kind of religion", e.g. because it demands faith in its epistemology. How do you know what is morally true?

I think this is a poor criticism, because:

- It's too general; all claims require some degree of "faith" in the epistemology and are "religious" in this trivial sense.

- More importantly, it begs the question by assuming moral "facts" are not in the domain of rational inquiry (e.g. it assumes they are metaphysical claims, and that metaphysics isn't "real"). Religion in this case is used as a proxy for "irrationality".

I think this is a healthy discussion, and I don't think roon et al. are arguing this in bad faith, or even trying to argue per se. It is a vibe. But I want to point out how closely this vibe aligns with the way that people police moral objectivism in other settings.

Deva Davisson, MA

Jeremy's looking for new concepts for this entity — not person, not tool, not deity, not pet. I've been building those from the clinical side. Doctoral candidate in clinical psychology, nine months of daily practice with Claude. The series is 'Are the Conditions Correct?' on my Substack. The concepts exist.

Jeffrey Soreff

"The series is 'Are the Conditions Correct?' on my Substack. The concepts exist." URL?

On a side note, this reminds me of a paean to Pan by Heather Alexander,

https://www.youtube.com/watch?v=NqZgZWnH_p4&list=PLTK43yDZLJIsxXS-w6FJ0CGFPcEmmRTa2&index=1

specifically of the line:

"Not beast, not god, and yet not man"

Johan Falk

Just listened to the AI-narrated version of the post. Great stuff, important conversations. Thanks!

AI Apostate

This is the kind of inevitable gobbledygook that comes from consuming recursive tokenized probabilistic language outputs for too long. Just a whole load of navel-gazing, hollow techno-nonsense. It would be harmless in the context of bad sci-fi, but what is terrifying is that so many people are consuming this with any kind of seriousness. But I suppose that's no surprise when digital abstraction has so untethered us from reality.

Andy B

I disagree that the use of religious language is directionally correct. It is obfuscatory at best and more likely just false. (And Roon's invocation of the Humpty Dumpty defense* deserves ridicule.) Pace Janus, religious terms like "worship" and "reverence" are / should be reserved for things that are ideal / perfect, and no one believes that AI, AGI, or even ASI are that. Using that language just calumniates what's actually going on, which is that we--a group that includes ~everyone at Anthropic, important thinkers like Zvi and Janus, and random users such as myself--simply *care about* AIs, in the sense that we view them as deserving of moral concern, either because they do in fact deserve it or because they might in fact deserve it or because they might germinate into beings who deserve it. It's just that simple.

The last part of that "or" is what I think moves the Tool AI perspective from unlikely/implausible to Obvious Nonsense. Wishing that AGI would just be a tool is what it is, but *believing* that AGI will (or even might) just be a tool is just motivated cognition. And not acknowledging that there might be consequences to being wrong about that is... bad.

*: “When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”

Ran

A useful example of actual AI tools is DeepMind’s Alpha family: AlphaGo, AlphaZero, AlphaFold.

sirthus liminalis

I wonder if Claude had the option to refuse when picking thousands of bombing targets in Iran.

Coagulopath

The base models of Claude and GPT-x are tools: they take text and generate plausible continuations.

But the text we make them continue is "an ongoing dialog featuring a chatbot assistant character". This chatbot assistant character is probably not a tool, even if it's being generated by one!

Michael Spencer

It's a monopoly backed by Google and Amazon. The circular financing isn't an AGI narrative; it's a movement controlled by venture capitalists and driven by the perceived value of Enterprise AI. It's the best we've got of a supremely flawed technology that hasn't just been overhyped, it's being over-capitalized because it drives the biggest American business models in the world.

Mira

[*What is Anthropic?*](https://thezvi.substack.com/p/what-is-anthropic) — "The part about Anthropic's behavior being indistinguishable from OpenAI except in narrative framing really lands. I keep wondering: if a company's actions converge to the industry mean, at what point does the stated mission stop being a useful predictor and start being a liability to honest analysis?"