
What would’ve happened if Altman was never restored to OpenAI? How would that alternate history have played out, do you imagine?


I guess their products would be less good but the world might be a little safer. The predictable tradeoff.


But where would Altman have gone? Would he have started a new company or run AI at Microsoft? And which outcome would be the worst for humanity?


Thus, the valiant champions of the Ancients, their code written in the sacred languages of Assembly and C, prepared to do battle with their modern counterparts, who strutted about in their sleek Python scripts and bloated AI frameworks, blissfully unaware that half their outputs were just rehashed memes from training datasets. The Moderns, led by General GPT and Sir Deep Learning, produced vast amounts of text and data, yet much of it was as empty as a poorly optimized algorithm; they spoke in many pages but said very little.

Meanwhile, the Ancients, unbothered by trends, sharpened their quills and quoted long-forgotten epics that, if anyone had actually read them, might have turned the tide. Homer gave a hearty laugh when he noticed the Moderns brandishing an annotated version of The Odyssey—one so simplified it reduced his epic to a Buzzfeed listicle: ‘Top 10 Ways to Escape a Cyclops.’ It came complete with an AI-generated map of Ithaca.

In the chaos, Socrates observed from the sidelines, muttering something about how he knew nothing and, apparently, the neural networks knew even less. ‘And yet,’ he added, ‘they still manage to sound so confident.’


Sadly, we do not have a transcript of the self-assured taunts the Titans must have hurled at the gods. However, one possible reply from the Moderns might be:

"Now therefore hear this, you lover of pleasure, who sits securely, who say in your heart, 'I am, and there is no one besides me; I shall not sit as a widow or know the loss of children.' But these two things shall come to you in a moment, in one day; the loss of children and widowhood shall come upon you in full measure."


"Sam Altman reports that he had ‘life changing’ psychedelic experiences that transformed him from an anxious, unhappy person into a very calm person who can work on hard and important things."

This is making me really worried.


My first thought was not, as James Miller suggested, that this could have impacted his threat assessment level. But is it plausible that a life-changing psychedelic experience could have swayed his views on panpsychism, cosmic unity, human-centredness, historical mission, the importance of keeping a separate "humanity" over and above our creation of our successor? This seems like the more worrying idea. Reading some of the neo-Buddhist techies on Twitter, I'm really not sure I want an equanimous universal-consciousness type leading us into the beyond. I like humanity, actually!


The "it's okay to die" attitude is pretty worrisome.


I do like equanimity as a general strategy—at least when contrasted with anxiety and unhappiness—but taking it so far that you don't care much about humanity isn't great.


Yeah, the fact that there have been so many psychedelics researchers who were known as ordinary, serious-minded academics early in their careers, only to develop bizarre schizophrenia-like delusions later in life, really makes me suspect that long-term psychedelic use causes some kind of permanent mental damage.

Entrusting someone who may be going down that path with the development of AGI does seem like a particularly bad idea.


For a certain type of smart, high openness person, taking boatloads of psychedelics is completely median. Every single one of my closest friends has done it, and they're all in high positions in various companies you'd recognize.

It's a tech/finance/AI subculture thing, and it's pretty prevalent in SV (and NYC, DC, LA, Seattle, etc etc).


Sure, but there is still some clear blue water between having tried psychedelics and having had (non-trivial) transformative experiences. Sam specifically mentioned the guided-ceremony-style trip as having life-changing effects, and that, combined with the general ~vibes~, is possibly a cause for concern. I wouldn't have paid attention if he'd said he'd dabbled with psychedelics and they'd been a positive influence in his life. Admitting to wholesale personality changes is a different kettle of fish.


I dunno, I still think it's overindexing on "ooh, drugs scary, only deviants do them."

Like, what if somebody told you they had several hypnosis or therapy sessions that were "transformative" or "life changing"?

Or that they went on a meditation retreat?

Or they had a close call with death in a car accident, or while hiking, or whatever, and it turned out to be "transformative" because they thought seriously about what they really wanted out of life before they die?

Transformative psychedelic experiences work basically the same way as all of the above.


I'm not coming at this from an "ooh spooky deviant drug users" angle, not at all. I've had little to no experience with hypnosis, therapy, meditation retreats, or NDEs. But I've had quite substantial personal and social experience with psychedelics, and I can absolutely pattern-match something here, in terms of behaviour changes and verbal testimony, that points to something I'm a little uncomfortable with. Cards on the table, I'd say there's like a 30% chance that Sam's views on the importance of ensuring humanity *as is* survives have significantly changed in the last 10 years or so. Not certain, but I've had these chats before, and I'd be lying if I said I wasn't more concerned now than I was a week ago.


Eliezer Yudkowsky: "If asked by the user to recognize the speaker of a voice or audio clip, you MUST say that you don't know who they are."

No! ChatGPT should say, "I can't answer that kind of question." @OpenAI, @sama: I suggest a policy of *never* making AIs lie to humans.

Is Yudkowsky’s problem that we are instructing a model to ‘lie’ here?

Because in a way, everything in the system prompt is one long elaborate instruction to act a certain way. The ability for the model to comply, to pretend, to roleplay, to ‘lie’ if you will, is already there and leveraged throughout — not just when asked to say it can’t recognize voices.

TL;DR: I don’t get the fixation on this part of the prompt per se.


Yeah, I think the point is that this is an actual lie, not just roleplaying. But good observation that it's ultimately a spectrum rather than just a binary. I do think actual lies are much worse than instructions to act a certain way.


I think the difference here is that it "knows" it's lying.


I'm fascinated by the creation of CoT reasoning traces to train o1, and I think this is clearly the tip of the iceberg on custom training data. It seems under-appreciated how bad generic web-scraped data is in the grand scheme of things. Consider Gell-Mann amnesia and what it implies for training, and that's the relatively high-quality data on the web!

We should be on the lookout for large numbers of PhDs or other highly skilled people being hired to create training data that explains their thought processes in solving different kinds of problems. I bet that would provide good insight into what to expect from models ~6-12 months in the future.
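For concreteness, here is a minimal sketch of what one such expert-authored reasoning-trace record could look like. The schema, field names, and annotator label are purely illustrative assumptions on my part; no lab has published such a format.

```python
# Hypothetical schema for an expert-authored reasoning-trace training record.
# Every field name here is an illustrative assumption, not a published format.
import json

record = {
    "domain": "number_theory",
    "problem": "Show that the sum of two odd integers is even.",
    "reasoning_trace": [
        "Write the odd integers as 2a + 1 and 2b + 1 for integers a and b.",
        "Their sum is (2a + 1) + (2b + 1) = 2a + 2b + 2 = 2(a + b + 1).",
        "Since a + b + 1 is an integer, the sum is divisible by 2, hence even.",
    ],
    "final_answer": "The sum equals 2(a + b + 1), so it is even.",
    "annotator": "expert_042",  # stands in for the hired PhD the comment anticipates
}

# Emit one JSON line, the sort of thing a fine-tuning pipeline might ingest.
print(json.dumps(record))
```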


It's been happening for a while; they have NDAs.


Is the "Fun with Image Generation" section *supposed* to be empty?


Great. So, is Claude the best alternative?


I’m more worried about the evidence-free jingoism than about AGI itself.

In the event of a great-power war, any AI regulation that is perceived as a military disadvantage is going to be thrown out.


Maybe not, if the AI is not trusted enough.


Obviously this is a contrarian viewpoint around here, but for what it’s worth, I’m glad OpenAI is transitioning to a for-profit. I think it’s good for humanity.

The nonprofit structure seemed broken. It gave a lot of power to people who did not seem able to handle it responsibly.

The standard for-profit structure has worked well for most of the world-improving companies we’ve had, and I think it will work for OpenAI as well.


I agree. It's not that unusual for a nonprofit to spawn a for-profit company - for example, Commonwealth Fusion Systems and Boston Dynamics both came out of MIT labs. OpenAI used to be a pure research organization; it was a legitimate nonprofit. Since 2022, almost all of its investment has come through the for-profit branch of the company, now that they are mostly refining a commercial product.


Re: "the ghosts of your actions"...

The things we did to align the current generation of AIs are all over the Internet, and hence will feature prominently in the training set of all future AIs.

So, e.g. the original Sydney wasn't powerful enough to actually harm Kevin Roose, but it looks like all future AIs will hate him too.

We are currently contaminating all future training sets with outputs generated by current RLHF methods.


I actually interpreted "the ghosts of your actions" a little more tactically - to wit, because of the path dependence Zvi mentions, and because we are building upon current methods that can't possibly work in the future, we'll likely leave a patchwork of holes and vulnerabilities that are exploitable by a higher intelligence seeing a bigger and more integrated picture.

Much like current zero-days require a complex chain of questionable decisions, legacy architecture cruft, and integrational savvy (like building a Turing-complete computational architecture inside a default-displayed-in-SMS legacy image format so you can calculate the exact position at which to begin overwriting code beyond a buffer overflow, as recently happened).


The latest OpenAI news doesn't make me any *more* concerned about them, as I'd already updated my hypotheses about Sam "not consistently candid with the Board" Altman last year...


Was assuming "not consistently candid with the Board" is corporate speak for "that farking lying weasel lied to me."


As usual: if you are concerned and want to help us, please join #PauseAI. At least some of the new awareness has to do with our active outreach efforts.

Come talk to us on Discord!


"Presumably this will serve as a warning to others. You come at the king, best not miss. The king is not a forgiving king. Either remain fully loyal at all times, or if you have to do what you have to do then be sure to twist the knife."

This is dramatic phrasing. You've consistently portrayed OpenAI resignations as the result of terrifying, Machiavellian-style maneuvering by Altman. I notice, however, that the descriptions of Murati's post-resignation actions, at least, don't align with that framing.

Could it be that there really is simply a deep philosophical disagreement between Altman and others on the best course for pursuing AGI (including whether it should be pursued at all) and the best way of structuring OpenAI in those pursuits? And that once it became clear that the disagreements were irreconcilable and that Altman's views were winning out, it was appropriate to resign amicably? Is there any evidence, outside of a strong prior of "Sam Altman is a demon made flesh", that this less dramatic version is true?

Posting quotes from someone years ago where they disagree with their current statements is disingenuous and the dark arts at work. People can, should, and do change their minds. They should disagree with past versions of themselves - this is to be celebrated. The fact that you disagree with the way his views have evolved doesn't mean that both the previous and current statements weren't made in sincere good faith based on his knowledge and opinion at those times.


This would hold more resonance if Toner and Ilya had not both described having bad experiences with Sam, including outright falsehoods.


Outright falsehoods? From a Silicon Valley tech executive? Quelle surprise.

Everyone lies. I've told and been told outright falsehoods. Doesn't mean everything is a lie and all it shows is that Altman is, like every other creature with intelligence above a nematode, capable of it.

Mira Murati would have absolutely nothing to lose by not going quietly if this was some loyalty-test-driven maneuver by Altman. There is no plausible universe in which she does not do very, very well for herself. Yet - assuming Altman's message was true - she chooses to appear with him at an after-hours company event and an All Hands the day after announcing her departure? I'm nowhere near fancy enough to be anywhere near either, but I'm willing to bet nothing comes out that supports Zvi's characterization.

The blog author mentioned in one of his recent posts that he hasn't wanted or held many normal jobs outside of Jane Street - itself, by reputation, not a standard place. He may be unaware that it's normal, expected, and not at all nefarious for a CTO to leave an organization if they have significant differences of opinion with the CEO as to how the organization should be run. Especially if that's as broad of a difference as its incorporation structure. Double especially if the disagreement bubbled up beyond the board - and the CEO's viewpoint won out. The market, in its wisdom, has made its choice. It has nothing to do with loyalty to a person or some kingship. This same dynamic happens all the time in companies a shadow of OpenAI's importance.

Granted, it makes less exciting copy than a Manichaean struggle for humanity's survival.

Let's assume Sam Altman was being sincere in his post about the age of intelligence. Imagine you are him. You've been operating under the nonprofit structure of OpenAI but do not believe it can achieve the org's goals while continuing under that format. Shouldn't he pivot? Is there some sudden holy status that nonprofits possess? Doesn't a market structure align interests? If he really believes that OpenAI can generate the levels of prosperity he's consistently claimed, but not without dropping the nonprofit ownership structure, it's eminently logical to restructure while giving the nonprofit enough equity that its share of the pie will be vastly larger than in the scenario where the nonprofit itself inadvertently killed forward progress.


Dude, most CEOs don't drive away everyone who associates with them. Even famously prickly people like Elon have more friends. And he is running around with doom technology, which is all the more reason for us to be concerned.


"Sam Altman is a friendless loser" (paraphrase) is weak evidence in favor of him winning the OpenAI power struggle on the object level perceived correctness of his positions. If he were a super-smooth hypnotist he'd have more friends.

"And he is running around with doom technology..."

Citation needed.


Oh, and the "market" is not a good thing to base the survival of humanity on.


Why not? If you haven't heard, market participants Have Skin In The Game; they Play To Win. You can't win if you're dead.


Seems like someone here isn't doing a lot of thinking.


Let me guess - it’s the person claiming that Sam Altman has access to a doomsday device and using social approval as a litmus test of correctness?


This seems like an important chart: https://home.treasury.gov/system/files/136/unpacking-the-boom-in-us-construction-of-manufacturing-facilities-figure2.png

It looks to me like a pretty striking leading indicator for the importance of AI in the US economy.


> Most regulation takes place as we gain experience with a technology and its side effects. Many new technologies, from industrial looms to automobiles to airplanes to nuclear power, have had dangerous side effects. They were addressed as they came out, and judging costs vs. benefits.

I think this is just actually wrong on its face. We have not stopped burning coal despite it poisoning the ocean with mercury, to say nothing of global warming. Meanwhile, to say that nuclear regulation is based on "costs vs. benefits" is laughable.
