Last week Sam Altman spent two hours with Lex Fridman (transcript). Given how important it is to understand where Altman’s head is at and learn what he knows, this seemed like another clear case where extensive notes were in order.
Lex Fridman overperformed, asking harder questions and going deeper than I expected, and succeeded in getting Altman to give a lot of what I believe were genuine answers. The task is ‘get the best interviews you can while still getting interviews,’ and this could be close to the production possibilities frontier given Lex’s skill set.
There was not one big thing that stood out given what we have already heard from Altman before. It was more the sum of little things, the opportunity to get a sense of Altman and where his head is at, or at least where he is presenting it as being. To watch him struggle to be as genuine as possible given the circumstances.
One thing that did stand out to me was his characterization of potential loss of human control as a ‘theatrical risk,’ a framing that serves to dismiss the concern. I do think that we are underinvesting in preventing loss-of-control scenarios that arise from competitive dynamics without bad actors, and that are far less theatrical than the scenarios typically focused on, but the overall characterization here seems like a strategically hostile approach. I am sad about that, whereas I was mostly happy with the rest of the interview.
I will follow my usual format for podcasts of a numbered list, each with a timestamp.
(01:13) They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: “And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety.” If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Murati and a search for a new outside CEO, that presumably would have been fine for AI safety. So this then is a confession that he was willing to put AI safety into play to keep power.
(2:45) He notes he expected something crazy at some point and it made them more resilient. Yes from his perspective, but potentially very much the opposite from other perspectives.
(3:00) And he says ‘the road to AGI should be a giant power struggle… not should… I expect that to be the case.’ Seems right.
(4:15) He says he was feeling really down and out of it after the whole thing was over. That certainly is not the picture others were painting, given he had his job back. This suggests that he did not see this outcome as such a win at the time.
(5:15) Altman learned a lot about what you need from a board, and says his company ‘nearly got destroyed.’ Again, his choice. What do you think he now thinks he needs from the board?
(6:15) He says he thinks the board members were well-meaning people ‘on the whole’ and under stress and time pressure people make suboptimal decisions, and everyone needs to operate under pressure.
(7:15) He notes that boards are supposed to be powerful but are answerable to shareholders, whereas non-profit boards answer to no one. Very much so. This seems like a key fact about non-profits and a fundamentally unsolved problem. The buck has to stop somewhere. Sam says he’d like the board to ‘answer to the world as a whole,’ insofar as that is a practical thing. So, WorldCoin elections? I would not recommend it.
(8:00) What was wrong with the old board? Altman says insufficient size and experience. For new board members the criteria are more considered, including different kinds of expertise on a variety of fronts, and different perspectives on how this will impact society and help people. Says track record is a big deal for board members, much more than for other positions, which says a lot about the board’s old state. Lex asks about technical savvy; Altman says you need some, but not in every member. But who has it right now except for Altman? And even he isn’t that technical.
(12:55) Altman notes this fight played out in public, and was exhausting. He says that at first, on Friday, he was ready to move on and didn’t consider the possibility of coming back, and was considering doing a very focused AGI research effort. Which indeed would have been quite bad for AI safety. He says he only flipped when he heard the executive team was going to fight back, and then on Saturday the board called to consider bringing Altman back. He says he did not want to come back and wanted to stabilize OpenAI, but if that is true, weren’t there very clear alternative paths he could have taken? He could have told everyone to embrace Emmett Shear’s leadership while they worked things out? He could have come back right away while they worked to find a new board? I don’t understand the story Altman is trying to tell here.
(17:15) Very good gracious words about Mira Murati. Then Altman makes it clear to those who listen that he wants to move on from that weekend. He later (21:30) says he is happy with the new board.
(18:30) Lex asks about Ilya Sutskever. Ilya is not being held hostage, Altman loves Ilya, hopes they work together indefinitely. What did Ilya see? Not AGI. Altman notes he loves that Ilya takes safety concerns very seriously and that they talk a lot about how to get it right, that Ilya is a credit to humanity in how much he wants to get this right. Altman is clearly choosing his words very carefully. The clear implication here is that ‘what Ilya saw’ was something that made Ilya Sutskever concerned from a safety perspective.
(21:10) Why is Ilya still so quiet, Lex asks? Altman doesn’t want to speak for Ilya. He does mention they were at a dinner party together recently.
(22:45) Lex asks if the incident made Altman less trusting. Sam instantly says yes, that he thinks he is an extremely trusting person who does not worry about edge cases, and he dislikes that this has made him think more about bad scenarios. So perhaps this could actually be really good? I do not want someone building AGI who does not worry about edge cases, assumes things will work out and trusts fate. I want someone paranoid about things going horribly wrong, who does not trust a damn thing without a good reason. Legitimately sorry, Altman, gotta take one for the team on this one.
(24:40) Lex asks about Elon Musk suing OpenAI. Altman says he is not sure what it is really about. That seems like the right answer here. I am sure he strongly suspects what it is about, and Elon has said what it is about, but you don’t want to presume in public, you can never be sure, given that it definitely isn’t about the claims having legal merit. Says OpenAI started purely as a research lab, then adjusted the structure as circumstances changed, so things got patched and kind of weirdly structured.
(28:30) Lex asks what the word Open in OpenAI meant to him at the time. Altman says he’d pick a different name now, and his job is largely to put the tech in the hands of people for free, notes free ChatGPT has no advertising and GPT-4 is cheap. Says ‘we should open source some stuff and not other stuff… nuance is the right answer.’ Which is wise. Both agree that the lawsuit is legally unserious.
(32:00) Lex mentions Meta opening up Llama, asks about pros and cons of open sourcing. Altman says there is a place for open source models, especially small ones; a mix is right.
(33:00) Altman outright says, if he knew what he knew now, he would have founded OpenAI as a for-profit company.
(34:45) Transition to Sora. Altman is on Team World Model and thinks the approach will go far. Says ‘more than three’ people work on labeling the data, but a lot of it is self-supervised. Notes efficiency isn’t where it needs to be yet.
(40:00) Asked whether using copyrighted data for AI is fair use, Altman says the question behind the question is whether the artists who create the data should be paid. And the answer is yes, the model must change, people have to get paid, but it is unclear how. He says he would want to get paid if someone created art in his style, and would want to be able to opt out of that if he wanted.
(41:00) Sam excitedly says he is not worried people won’t do and get rewarded for cool shit, that’s hardwired, that’s not going away. I agree that we won’t let lack of hard incentives stop us too much, but we do still need the ability to do it.
(42:10) Sam says don’t ask what ‘jobs’ AI can do, ask what individual tasks it can do, making people more efficient, letting people work better and on different kinds of problems. That seems wise in the near term.
(43:30) Both note that humans care deeply about humans, Altman says it seems very deeply wired that this is what we ultimately care about. Play chess, run races, all that. But, character.ai. So we will see if this proxy can get hijacked.
(45:00) Asked about what makes GPT-4 amazing Altman says it kind of sucks, it’s not where we need to get to. Expects 4→5 to be similar to 3→4. Says he’s using GPT-4 more recently as a brainstorming partner.
(50:00) Altman expects effectively unlimited future context length (his word is billions); you’ll feed in everything. You always find ways to use the exponential.
(53:50) Altman expects great improvement in hallucinations, but does not expect it to be solved this year. How to interpret what that implies about releases?
(56:00) The point of memory is for the model to know you and get more useful over time. The user should be able to edit what the AI remembers.
(1:00:20) Felt the love, felt the love. Drink!
(1:00:55) Optimism about getting AIs to think slower and deeper about harder problems, and to allocate more compute to them.
(1:02:40) Q*? ‘Not ready to talk about it.’ Also says there is no secret nuclear facility, but it would be nice. Altman says OpenAI is ‘not a good company at keeping secrets. It would be nice.’ I would think keeping secrets is going to be highly necessary. If you are playing for these kinds of table stakes you need to be able to keep secrets. Also, we still do not have many of the details of the events of November, so I suppose they can keep at least some secrets?
(1:04:00) Lex asks if there are going to be more leaps similar to ChatGPT. Sam says that’s a good question and pauses to think. There’s plenty of deliberate strategy to Altman’s answers in general, but also a lot of very clear genuine exploration and curiosity, and that’s pretty great. Altman focuses on the continuous deployment strategy, which he sees as a success because it makes others pay attention. Which is a double-edged sword. Altman says these leaps suggest there should be more iterative releases, not fewer. Which seems right, given the state of play? At this point might as well ship incrementally?
(1:06:10) When is GPT-5 coming out? Altman says ‘I don’t know, that’s the honest answer.’ I do think that I believe him more because of the second half of that. But what does it mean to not know, beyond the answer not being tomorrow? How much not knowing is required to say you do not know? I don’t know that, either.
(1:06:30) Altman says they will release an amazing new model this year, but he doesn’t know what they’ll call it. Given his statement about the size of the leap from 4→5, presumably this is not a ‘4.5 vs. 5’ question? It’s something else? He says in the coming months they will release ‘many different important things’ before GPT-5.
(1:09:40) Seven trillion dollars! Altman says he never Tweeted that, calls it misinformation. He believes compute will likely be the currency of the future, the most precious commodity, and we should be investing heavily in having more. And it’s a weird market because the demand curve can go out infinitely far at sufficiently low price points. Still believes in fusion, and fission.
(1:12:45) Worry about a backlash to AI like the one against nuclear fission; says some things will go ‘theatrically wrong’ with AI, which seems right, and that he will be at non-zero risk of being shot. Expects it to get caught in left vs. right wars too. Expects far more good than bad from AI, doesn’t talk about what time frame or capability level.
(1:14:45) Competition means better products faster. The downside is a potential intensification of the arms race. He says he feels the pressure. Emphasizes the importance of a slow takeoff, although he wants short timelines to go with it. Says Elon Musk cares about safety and thus he assumes Elon won’t race unsafely, which strikes me as a sentence not selected for its predictive accuracy. Also not something I would count on. Consider the track record.
(1:18:10) Better search engine? Boring. We want a whole new approach.
(1:20:00) Altman hates ads. Yes, internet needed ads. But ads are terrible. Yes. Altman not ruling ads out, but has what he calls a bias against them. Good.
(1:23:20) Gemini Incident time. They work hard to get this right, as you’d assume. It would be good to write down exactly what outputs you want. Not principles but specific rules: if I ask X, you output Y, and you need to say it out loud. Bravo. Of course writing that down makes you even more blameworthy.
(1:25:50) Is San Francisco an ideological bubble impacting OpenAI? Altman says they have battles over AGI but are blessed not to have big culture war problems, at least not anything like what others experience.
(1:26:45) How to do safety, asks Lex. Altman says that’s hard, and it will soon be mostly what the company thinks about. No specifics, but Lex wasn’t asking for them. Altman notes dangers of cybersecurity and model theft, alignment work, impact on society, ‘getting to the good outcome is going to take the whole effort.’ Altman says state actors are indeed trying to hack OpenAI, as you would expect.
(1:28:45) What is exciting about GPT-5? Altman again says: That it will be smarter. Which is the right answer. That is what matters most.
(1:31:30) Altman says it would be depressing if we had AGI and the only way to do things in the physical world would be to get a human to go do it, so he hopes we get physical robots. They will return to robots at some point. What will the humans be doing, then?
(1:32:30) When AGI? Altman notes AGI definition is disputed, prefers to discuss capability X, says AGI is a mile marker or a beginning. Expects ‘quite capable systems we look at and say wow that is really remarkable’ by end of decade and possibly sooner. Well, yes, of course, that seems like a given?
(1:34:00) AGI implies transformation to Altman, although not singularity-level, and notes the world and world economy don’t seem that different yet. What would be a huge deal? Advancing the rate of scientific progress. Boink. If he got an AGI he’d ask science questions first.
(1:38:00) What about power? Should we trust Altman? Altman says it is important no one person have total control over OpenAI or AGI. You want a robust governance system. Defends his actions and the outcome of the attempted firing but admits the incident makes his case harder to make. Calls for governments to put rules in place. Both agree balance of power is good. The buck has to stop somewhere, and we need to ensure that this somewhere stays human.
(1:41:30) Speaking of which, what about loss of control concerns? Altman says it is ‘not his top worry’ but he might worry about it more later and we have to work on it super hard and we have to get it right. Calls it a ‘theatrical risk’ and says safety researchers got ‘hung up’ on this problem, although it is good that they focus on it, but we risk not focusing enough on other risks. This is quite the rhetorical set of moves to be pulling here. Feels strategically hostile.
(1:43:00) Lex asks about Altman refusing to use capital letters on Twitter. Altman asks, in a way I don’t doubt is genuine, why anyone cares, why do people keep asking this. One response I would give is that every time he does it, there’s a 50% chance I want to quote him, and then I have to go and fix it, and it’s annoying. Same to everyone else who does this - you are offloading the cognitive processing work, and then the actual work of capitalization, onto other people, and you should feel bad about this. Lex thinks it is about Altman not ‘following the rules’ making people uncomfortable. Altman thinks capitalization is dumb in general, I strongly think he is wrong, it is very helpful for comprehension. I don’t do it in Google Search (which he asks about) but I totally do it when taking private notes I will read later.
(1:46:45) Sora → Simulation++? Altman says yes, somewhat, but not centrally.
(1:49:45) AGI will be a psychedelic gateway to a new reality. Drink!
(1:51:00) Lex ends by asking about… aliens? Altman says he wants to believe, and is puzzled by the Fermi paradox.
(1:52:45) Altman wonders, will AGI be more like one brain or the product of a bunch of components and scaffolding that comes together, similar to human culture?
Was that the most valuable use of two hours talking with Altman? No, of course not. Two hours with Dwarkesh Patel would have been far more juicy. But also Altman is friends with Lex and willing to sit down with him, and provide what is still a lot of good content, and will likely do so again. It is an iterated game. So I am very happy for what we did get. You can learn a lot just by watching.