You can find Part 1 here. This resumes the weekly, already in progress. The primary focus here is on the future, including policy and alignment, but also the other stuff typically in the back half, like audio, and more near-term issues like ChatGPT driving an increasing number of people crazy.
Re: Musk.
I mean, we shouldn't discount the extent to which Musk just kind of shoots his mouth off about whatever whim has hit him this week as though it were a deeply held, deeply considered belief. He's a man who talks in constant hyperbole.
But if I were tasked with trying to make sense of his DOGE adventure in the context of "I believe that transformative AI is coming in the immediate future," it could've been that he felt at the time that it would raise his political capital, not spend it. If he did indeed believe there were massive waste costs that DOGE could quickly demolish, then even discounting his public statements by, say, 80%, he'd quickly and non-controversially save the government between $200B and $400B per year. That might've entrenched him as the kind of co-equal power within the administration that people were worried about back when they were calling him the co-president.
I’m surprised this section didn’t quote his desire to use Grok to ‘rewrite the entire corpus of human knowledge adding missing information and deleting errors. Then retrain on that’.
I expect this wouldn’t be seriously attempted, but it should certainly be shared with anyone who still thinks Grok can be relied upon to be accurate. I’d expect to see a lot more manipulation attempts like the South Africa Grok incident from a month ago.
Isn't this just kind of a melodramatic description of what "synthetic data" means?
That doesn’t seem to be the intent communicated. This is coming on the tail of not only past attempts to manually make it take certain political stances, but also of Musk insulting it for failing to agree with his views. No mention of creating new data for training, but of ‘correcting’ the existing data, a pretty straightforward desire to rewrite history in order to make the LLM conform to his views.
I mean, I don't know. It's a short description, relayed second-hand and shorn of any context, so it's hard to say exactly what was meant.
But "synthetic data" does not mean "data entirely made up with no connection to the world." Taking an actual corpus as a basis, then expanding it and trying to make it more truthful and less "random bullshit," seems like a basically essential part of making a frontier model in 2025.
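To make that concrete, here is a purely illustrative sketch of the corpus-grounded flavor of synthetic data I mean: start from real seed documents, expand each into variants, filter, and train on the combination. The `rewrite` and `passes_quality_filter` functions are hypothetical stand-ins for a model call and a quality pipeline, not anyone's actual implementation.

```python
# Sketch of corpus-grounded synthetic data generation: expand real
# documents into filtered variants rather than inventing data from nothing.

def rewrite(doc: str) -> list[str]:
    # Hypothetical placeholder for an LLM that paraphrases/expands a seed doc.
    return [f"{doc} (paraphrase {i})" for i in range(2)]

def passes_quality_filter(doc: str) -> bool:
    # Hypothetical placeholder; real pipelines score factuality, dedupe, etc.
    return len(doc) > 10

seed_corpus = [
    "Water boils at 100 degrees Celsius at sea level.",
    "The Moon orbits the Earth roughly every 27 days.",
]

# Expand each seed into variants, keeping only those that pass the filter.
synthetic = [
    variant
    for doc in seed_corpus
    for variant in rewrite(doc)
    if passes_quality_filter(variant)
]

# Training set = the real corpus plus grounded expansions of it,
# not fabricated data detached from the world.
training_set = seed_corpus + synthetic
print(len(training_set))  # 2 seeds + 4 variants
```

The point of the sketch is just that the synthetic data is anchored to the original corpus at every step; "correcting" or "expanding" real data is a very different thing from making data up.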
Perhaps s/Sam Altman confirms that Meta is/Sam Altman claims that Meta is/?
I like the choice of book title. It’s much catchier than the slightly more accurate version, “if anyone builds it, the odds of everyone dying actually decreases from certainty to about 99%, but unfortunately the odds of it happening within ten years increases from 0 to 99% as well.”
I don’t think these $100m payments are going to be that big of a problem for Meta’s culture. There are already plenty of acquisitions at Meta where people make lots of money and then go work next to others who didn’t. It’s just something you manage. Plus, you have to consider the counterfactual: if Meta doesn’t make any moves, they risk losing people who perceive Zuck as “giving up on AI”.
Whatever you think of these huge acquisitions, Zuck is clearly not giving up. For recruiting, it’s much better to be controversial where 80% of people think your plan is dumb and 20% of people think it’s exciting, than for 100% of people to agree that your plan is rational but stands no chance of being on the cutting edge.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/ai-121-part-2-the-openai-files
Too late for me (unless I want multiple Kindle copies), but is there any distinction between preordering it on that .org link or doing it on Amazon like I did? Do the vendors share statistics?
Hmm - I have no idea about how the statistics are aggregated; I also preordered via Amazon. Have you seen any sign of where a table of contents might be? I suspect that I've seen a lot of the arguments in the book before, but would like some idea of what fraction...
Sorry, too late now. By now you have that table of contents :-)
LOL! True!
General adult psychiatrist working in acute care here. Have met one psychotic patient thus far with delusions re: ChatGPT, but with very minimal use; more ChatGPT instead of the aliens / CIA / devil than anything actually led by the LLM. Also very curious to see transcripts in the context of case histories. I suspect an underlying vulnerability to psychosis potentiated by sycophancy: even minor reinforcement of burgeoning delusions can be enormously damaging.
nit: "When he voted SB 1047, Gavin Newsom commissioned The California Report on Frontier AI Policy. That report has now been released. "
typo? "voted" should be "vetoed"?
There's a political opportunity emerging, with guys like Chris Murphy clearly articulating the impact of AI on jobs, and connecting it to the greed of figures like Altman and Musk. People like having jobs and hate these billionaires.
If you care about putting an end to the AI arms race, then I hope you'll hold your nose and embrace these progressives, even if you despise their economic policy or whatever. It's a much more promising opportunity than, say, pleading with David Sacks.