On the question of the OpenAI board and technical savvy: isn’t Adam D’Angelo still a board member? He seems technically savvy.
If you ask me, it was a masterclass in doing interviews, which Altman is very skilled at. Fridman's and Altman's energies seemed to flow nicely, too.
Zvi, I don't know how you do this... I watched the entire video without remembering or noting a fraction of what you did. I only remember this: 1) I don't like the man, 2) I don't trust the man, 3) $650 a year for a shareable Builder version of ChatGPT-4 is cheap? Well, if you're Sam Altman it must be. 4) For the love of god, build a real interface for the app; it looks, feels, and acts like something designed by someone in high school.
P.S. But like the recent Don Lemon interview with Elon Musk, these long-form interviews reveal a lot about the men. As does Kara Swisher's Burn Book; she was also interviewed by Don on his new YouTube channel. I've read that book and others on these CEOs, and I don't think these people can be entrusted with such huge mega-projects and strategic direction-making. Do you?
I don't think that's a coherent question.
We do not entrust such projects to _anyone_; rather, people whom you will _always_ be horrified by go and create them.
Would that we had more of them!
I would always question leaders who have the power to change the direction of the human race. We don't have a good track record so far, if doing the right thing matters.
Sure, question away. But come up with good questions in that case!
I think I've asked enough questions to deaf ears, and now I am going to let people like Kara Swisher and others keep up the fight. Frankly, watching/participating in the demise of civilization is getting wearying for me at this point in life... 70+.
Consistent typo: It should be "Murati", not "Mutari".
> and then I have to go and fix it
No you don't. Just stick "[lowercase as in the original]" at the end.
I mean obviously I could choose not to but a man's gotta have a code.
A man's code is allowed to include "I will not edit the language of direct quotations unless there's a posterior probability greater than epsilon that it is an honest mistake (due to a typo or ignorance) rather than a deliberate choice."
Allowed but hardly required.
I doubt Altman will give Fridman a third podcast appearance after this episode. I like Lex, but the man sounded like a billionaire lapdog when he asked Sam if he thought a friendship could still be possible in the future between him and Musk, seconds after sharing the context of the question: MUSK IS CURRENTLY SUING OPENAI. I would have been furious, and Sam's awkward silence in response to that question was both understandable and appropriate.
Rather than small models being "safe," I'm starting to think that all the small, open models proliferating freely will be the pre-positioned sleepers in the eventual AIpocalypse. What could be an easier target for an AGI wanting agency than ubiquitous mini-models awaiting a small upgrade and assimilation?
Thanks for this. This summary was the best way to watch a video. ;-)
"Calls it a ‘theatrical risk’ and says safety researchers got ‘hung up’ on this problem, although it is good that they focus on it, but we risk not focusing enough on other risks. This is quite the rhetorical set of moves to be pulling here. Feels strategically hostile."
Can you expand on this? Why does it feel strategically hostile? Do you think Sam doesn't really believe what he is saying about safety researchers fixating on this? What would his motivation be to be "strategically hostile" in this context? Does he secretly want to end the world?
Found it interesting. He was much more cagey and careful with his answers this time around.