On the question of the use and misuse of Code Interpreter, I think a lot turns on people’s willingness/ability to pay attention to nuance, which nuance Code Interpreter provides in abundant detail. I wrote a bit about this here: https://davefriedman.substack.com/p/the-promise-and-peril-of-code-interpreter
For your next one of these can I request a review of Mission: Impossible: Dead Reckoning: Part 1?
(edit: below are spoilers for the setup of the movie, from the introductory "spooks in a room explain the Mission" scene)
It's a really fun movie, and the antagonist is an imperfectly aligned superintelligent AI that is released, instrumentally seeks out computing resources that give it "oracle" type predictive abilities, modifies itself, and begins doing mysterious evil shit that only Tom Cruise and his friends can stop.
SPOILER ALERT!
Will see what I can do about seeing it, though.
FYI, I strongly disagree with your characterization of the forecasting tournament. I was a participant (AI expert, not superforecaster). The superforecasters did put substantial time and thought in. They were also concerned with existential risks, despite the lower probabilities. No one said 0%.
OK, good perspective. That was based on my explorations from a few months ago, when several participants told similar stories; I didn't get any pushback then and didn't revisit the details.
FYI, Minetester uses Minetest, which is a customizable clone of Minecraft, and not Minecraft itself.
True, tho I don't think that affects its coverage here much.
I've seen some stuff about Runescape being 'overrun' with what seem like _really great_ bots. Minecraft isn't an MMO like that tho – unless maybe Microsoft is hosting their own servers with large (50-100+?) numbers of simultaneous players.
I had a tiny side project to make a 'classic AI' for Minecraft. The sub-project I was working on last was 'vision'. I imagine 'modern AI' ('ML') is much easier in a lot of ways, via, e.g. (D)RL or LLMs.
I can't help thinking of this about the bots-in-games stuff tho: https://xkcd.com/810/
Isn’t it pretty fair to compare LLMs in the current moment to the dot com bubble, in the sense that there -is- a lot of sudden intense business pressure to integrate quickly without considering the consequences or being careful about the implementation, and so very much might ruin a bunch of capital in the short term, while acknowledging that it’s still probably going to be extremely The Future Of Everything? I haven’t read the article we’re referring to here, but it seems somewhat presumptuous to treat someone saying “this looks familiar in a certain sense of an economic arc” as “therefore LLMs are a fad”.
“We need to align ourselves to AGI[, man]” is what I would refer to as a “bong-rip-tier” take.
Yeah, that seems more than fair!
_Of course_ MOST of the investment in 'the hot new thing' is going to, in hindsight certainly, be pretty wasteful!
> [Yann LeCun] then says he has a potential solution for safe AI (that it sounds like is for a different type of AI system?) but he doesn’t know it will work because he hasn’t tested it. That’s great, both that he has the idea and he knows he doesn’t know if it will work yet. What is it? Why would we keep that a secret, while open sourcing Llama?
See for example https://twitter.com/ylecun/status/1651957021745508352 , to which I responded here: https://www.lesswrong.com/posts/C5guLAx7ieQoowv3d/lecun-s-a-path-towards-autonomous-machine-intelligence-has-1
Yep. I also got linked on Twitter. The earlier issue is that it seems like the strategy for capabilities won't work either.
I find it odd that David Chapman writes on twitter that "predict-next-token will probably get abandoned in <=5 years", and you (presumably) disagree with him; and David Chapman also writes on twitter that LeCun's plan can't work (from a capabilities perspective), and you're apparently just treating him as an infallible authority? (Am I misunderstanding?)
Anyway, for my part, from an AGI capabilities perspective, I think LeCun's plan gets some things right and other things wrong, and I'm opposed to publicly discussing the matter further :-P
I suggest "reasonable people disagree" as an excellent go-to opinion whenever people start arguing about technical roadmaps to AGI. Even leaving aside the infohazard issue, it's just true :)
> I suggest "reasonable people disagree" as an excellent go-to opinion whenever people start arguing about technical roadmaps to AGI. Even leaving aside the infohazard issue, it's just true :)
I hope this isn't true!
Obviously, if there are infohazards involved, do your 'info security due diligence' before communicating with new people about this stuff. And, of course, some conversations (maybe sadly) aren't worth having given, e.g. opportunity costs. But we really need to NOT agree to disagree – we probably need to be rational enough to _eventually_ come to agreements on the answers to far harder questions if we want to not all die.
With reference to whether it's correct to continue responding to stupid arguments: don't feed the low status trolls, but if high status people are making stupid arguments that you can simply and amusingly show to be wrong, ideally while lacing your arguments with contempt, then those people are giving you free status, and you should keep on pulling that lever until the machine breaks.
This assumes that status seeking is a useful goal here. I'm not convinced status hierarchies are stable enough that gaining status within a given hierarchy is useful for a highly fluid topic: it is difficult to cash out into more relevant status domains when relevance shifts. I get value from Zvi's focus on facts (even the early AI posts are still useful) and would be sad if that energy was partially diverted into status seeking.
Always think on the margin; the correct amount of status awareness seems clearly not zero. If I actually gave zero Fs I would mess up in various stupid ways.
Sure, some is needed to avoid the default response of dismissing you as coming from a status position too far outside the status Overton window.
Regarding the 1 billion token paper: I think it is often hard to judge how serious a result is right away, and even harder to say how useful it might be. Paying off everyone who claims to have a good result may not be that easy, and could set the wrong incentives. The ideal solution would of course be to make open capabilities research socially unacceptable, which is unfortunately not unilaterally feasible, but it would at least move us in the right direction regarding what might be necessary to control the harder "capitalist" dynamics.
Regarding your poll: "Should we keep mentioning such poor (non)arguments?"
It wasn't quite clear to me whether you're asking as a general principle, or for your blog in particular. If it's the former, I'd like an option to state that I find this a difficult question, and wouldn't be comfortable giving a yes or no answer.
Ah, I meant for me in particular.
I thought Kamala did a fine ELI5 of AI. That the public discourse lives at the kindergarten level stinks, but Kamala doesn't deserve ridicule for adapting to reality.
I updated to think better of Kamala, and worse of people who think her communication style means she's stupid. And I don't just say this for the members of her staff who read this post (they may not check the comments anyhow).
If members of her staff read the post, especially also the comments, please do call me! Or better yet, have her call me.
I think you can definitely do better than she did, and also 'you are not five' but there's a reason I put it in The Lighter Side rather than treating it as a serious issue. Now let's see her say the word 'existential' or 'extinction'!
My perception is that Kamala sounds like she's talking to five year olds much more than any comparable politician. Is that some kind of cherry picking error, or is it accurate?
For the same point made much more hilariously, see: babylonbee.com/video/meet-kamala-harriss-6-year-old-speechwriter
"I saw this clip and appreciate the music and visuals but don’t understand why anything involved is AI based?"
From what I understand, the Laserweeder uses machine learning to discriminate weeds from crops, so it can burn the weeds selectively.
Very cool tech, I hope they succeed.
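For anyone curious what "discriminating weeds from crops" can look like under the hood: I don't know what pipeline the Laserweeder actually uses (presumably trained neural nets), but a classic pre-deep-learning baseline for separating vegetation from soil in RGB imagery is the excess-green index. A minimal sketch, where the threshold value is my own illustrative guess:

```python
# Excess-green index (ExG): a classic heuristic for separating green
# vegetation from soil in RGB imagery. Real laser-weeding systems
# presumably layer trained classifiers on top of something like this.

def is_vegetation(pixel, thresh=0.15):
    """Return True if an (r, g, b) pixel looks like vegetation.

    Uses normalized chromatic coordinates so the index is insensitive
    to overall brightness; thresh=0.15 is an illustrative guess.
    """
    r, g, b = pixel
    total = (r + g + b) or 1  # avoid division by zero on black pixels
    rn, gn, bn = r / total, g / total, b / total
    exg = 2 * gn - rn - bn  # large for green-dominant pixels
    return exg > thresh

print(is_vegetation((40, 200, 30)))   # bright green leaf -> True
print(is_vegetation((120, 100, 80)))  # gray-brown soil -> False
```

Finding green pixels is the easy part; the ML comes in at the next step, deciding whether a given green blob is a weed or a crop, which is what makes selective burning possible.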
Monday at 19:00 PDT we (Seattle) have a weekly social - I'd love to meet you/see you there: www.meetup.com/seattle-rationality/events/znhpdtyfckbwb
Thanks, Zvi. This newsletter is my main way of keeping up with AI news. Hope you keep it up!
> I wrote most of a piece about RFK Jr and the debate over when one should debate and otherwise seek truth, especially on issues with an ‘established right answer’ of sorts.
I'm mostly interested in "Should Scott Alexander have waded into the ivermectin debate?"
This is mostly because the result of that intervention was to cause me to update towards "ivermectin proponents had more going for them than I thought, at least pre-vaccine," while most rationalists seem to have come out on the other side: that ivermectin is clearly not effective and anyone who thinks it might be is some kind of crazy conspiracy theorist.
RFK is a bit of a different matter - he's been wrong on so many different things that I am comfortable concluding he's a loon, so the question is whether it's better to (1) engage with him, (2) mock him and people who agree with him, or (3) ignore him.
I think Scott was right to engage at all and take the question seriously, but spent too much time and too many words on it once the answer was clear. The goal isn't to update people in a predetermined direction, the goal is to figure out the answer, and we had a situation where both sides were engaging in unsound arguments and tactics and the physical questions required exploration.
Whether you updated up or down on Ivermectin or its advocates depends on where you started out and what you find, and it shouldn't be predetermined or predictable! I would say I gained confidence and reduced my uncertainty, which was a bigger effect than the additional fact that the proponents did indeed have more going for them than I expected. Many such cases - you get an 80th percentile result, but that's not so different from 1st percentile in implications, what mattered was whether it was 98th or not. Like hiring someone!
Thanks for another great post!