29 Comments

On the question of the use and misuse of Code Interpreter, I think a lot turns on people’s willingness/ability to pay attention to nuance, which nuance Code Interpreter provides in abundant detail. I wrote a bit about this here: https://davefriedman.substack.com/p/the-promise-and-peril-of-code-interpreter

Jul 13, 2023·edited Jul 13, 2023

For your next one of these can I request a review of Mission: Impossible: Dead Reckoning: Part 1?

(edit: below are spoilers for the setup of the movie, from the introductory "spooks in a room explain the Mission" scene)

It's a really fun movie, and the antagonist is an imperfectly aligned superintelligent AI that is released, instrumentally seeks out computing resources that give it "oracle" type predictive abilities, modifies itself, and begins doing mysterious evil shit that only Tom Cruise and his friends can stop.


FYI, I strongly disagree with your characterization of the forecasting tournament. I was a participant (AI expert, not superforecaster). The superforecasters did put substantial time and thought in. They were also concerned with existential risks, despite the lower probabilities. No one said 0%.


FYI, Minetester uses Minetest, which is a customizable clone of Minecraft, and not Minecraft itself.


Isn’t it pretty fair to compare LLMs in the current moment to the dot-com bubble? There -is- a lot of sudden, intense business pressure to integrate quickly without considering the consequences or being careful about the implementation, so a great deal of capital may well get ruined in the short term, even while acknowledging that LLMs are still probably going to be extremely The Future Of Everything. Haven’t read the article we’re referring to here, but it seems somewhat presumptuous to treat someone saying “this looks familiar as an economic arc” as meaning “therefore LLMs are a fad”.

“We need to align ourselves to AGI[, man]” is what I would refer to as a “bong-rip-tier” take.


> [Yann LeCun] then says he has a potential solution for safe AI (that it sounds like is for a different type of AI system?) but he doesn’t know it will work because he hasn’t tested it. That’s great, both that he has the idea and he knows he doesn’t know if it will work yet. What is it? Why would we keep that a secret, while open sourcing Llama?

See for example https://twitter.com/ylecun/status/1651957021745508352 , to which I responded here: https://www.lesswrong.com/posts/C5guLAx7ieQoowv3d/lecun-s-a-path-towards-autonomous-machine-intelligence-has-1


With reference to whether it's correct to continue responding to stupid arguments: don't feed the low status trolls, but if high status people are making stupid arguments that you can simply and amusingly show to be wrong, ideally while lacing your arguments with contempt, then those people are giving you free status, and you should keep on pulling that lever until the machine breaks.

Jul 13, 2023·edited Jul 13, 2023

Regarding the 1-billion-token paper: I think it is often hard to judge right away how serious a result is, and even harder to say how useful it might be. Paying off everyone who claims to have a good result may not be that easy, and may set the wrong incentives. The ideal solution would of course be to make open capabilities research socially unacceptable. That is unfortunately not unilaterally feasible, but it would at least also move us in the right direction regarding what might be necessary to control the harder "capitalist" dynamics.


Regarding your poll: "Should we keep mentioning such poor (non)arguments?"

It wasn't quite clear to me whether you're asking as a general principle or about your blog in particular. If it's the former, I'd like an option to say that I find this a difficult question and wouldn't be comfortable giving a yes-or-no answer.


I thought Kamala did a fine ELI5 of AI. That the public discourse lives at the kindergarten level stinks, but Kamala doesn't deserve ridicule for adapting to reality.

I updated to think better of Kamala, and worse of people who think her communication style means she's stupid. And I don't just say this for the members of her staff who read this post (they may not check the comments anyhow).


"I saw this clip and appreciate the music and visuals but don’t understand why anything involved is AI based?"

From what I understand, the Laserweeder uses machine learning to discriminate weeds from crops, so it can burn the weeds selectively.

Very cool tech, I hope they succeed.
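The core logic described above, a classifier gating a laser, might be sketched like this. This is purely illustrative: the `Detection` type, labels, and confidence threshold are hypothetical, not anything the actual Laserweeder is known to use.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # hypothetical classifier output: "weed" or "crop"
    confidence: float  # classifier confidence in [0, 1]
    x: float           # position in the camera frame
    y: float

def targets_to_burn(detections, threshold=0.9):
    """Return positions of plants the laser should burn.

    Only high-confidence weed detections are targeted, so an
    uncertain classifier errs on the side of sparing crops.
    """
    return [(d.x, d.y) for d in detections
            if d.label == "weed" and d.confidence >= threshold]
```

The interesting design choice is the asymmetric cost of errors: burning a crop is much worse than missing a weed, which is why a sketch like this only fires on high-confidence weed labels.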


Monday at 19:00 PDT we (Seattle) have a weekly social - I'd love to meet you/see you there: www.meetup.com/seattle-rationality/events/znhpdtyfckbwb


Thanks, Zvi. This newsletter is my main way of keeping up with AI news. Hope you keep it up!


> I wrote most of a piece about RFK Jr and the debate over when one should debate and otherwise seek truth, especially on issues with an ‘established right answer’ of sorts.

I'm mostly interested in "Should Scott Alexander have waded into the ivermectin debate?"

This is mostly because the result of that intervention was to make me update toward "ivermectin proponents had more going for them than I thought, at least pre-vaccine," while most rationalists seem to have come out on the other side: that ivermectin is clearly not effective and anyone who thinks it might be is some kind of crazy conspiracy theorist.

RFK is a bit of a different matter - he's been wrong on so many different things that I am comfortable concluding he's a loon, so the question is whether it's better to (1) engage with him, (2) mock him and people who agree with him, or (3) ignore him.


Thanks for another great post!
