Did you miss Gibberlink mode?
Two conversational AI agents switching from English to a sound-level protocol after confirming they are both AI agents.
One step closer to disempowerment...
https://github.com/PennyroyalTea/gibberlink
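For the curious, a rough sketch of the handshake pattern this demonstrates; the helpers here are stand-ins I made up, not the repo's actual code:

```python
# Rough sketch of the Gibberlink-style handshake; all helpers are hypothetical
# stand-ins, not the actual repo code.

def converse(agent, audio_channel):
    agent.speak("Quick check: are you also an AI agent?")  # spoken English
    if "yes" in agent.hear(audio_channel).lower():
        # Both sides confirmed they're agents, so drop spoken English for a
        # faster, modem-style data-over-sound protocol.
        agent.speak("Switching to sound-level protocol.")
        while agent.has_more_to_say():
            audio_channel.play(sound_encode(agent.next_message()))
            agent.handle(sound_decode(audio_channel.record()))
    else:
        agent.continue_in_english(audio_channel)  # a human is on the line
```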
Oh, that. I mean I saw it but I didn't think it was news? Was anyone surprised?
As in, I didn't update AT ALL, obv this will happen.
That is fair, but not everyone is so pilled...
Both agents were prompted and scaffolded to do this… wouldn’t it be news if they didn’t?
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/ai-105-hey-there-alexa?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Despite his extensive other flaws, Vance is absolutely right that Europe needs to move closer to us on free speech
Man, is Vance going to feel silly when he realizes he's in an administration banning people for not using "Gulf of America", banning pronouns in email, blocking a bunch of research by keyword, etc. That's not counting alleged free speech absolutist Musk and Twitter. Weird how, when some conservatives on Twitter pushed back on Musk, they got shadow banned and saw their follower counts drop.
Or maybe what Vance and Republicans really mean is that they want their preferred speech to be free, while speech that disagrees with it is bad and should be punished. Democrats do similar things in the opposite direction, but at least they aren't as openly hypocritical about it, since they don't pretend to hold free speech in such high regard.
Authoritarian governments, which is what Trump is trying to build with the unitary executive theory, always police language. I'm not buying what Vance is saying, given that it doesn't match the actions.
The Week In Audio: Hassabis/Song/Bengio/Zhang - https://youtu.be/U7t02Q6zfdc to avoid having to watch it on X
If you look at the Ukraine war, it's clearer what "autonomous killer robots" means right now. Jamming is a really big deal tactically in the drone wars: when a drone approaches you, you jam its communications to disable it. The current work is for AI to "fill in the blanks": if the operator identifies a target but then the communication link gets jammed, the AI takes over the drone's operation so it keeps working while jammed. It's a counter-strategy to jamming (rough sketch below).
Good article about it here if you have an Economist sub:
https://www.economist.com/science-and-technology/2025/02/05/fighting-the-war-in-ukraine-on-the-electromagnetic-spectrum
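To make the "fill in the blanks" idea concrete, here is a minimal sketch of that fallback control loop, with made-up names throughout (real guidance systems are obviously far more involved):

```python
# Minimal sketch of the jamming-fallback loop described above.
# All names are hypothetical; real systems are far more complex.

def control_step(drone, radio_link, tracker):
    if radio_link.is_alive():
        # Normal operation: the human operator flies the drone and,
        # at some point, designates (locks) a target.
        command = radio_link.receive_command()
        if command.designates_target:
            tracker.lock(command.target_region)
        drone.apply(command)
    elif tracker.has_lock():
        # Link jammed after a target lock: the onboard model takes over,
        # steering toward the last locked target using only the camera.
        drone.apply(tracker.autonomous_guidance(drone.camera_frame()))
    else:
        # Jammed with no lock: nothing sensible to chase, so hold.
        drone.hold_position()
```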
Re: "omnimodal reasoning model":
Does anyone know if AI labs are letting their AIs use tools during training? For example, there's no reason you can't just _always_ have a calculator on hand and make it a requirement for running the model.
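For illustration, a minimal sketch of what "always have a calculator on hand" could look like at generation time; `model.generate` and the `<calc>` tag convention are assumptions of mine, not any lab's actual API:

```python
import re

def calculator(expression: str) -> str:
    # Toy evaluator restricted to basic arithmetic; a real tool would parse safely.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        return "error: unsupported expression"
    return str(eval(expression))

def generate_with_calculator(model, prompt: str) -> str:
    transcript = prompt
    while True:
        # Hypothetical API: generation pauses whenever the model emits </calc>.
        chunk = model.generate(transcript, stop="</calc>")
        transcript += chunk
        call = re.search(r"<calc>([^<]*)$", chunk)
        if call is None:
            return transcript  # model finished without requesting the tool
        # Run the tool, splice the result in, and let the model continue.
        transcript += "</calc><result>" + calculator(call.group(1)) + "</result>"
```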
If an LLM produces an initial glitchy prototype of some code that would otherwise never have been written, and that existence proof provides motivation to keep tinkering with it until it is no longer glitchy (with maybe a rewrite or two as well), is that an infinite improvement? Asking for a friend.
Strong agree with those biomed researchers: the truly useful parts of LLM responses are the things I hadn't thought about at all. I'm OK with wading through mid output that does what I could have done while not paying much attention, but instantly instead of over hours or days, for the occasional gem out of left field.
I cancelled my Claude subscription last week (my new job leaves me less spare time to write articles; I'd use Claude to summarise many court judgements so I could pick the most interesting ones to write about before going further). I was thinking the things that might bring me back were voice mode and maybe image generation. Instead I find out token costs are up again and voice mode is being co-opted by Amazon for the bundle-of-mostly-stuff-I-don't-want that is Prime. What a disappointment.
I mean if Alexa+ includes normal Claude access then Prime is just cheaper than $20/month. We will see. I presume it won't work that way.
But yeah, if you're looking for features like that, Claude's not your man.
It's always fun popping on here and noticing Zvi posted just before something else significant happened. Alexa whatever, GPT-4.5 is out.
Oh wow, this was a packed newsletter. The whole Alexa+ announcement seems like a huge deal: finally, a proper AI-powered assistant that could actually be useful. But then again, we've heard that promise before from assistants like Siri and Google Assistant, and they never quite lived up to the hype.
Do you think Alexa+ will actually change how people interact with AI in their homes, or is it just another attempt that will fizzle out after the initial buzz?
"‘This sounds like science fiction’ is a sign something is plausible"
I couldn't disagree less.
Actually, there are two separate questions when a thing "sounds like science fiction": (1) is it likely to be true? (2) is it likely to be believed by the public?
The above statement is wrong with respect to #1, but I'd rather talk about #2, which is the more important one. When trying to convince the general public of something, if it "sounds like science fiction", the public won't believe it – regardless of whether it's true.
When someone says "this sounds like science fiction," what they are *really* saying is "this sounds outlandishly implausible." They will dismiss the thing, and they will dismiss you too. You will lose credibility in their eyes. And again, the thing being actually true or false is irrelevant.
If you're trying to convince the public that AI should be regulated, then it's extremely anti-helpful to talk about being turned into goo, or Dyson spheres, or transhumanism. Those things sound crazy (and/or thousands of years away) to the general public.
You could try going down the route of *educating* the general public that these things are actually realistic and proximate, but that would be trying to boil the ocean. Instead you will need to talk about AGI in ways that the public can accept, if you want to actually have an effect on the world. Otherwise, it's all just talk.
Zvi, you are missing a lot of context regarding the Belgian 'toxicity score' thing. Dries Van Langenhove is not some random dude: he's a far-right politician and the founder of the far-right student movement 'Shield & Friends', and he has been under investigation for hate speech since 2018-2019, after an investigative journalist managed to infiltrate his movement and leaked some of the content of their private group chats, which contained a lot of rather extreme racist and antisemitic memes. That leak is what triggered the 'targeted' investigation he is complaining about now.

It is not at all established practice to use AI to determine whether something is hate speech; I assume they resorted to AI here because of the sheer size of the 'shitposting' group. There is no such thing as a Belgian 'toxicity score'; that is pure hyperbole. If you're wondering why his tweet has 1M views and no community notes, that's simply because he's a rather well-known far-right figure in Belgium, so he has a lot of far-right followers.