18 Comments
Askwho Casts AI

The quote from Reuters about halfway down is malformed:

Reuters: yteDance and Alibaba (9988.HK), opens new tab have asked Nvidia (NVDA.O), opens new tab about buying its powerful H200 AI chip after U.S. President Donald Trump said he would allow it to be exported to China, four people briefed on the matter told Reuters.

hwold

> I’m sorry, but yeah, I’ll take immortality and scientific wonders over a few scientists getting the joy of discovery. That’s a great trade.

Change My Mind, but I don’t think that’s a trade you can make in isolation, and the general trade you can make is just "replacing us".

Again, once you have an AI that can cure cancer, you presumably have an AI that is also (or will soon be) superhuman at education, governance and policy, law, and economics (both running businesses and investing). Refusing to deploy AI there to leave a place for humans will raise the obvious objection: “bad governance/economics does more harm than cancer ever did, so why the hell do you accept AI in the latter but not the former?”

AI doom or what?

>AI is the best tool ever invented for learning.

>AI is the best tool ever invented for not learning.

>Which way, modern man?

As a former teacher, I agree and would add that the extent to which a teacher adopts an adversarial approach to teaching is a secondary consideration; primary is the extent to which a student wants to learn/do the work of learning. A whole lot figures into the latter equation. School is super frustrating for a lot of students. It's associated with trauma and lack of agency. They're literally forced to be there. Compared to video games and phones, it's an utter bore unless you're smart and curious. Smart, curious people have a hard time relating to being not-smart and not-curious.

(That said, I'm far less worried about education than I am about existential and catastrophic issues around the house of cards that comes from building too fast, too high, too soon.)

Bardo Bill

I've been thinking about this lately. It seems to me that AI is a great tool for learning, *presuming you already have a lot of learning under your belt.* That is, presuming you've already developed critical thinking skills, know how to ask questions, know how to interpret texts, know when to be credulous and when to push back. Will a generation that grows up with AI *already ubiquitous* get the chance to develop these skills? I'm sure some will. But a lot who otherwise might have might never actually do so.

Cara Tall

Prescient worry my friend lmao

0xdeadbeef

> Stephen Hawking is not general intelligence because his ability to impact the physical world is limited.

> Remote-first companies are not staffed by general intelligences.

Okay, fine, but let's not play stupid semantic games. Would 100 million remote Von Neumanns impact the world in a big way? Why wouldn't tons of processes change to take advantage of them, and avoid slowdowns caused by interacting with the physical world?

John

>We do not have the luxury of saying AI and lethal do not belong in the same sentence, if there is one place we cannot pause this would be it, and the threat to us is mostly orthogonal to the literal weapons themselves while helping people realize the situation

Why not argue the same thing about biological weapons (many people did in the 20th century)? Should we really just put AI in charge of bioweapons labs, because we'll lose to China if we don't? Seems crazy! We can in fact come to stable agreements to "not do X" internationally, even if there's a unilateral advantage to "doing X" for individual actors. Chem/bio weapons being a case in point; autonomous lethal weapons could be another. If your position is "let's not have AI kill everyone" doesn't it seem like a good place to start might be "let's not have AI kill anyone"?

Mark

>Most of all, I don’t think people are comprehending what ‘AI does almost any job better than humans’ means, even if we presume humans somehow retain control. They’re thinking narrowly about ‘They Took Our Jobs’ not the idea that actually nothing you do is that useful.

I’ll claim that, by definition, AI will never be able to do ALL jobs as well as a person because there will always be some consumers willing to pay for a “human touch.”

We’re long past the point where machines could take over every sewing job. But in practice, they’ve only taken over *most* sewing jobs. Because some consumers value traditional production methods and a human touch, hand-knitted apparel, despite costing much more than a machine-made product of equivalent quality, still has a market, and some people make their living producing it. The number of those people has only increased as society has gotten richer.

You might be able to model the AI takeover in a Baumol’s cost disease framework. Productivity in everything where people don’t care about a “human touch” can go up 10,000,000%... and that will massively increase the value of jobs providing a “human touch” even if those jobs see no productivity gain at all. ~Slowly~ Soon, all jobs will be about providing a “human touch,” probably often in ways we’d find hard to imagine now. But in a rich enough future society, wages will be, from today’s perspective, astronomically high for what currently seem like negligible subjective quality additions.

(Of course, this is assuming humans do retain control.)

Jonathan Woodward

Well, as you imply in your final sentence, even that logic only works if the people who want those human crafts control enough resources to be able to support the people making those crafts... and if that's the only thing humans can still do productively, then there will have to be a *lot* of demand for those crafts.

Kevin

Looks like China cares more about the possibility of invading Taiwan than about competing in the AI race. I'm not sure if this is reassuring or not....

Bruce Lambert

I’ve been teaching college for 35 years, never have I seen curiosity or agency at a lower level. So I’m pessimistic about AI being used as the best learning tool ever, though I continue to urge my students to use it as such. They continue to ignore me, to stare at their screens, and to sleepwalk through their education.

Ondrej Kubu

@Zvi, about the This week in audio section: could you please link to actual audio for the podcasts, or to youtube videos directly?

If you link to x.com, it is a pain to extract... Beware trivial inconveniences, levels of friction, and all that.

The whole reason for podcasts is to listen while doing something else, which does not work with youtube videos...

Sinity

> Maybe they are emphasizing self-reliance and don’t understand the trade-offs and what they’re sacrificing.

There's an old gwern comment (November 2022, tho I see it was edited in April 2025): https://www.lesswrong.com/posts/oBTkthd7h8sDpkiu2/analysis-us-restricts-gpu-sales-to-china?commentId=7nuZ2ANCw97oGqw2B

> From the scaling-pilled perspective, or even just centrist AI perspective, this is an insane position: it is taking a L on one of, if not the most, important future technological capabilities, which in the long run may win or lose wars. If China wants to dominate Asia, much less surpass the obsolete American empire, or create AGI, or lead in aerospace, or create '5G' or whatever, it's hard to see how it's going to do that while paying more for chips which are half a decade or worse out of date.

> But Xi is not scaling-pilled (after all, few people are, even in the most cutting-edge AI R&D labs). So maybe we should ask: is he centrist on AI? Er... Oh - does he care about AI at all? What evidence is there that he does? There doesn't seem to be much. Going further: what evidence is there that he even regards chips in toto as being all that important?

(...)

> the senior CCP leadership is semi-famous for being 'technical' (typically engineering degrees like hydrology or mechanics or aerospace) but little to do with anything computer. Xi Jinping has a degree in chemical engineering from a low-rigor period 43 years ago, and then a degree in BS, both of which might just be mostly fake (pretty common). Propaganda aside, his major intellectual interest is literature, particularly Goethe. He has not overseen any major technical projects, or made any major intellectual contributions I'm aware of.

> (...) his reign has been marked by an emphasis on legible atom-heavy scientific projects, and a general downplaying of everything related to bits or information, unless it has a national security angle (leading to 'Dutch disease' where an ultra-niche like facial recognition gets lavishly funded, crowding out more generalizable research). For all the talk of 'data is the new oil' or 'China as a data superpower' or the advantages from 'Chinese lack of privacy', China still drastically underperforms in making good use of it.

> There seems to be considerable contempt for the USA and American capabilities in China among 'wolf warriors', taking cues from the top, and with considerable historical precedent for authoritarian countries to mistakenly gauge the USA as 'decadent' and 'weak'. This may have been trimmed a bit after Ukraine and seeing what things like HIMARSs can do, but it runs deep and inside the Chinese bubble, there is little correction. (When was the last time Xi Jinping was in the USA and saw more than political flunkies? Or any Chinese, for that matter, given their multi-year near-shutdown of international travel?) The thinness of the air at the heights Emperor Xi inhabits is prone to induce altitude sickness and hallucinations. (But at least, thanks to "Zero COVID", among his many problems, personally getting COVID is not one of them.) If the Americans are decadent because of their emphasis on software and compute, and China & Xi are superior because they aren't decadent...

> (...) in past chip incidents, the primary problem has been a complete absence of any chips, and not so much the advancement of the chips themselves. He has never seen anyone lose a war due to lack of AI or GPUs; he's only seen disasters caused by lacking perfectly ordinary chips that his domestic manufacturers probably could've made 10 years ago. And in learning lessons from past chip incidents, what Xi brings to the table is: zero technical competence or expertise in the relevant area, a hatred of software and everything to do with it, and long-standing prioritization of heavy-industry-like stuff (which is clearly visible to the naked eye and 'conventional' and 'prestigious' and applauded by old credentialed foreigners).

> mistakes in this regard may be hard to see. 'The seen and the unseen' is a dangerous trap because it is so much easier to see the seen than it is to see the unseen. If Xi makes a mistake on chips, a military mistake, then by the nature of things military, he may never realize it. If the engineers of, say, hypersonic missiles can't get enough high-end GPUs, their complaints will be ignored by the next layer of management and never punted all the way up to Beijing, and they will simply run their simulations at a lower resolution or take other shortcuts; and if the hypersonic missile in question turns out to be a lemon, inadequate to hit NATO units or US aircraft carriers, how will anyone ever find out short of a war over Taiwan---at which point it is far too late? Naturally, of course, given a supply of at least basic chips to work with, the establishment will assure him everything is fine, just like the Russian military assured Putin it was not a paper tiger or hopelessly undermined by corruption, and almost all the time they will be right.

> from Xi's perspective, all in all, maybe it looks fairly reasonable to neglect chips right now. They aren't that important, and don't seem in that much worse trouble than anything else, while bailing them out to the degree where they can potentially gain, or at least near, the cutting-edge would use up a ton of an increasingly skint government's money. Plus, as master of the currents of history piloting China to a glorious Chinese Century avenging the Century of Humiliations, he has much bigger fish to fry, like the house of cards which is real estate, and Zero COVID. There will be side effects, yes, but if gaming GPUs becoming expensive helps turn little Aiguo away from a career as a useless game programmer into a respectable hardworking fusion physicist, perhaps that's even a feature rather than a bug?

> Well, I could be wrong about all this. But now I can see at least one perspective from which the chip embargo is a big deal but also Xi's rational response is to indeed just take it on the chin, and perhaps tone down the rhetoric and engage in a bit more biding one's time & hiding one's strength. (I doubt that the long-term aims have changed meaningfully just because Beijing is calibrating its rhetoric a little down from recent peaks of aggression, but in the short term, things will be superficially more peaceful.)

gregvp

Which human values?

Those that say that removing girls' clitorises and sewing together their labia, and marrying them off to their old male cousins at the age of nine, are fine and honorable things to do? Those values?

Not those values? But they are held by humans. How is an AI supposed to know?

Ondrej Kubu

> Gemini 3 continues to be very insistent that it is not December 2025, using lots of its thinking tokens reinforcing its belief that presented scenarios are fabricated. It is all rather crazy, it is a sign of far more dangerous things to come in the future, and Google needs to get to the bottom of this and fix it

This seems to be quite a general LLM problem; I have had similar experiences with the Claude models, though maybe less strongly. They overindex on their training data or the cutoff date in the system prompt and cannot adapt even when I give them the current date explicitly in the prompt.

Bardo Bill

I am starting to get the impression that Anthropic is the only company that's actually trying to achieve AGI. OpenAI seems increasingly interested in becoming the new Facebook: basically a monetized, engagement-maximizing, enshittifying app. Altman's focus and directives just seem too small-bore and inane to square with any sincere world-historical goals (which puts OpenAI back in the pack with Meta, xAI, and Google). Is that at all fair?

The defense might be "they're doing that stuff to get the revenue and investment they need to build AGI." But actually that seems like a description of what Anthropic is doing. Maybe I am being too generous to Anthropic though (if "generous" is the right word) and everyone's playing the same game.

Nathan Franz

Interestingly, the Pope's comment seems like it's saying exactly what the "don't use AI to cure cancer" guy was saying. I don't find myself applauding it.