32 Comments

A couple of thoughts:

(1) Megan McArdle's point about most people not wanting customization or to be power users is, I think, underappreciated by the vast majority of technologists. And her point about open source software bears consideration, I think, when people confidently assert that the future of AI is open source. To the extent that open source software is harder to use, for the median user, than is closed source, open source AI will be relegated to the sidelines. On the other hand, this time may really be different, and it may be the case that autonomous AI agents, which of course we can expect to have no issues with complexity, will become avid users of open source AI tooling.

(2) Sam Altman keeps saying that compute is currency. To the extent that that claim is true, one has to wonder what happens with OpenAI. OpenAI has no compute resources of its own, and seems to be entirely dependent on Microsoft's good graces. And, as you mentioned, MSFT essentially just engineered a wholesale acquisition of Inflection without calling it an acquisition. In a world where compute is currency, what happens to those companies devoid of such currency?


> Baby, if you are smarter than all humans combined, you can drive my car

Trying to steelman: what if the AI running on a gargantuan supercomputer with five nuclear power plants nearby is smarter than all humans combined, but the AI that fits in a car computer still can't drive a car better than a human? Yes, this situation requires an extremely unlikely comparative complexity of different tasks, but it seems to me there is no internal contradiction here.


Reasoning ability is not all that is needed to drive a car. GPT-4 can already answer most useful questions about an image of a typical driving scene, but it might take up to a minute to do so. By the time GPT-4 gets through its self-talk about how a ball in the street may indicate the presence of a tunnel-visioned child chasing it and gives the order to brake, the critical moment has probably long since passed.
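
To put rough, purely illustrative numbers on that (the speed and latency figures below are my assumptions, not measured GPT-4 numbers):

```python
# Back-of-the-envelope sketch: how far the car travels while a slow model "thinks".
# Latency values are hypothetical assumptions, not measurements.

def metres_travelled(speed_mph: float, latency_s: float) -> float:
    """Distance covered during the model's response latency, in metres."""
    metres_per_second = speed_mph * 1609.344 / 3600
    return metres_per_second * latency_s

for latency in (0.1, 2.0, 60.0):  # assumed latencies in seconds
    print(f"At 30 mph, a {latency:5.1f} s response means ~{metres_travelled(30, latency):.0f} m travelled first")
```

Even a two-second response time means roughly 27 metres travelled before any braking decision arrives.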


Better steelman: Elon knows firsthand the regulatory hurdles and barriers to adoption of a new car technology. Getting to a majority of self-driving cars will take years after we solve the self-driving problem.


Alternative theory: Musk is speaking off the cuff, using the narrowly delineated definition of 'smart' that we use for LLMs right now. An LLM can't shoot a three-pointer, but it can do much better on the SAT than probably 90% of people who can shoot three-pointers.

Furthermore, the regulatory and commercial environment required for even a brilliant and totally safe completely autonomous AI vehicle does not exist. Even if it were legal, it's not clear who insures it and how you evaluate who is at fault in the event of an accident. It is almost certain that the technical solution will arrive complete from the brow of Jove before legislators allow it or anybody but Musk will risk it on the road.

I would argue that current tech is obviously safer than human drivers for probably 99% of scenarios already, and should be in widespread mandatory use if only because it would save lives, but regulators don't see it that way.


> statistical logics underpinning artificial intelligence reveal continuities with "racial imperialist views of national progress."

Wow, that's "Jewish physics" again? Yes, yes, by Godwin's law I lose, I know.


I suspect the French are correct that universal cheap broadband is net negative for society and human flourishing, but even if you disagree, shouldn’t you be excited about the experiment? Suppose the French succeed and demonstrate that within developed economies, happiness and national prosperity are completely compatible with draconian regulation of software and networking tech. Isn’t that bullish for the future of humanity?


>On the plus side this would certainly motivate greatly higher efficiency in internet bandwidth use. On the negative side, that is completely and utterly insane.

Wonderful sentence. I want a science fiction novel where the ASI can only speak 50 words a day.

author

I presume it starts speaking German!


Re: Noah Smith's point on compute:

There are two sides to the "Cost of AI Assistance" metric: the cost of compute, and the amount of compute required to complete a given task. While your point that the cost of compute will continue going down is correct, it leaves out that second part - we have no way of knowing how large the necessary amount of compute per unit of utility will be by the time AGI is reached. But we can be sure it will be significantly higher than it is today, just by looking at the performance of current-generation LLMs.

If it ends up taking something like GPT-7 levels of compute to reach the level of intelligence necessary to e.g. substitute for a doctor, and compute costs keep falling only at something like their historical Moore's Law rate, then there's plenty of reason to expect that titanic level of compute to prove a binding constraint. In that scenario, Noah's premise would prove correct.
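
To make that concrete with a toy calculation (every number below is a hypothetical assumption, not an estimate from the post or from Noah):

```python
# Toy model of the two sides of the "Cost of AI Assistance" metric.
# Both constants are made-up illustrative assumptions.

required_multiplier = 10_000   # assume the "GPT-7 level" task needs 10,000x today's compute
halving_years = 2.5            # assumed time for compute cost per FLOP to halve

cost = float(required_multiplier)  # cost per task, normalised so today's model costs 1
years = 0.0
while cost > 1.0:
    cost /= 2
    years += halving_years

print(f"Under these assumptions, ~{years:.0f} years pass before the bigger model "
      f"is as cheap per task as today's model.")
```

The point is only that if the required-compute multiplier is large enough, even a steady exponential decline in compute prices leaves compute as the binding constraint for a long time.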

As a point of evidence, maybe there's a reason Sam Altman wants $7 trillion to build more compute capacity?


I can't tell for sure, but unless "I appreciate the honesty" was itself tongue-in-cheek, you do know that the Nvidia website screenshot is faked as a joke, right?

Also, SXSW's audience is far more artist focused than you imply. Tech is often the first word in the promotional material, but I recommend looking at footage of the audience for a random presentation.


"be more like Abraham did in his best moments, and tell Krishna no."

Abraham most certainly did not say no. He was in fact pretty gung-ho about killing his son for God! God had to stop him. Not sure what that says about us, parable-wise, but it doesn't seem great.


No, Abraham was playing Chicken with God. See the work of game theorist Steven Brams. (Another clue--Abraham had a history of bargaining with God to stay God's vengeance, e.g. with Sodom and Gomorrah. He got it bargained down to ten righteous men being enough to spare the twin cities of sin.)


This is a completely minor point, but on the fertility rates across cultures point: anecdotally, Chinese people (TFR 1.2-1.3) seem vastly more self-aware than my Western friends (TFR ~1.5; my social groups vastly below 1). Chinese people actively talk about "the last generation" and are keenly aware of the consequent social pressures. Childlessness is the default amongst my circles (late 30s, educated), and by and large the people I speak to haven't given the slightest thought to what a declining population would look like in practice.


“Elon Musk (March 13, 2024): It will take at least a decade before a majority of cars are self-driving”

I read this as literally about when a majority of cars on the road are self-driving. Bringing cars to market takes time, and replacing the worldwide fleet takes time.


Inflection: given the extremely large cash payment to Inflection from MS, this is looking like a rather blatant anti-antitrust gimmick. (Similar to MS's original structuring of the OA deal, its reported concerns during the OA coup about how to spin what looked like a great thing - Altman & co packing up and rebuilding OA inside MS - to the regulators which made it not so keen on that after all, and most recently, the gimmick structuring of its Mistral investment.)

'Muratori', not Mutari.

On a mortality note: Vernor Vinge died yesterday.


>> Remember that even the simple things are great and most people don’t know about them, such as Patrick McKenzie creating a visual reference for his daughter so she can draw a woman on a bicycle.

Great, huh?

https://www.google.com/search?client=firefox-b-1-d&sca_esv=1fba1384b54230c4&sxsrf=ACQVn08oWi5ihEXeE0J51OO1PbGWF4gT1Q:1711072822558&q=woman+riding+a+bicycle+art

Why do we need AI again?


What if your daughter wants to draw Shrimp Christ riding a bicycle?


This is my favorite reply I've ever gotten on Substack.


I am not an economist, so…

But…

I take it that your core objection to Noah Smith’s argument is that opportunity cost is in fact linked to/dependent on compute being expensive/scarce, because if it’s abundant enough then the space of “more valuable” jobs that were driving up the opportunity cost of e.g. AI doctors gets saturated, driving down the price of said jobs, and the opportunity cost disappears (like I said, not an economist; I had to think that through step by step; apologies if I’m stating the obvious). When I was reading his essay a couple of days ago, the vision of the future I thought he was presenting was much more along the lines of, “there will be an ever-expanding frontier of jobs that provide more value, and it will expand quicker than the cost of compute can come down (indeed, better and more abundant AI will accelerate its expansion), hence job space never gets saturated, hence AI will always have something better to do than take our jobs.”
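
To put toy numbers on that first reading (the figures are mine and purely illustrative, not anything from the post or from Noah):

```python
# Minimal sketch of the saturation argument with made-up numbers.
# With scarce compute, using the AI as a doctor displaces a higher-value job,
# so the opportunity cost is large; with abundant compute nothing is displaced
# and the relevant cost collapses toward the marginal cost of compute.

FRONTIER_JOB_VALUE = 500      # $/hour, hypothetical "more valuable" job
DOCTOR_JOB_VALUE = 100        # $/hour, hypothetical AI-doctor job
MARGINAL_COMPUTE_COST = 2     # $/hour of compute, hypothetical

def cost_of_ai_doctor(compute_is_scarce: bool) -> float:
    """What one hour of AI doctoring really costs us."""
    if compute_is_scarce:
        # That hour of compute could have earned FRONTIER_JOB_VALUE instead.
        return MARGINAL_COMPUTE_COST + (FRONTIER_JOB_VALUE - DOCTOR_JOB_VALUE)
    # Frontier already saturated: only the compute itself is forgone.
    return MARGINAL_COMPUTE_COST

print(cost_of_ai_doctor(True))   # 402: expensive while compute is scarce
print(cost_of_ai_doctor(False))  # 2: nearly free once the frontier is saturated
```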

Is this just a silly misinterpretation of what he’s on about? If not, does it make his claims less absurd?

author

No, you've got it.


I'm late to this party but also struggling. I can't wrap my mind around how, if labor becomes too cheap to meter, job loss still has much of a sting. In that world it seems like maybe we just don't actually need jobs?

I feel like we're doing:

Premise 1) Assume a post-scarcity society.

Ps 2-n) <Miscellaneous Deduction>

Conclusion) You starve.

(Or Noah's, "by certain constrained assumptions, you don't actually.")

Middle bit seems fishy.

But all the smart people are on the other side of this argument so I must be really obtuse here and missing something obvious. Please tell me if my fly is down here... I'm honestly just too ashamed to ask outside a mummified thread.


The International Dialogue statement is more substantial than you've quoted (their website hides the full thing by default, for some reason). It includes a call for red lines on autonomous replication/improvement, power seeking and deception. https://idais.ai/ and click the small "Read the full statement" button.

author

Oh wow, that is some of the tiniest print I have seen in a while. I did not see that at all. Thanks for pointing it out; I'll address it next week (by Sunday there's no real point in fixing a weekly on things like this).


Nice one. I've suggested to the team that they change the site design to make it clearer!


I made this online thing for concatenating all the text files from a GitHub repo into one text file for sharing with an LLM: https://w4t.pw/ei

Last week Zvi shared a python script that did something similar, which inspired me to spend some time making this.

There's a link to the source code in the header if anyone wants to see how it works or contribute any fixes or improvements. I created several issues outlining what needs to be fixed/improved.
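
For anyone curious about the general shape of such a tool, here is a minimal sketch of the idea in Python (this is not the actual source of the linked tool; the skipped directories and separator format are my own assumptions):

```python
# Walk a local clone of a repo and concatenate every readable text file
# into one file suitable for pasting into an LLM context window.
import os

SKIP_DIRS = {".git", "node_modules", "__pycache__"}  # assumed junk directories

def concat_repo(repo_dir: str, out_path: str) -> None:
    with open(out_path, "w", encoding="utf-8") as out:
        for root, dirs, files in os.walk(repo_dir):
            dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
            for name in sorted(files):
                path = os.path.join(root, name)
                try:
                    with open(path, "r", encoding="utf-8") as f:
                        text = f.read()
                except (UnicodeDecodeError, OSError):
                    continue  # skip binary or otherwise unreadable files
                out.write(f"\n\n===== {os.path.relpath(path, repo_dir)} =====\n\n")
                out.write(text)

# Example: concat_repo("path/to/cloned/repo", "repo_dump.txt")
```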


> Baby, if you are smarter than all humans combined, you can drive my car

When we pass AGI, will wormholes appear above everyone's driveway, and instantaneously swap out the old cars for the new self-driving cars that the ASI nano-fabricated an hour earlier, or will it still perhaps take ten or more years for human beings in the real world to voluntarily shift preferences and engage in transactions so that a majority of cars on the road are self-driving?
