
As a senior software engineer, I am serenely unconcerned by Devin 1 on a personal level. A huge part of being a senior engineer is building the thing the business needs, not what the business initially asked for. And companies have had inexpensive access to actual human programmers for decades. Lots of companies choose cheap and bad. If they survive, well, I've made good money helping them deal with the consequences.

And, well, to put it charitably, Upwork is full of tiny, bottom-feeding projects that you can do in a day. Which is an entirely different problem from getting a real company from $0 to $20 million/year in revenue without self-destructing.

The worry here is the trend line. GPT-3.5 has a lot of book knowledge, but it doesn't have the planning and execution abilities of the average squirrel. (Squirrels are really good problem solvers, as anyone with a bird feeder can attest.)

Devin 1, if this isn't a rigged demo, is showing the performance of an incompetent intern. But, uh, that's amazing! Very much worth mentioning.

Devin 2 will likely be better. And, well, there's probably a threshold here, where you get a key set of abilities all worked out. And once you hit that threshold, I bet things change quickly.

And if you think, "Well, sucks to be a software engineer, but happily I do _______ instead," whose job do you think many of those unemployed senior software engineers will try to automate next?

Before we go down this path, we need to ask ourselves whether we want humans to be economically viable in the future. And we need to ask ourselves what happens if we're only the second-smartest species participating in the economy.

Also, we need to seriously consider the possibility that we simply can't maintain robust control over things smarter than us. "Alignment" sounds nice, but what if it isn't actually a thing? Like, what if the best we can do is teach the machine to agree with platitudes when asked? LLMs are literally actors, and already very good ones despite their lack of human-level reasoning.


I don't think there's any 'we' to speak to inside this sentence:

"Before we go down this path, we need to ask ourselves whether we want humans to be economically viable in the future. And we need to ask ourselves what happens if we're only the second-smartest species participating in the economy."

And the other problem is that some AI researchers are insane and would nod along with your arguments and still not care.


Oh, I do not expect to persuade a critical mass of people right now.

If we are, in fact, on the road to smarter-than-human AI, then I expect us to run down it at full speed. And if we run off a cliff at full speed, then any warnings will turn out to have been useless.

But maybe we merely faceplant into the gravel and lose some skin. At which point, enough people might be willing to listen.

Sometimes, you need to lay the groundwork for good ideas well in advance. And ideas really can change the world.


To the point about humans being economically viable, there's a good chance it'll be fine. https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the

I do agree the alignment question still seems open.


I understand comparative advantage. It's a neat theory. If A is better at literally everything than B, but A has finite time, then there is a net gain of wealth if B works, too. B should do whatever they suck least at, and A should do everything else. When all workers have sufficient input resources, and all workers are actively using their strongest personal skills, you maximize total wealth.
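To make that concrete, here's a toy sketch in Python. Every number in it is invented for illustration; the only point is that when A's workday is the binding constraint, putting B to work on whatever B is least bad at still adds wealth, even though A beats B at everything.

```python
# Toy comparative-advantage sketch. All rates, prices, and hours are invented.
PRICE = {"widgets": 1.0, "gadgets": 1.0}  # value per unit, arbitrary

RATE = {                                     # units produced per hour
    "A": {"widgets": 10.0, "gadgets": 8.0},  # A is better at both goods
    "B": {"widgets": 1.0,  "gadgets": 4.0},  # B's least-bad option is gadgets
}

def value(allocation):
    """allocation: {worker: {good: hours worked}} -> total value produced."""
    return sum(
        hours * RATE[worker][good] * PRICE[good]
        for worker, plan in allocation.items()
        for good, hours in plan.items()
    )

# A only has an 8-hour day, no matter what B does.
a_alone   = value({"A": {"widgets": 8}})                       # B sits idle
b_widgets = value({"A": {"widgets": 8}, "B": {"widgets": 8}})  # B copies A
b_gadgets = value({"A": {"widgets": 8}, "B": {"gadgets": 8}})  # B's niche

print(f"A alone:              {a_alone:.0f}")    # 80
print(f"A + B making widgets: {b_widgets:.0f}")  # 88
print(f"A + B making gadgets: {b_gadgets:.0f}")  # 112
```

Specialization helps only because A's eight hours are the scarce thing, and that's exactly the assumption the next scenario breaks.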

But let me try to explain a scenario where comparative advantage might not apply to AI.

Let's assume that we can clone A in seconds. Now we have A1 and A2. And A3, A4, etc. Each clone is exactly as good at everything as the original, and they share knowledge regularly.

Let's imagine that A works roughly 1,500x faster for 20% of the resources, compared to B. (Those are actual numbers from one use case I saw last week.)

At this point, the easiest way to maximize total wealth is to give all the raw resources to A. There's nothing B could do that a clone of A couldn't do better. And clones of A are dirt cheap to make, and they cost a fraction of the upkeep of B.
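Here's the same kind of toy sketch for that scenario. The 1,500x speedup and 20% resource figure are the ones I quoted above; the pool size, upkeep costs, and output units are made up.

```python
# Toy model of the clone scenario. Only the 1,500x and 20% figures come from
# the use case above; everything else is invented for illustration.
RESOURCE_POOL = 100.0        # fixed pool of raw resources, arbitrary units

B_UPKEEP = 1.0               # resources one human worker (B) consumes
A_UPKEEP = 0.2 * B_UPKEEP    # a clone of A runs on 20% of that
SPEEDUP  = 1500.0            # each clone of A works ~1,500x faster than B

B_OUTPUT = 1.0               # normalize one human's output to 1
A_OUTPUT = SPEEDUP * B_OUTPUT

def total_output(humans: int) -> float:
    """Employ `humans` B-workers, then spend the leftover pool on A clones."""
    leftover = RESOURCE_POOL - humans * B_UPKEEP
    clones = round(leftover / A_UPKEEP)   # how many clones the leftover funds
    return humans * B_OUTPUT + clones * A_OUTPUT

for humans in (0, 10, 50, 100):
    print(f"{humans:3d} humans employed -> total output {total_output(humans):>11,.0f}")
#   0 humans -> 750,000  (500 clones)
# 100 humans ->     100  (no resources left for clones)
```

Every human slot displaces five clones, trading roughly 7,500 units of output for 1. Comparative advantage assumed A's capacity was the scarce thing; once A can be cloned for cheap, there's no allocation where B's labor is worth its resource cost.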

This isn't Economics 101. This is Ecology 101. In a world with finite resources, some species go extinct because they can't find a viable niche. Chimpanzees do not have a comparative advantage at anything. They survive only because nobody wants their resources, or because we decide to spend some of our own resources to keep them around.

In a world where AIs and humans compete for the same raw resources, there are a lot of ways that smarter-than-human AI could be very bad for us.

Assuming we can't somehow magically control things much smarter than us, we should refrain from building ASI. Or if we do build it, we should hope the AI is like, "Nah, let's keep Earth as a nature reserve full of adorable humans while we go rebuild the galaxy." Or, if we're lucky, "Who wants to go for walksies to the rings of Saturn? Who's a good human?"
