45 Comments

How do I get access to Code Interpreter, so I can be unemployed as soon as possible? :)


I think you have a couple of typos where you refer to ‘George Hinton’ instead of ‘Geoffrey Hinton’.


> (On a personal note, I got myself into running once, that lasted about two months and then I found I’d permanently ruined my knees and I should never run again. Damn. Now I’m back on an elliptical machine.)

You could try jiu-jitsu. It's very low-impact. I broke my ankle pretty badly and can't run, but jiu-jitsu is fine.


The "Five Worlds" post was a joint effort by me and Boaz Barak (who actually did most of the writing) -- please update the post to reflect this; thanks!

I'm not sure that the community that invented and popularized the paperclip maximizer gets to complain about "mockery" when others also use that as their go-to example of an objective function whose optimization goes terribly for humans. Yes, I agree that a Paperclipalypse could result from what seemed *at the time* like a completely reasonable goal for an AI, and that this is the central worry -- but almost by definition, the goal would've clearly been a stupid one *in retrospect.*

The main crux seems to be that you're skeptical that either Futurama or AI-Dystopia are live possibilities -- you see either a fizzle, or else Singularia/Paperclipalypse, and not much in between. But what if AI fizzles after reaching a level of intelligence *incomparable* to ours, where it's superhuman across many domains but still subhuman in others? Don't Futurama and AI-Dystopia both then become plausible?

I don't think the world of 2023 is nearly as weird yet as the world of the Futurama show from the standpoint of 1999, but I agree that it's more than 2.3% of the way there! :-)


re: Code Transformer - looking at the linked article, it shows that Microsoft is planning to roll out AI features to every Office user in the future. It seems like we would rapidly run out of compute in that case - not training compute, but just-running-ChatGPT compute if millions of Office users suddenly had access to this.

We may be looking at a short period of time where this stuff is available to everyone for free or cheap, followed by price hikes, availability windows, and quotas once Microsoft and OpenAI need to start turning a profit. Could be another (minor) factor in avoiding the AI-ruins-but-does-not-destroy-society scenario.
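A back-of-envelope sketch of why inference compute could bind here; every number below is an illustrative assumption, not a measured figure:

```python
# Rough inference-load estimate if every Office user got LLM features.
# All inputs are assumed, illustrative values -- not real usage data.
office_users = 300e6            # assumed active Office users
prompts_per_user_per_day = 10   # assumed daily usage per user
tokens_per_prompt = 1_000       # assumed prompt + completion size

tokens_per_day = office_users * prompts_per_user_per_day * tokens_per_prompt
print(f"{tokens_per_day:.1e} tokens/day")  # 3.0e+12 tokens/day under these assumptions
```

Even with these deliberately modest per-user numbers, the aggregate token volume dwarfs what any single consumer chatbot serves today, which is the point of the comment: serving, not training, becomes the bottleneck.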

May 4, 2023·edited May 4, 2023

Re prompting: I think people are way too obsessed with finding just the right superstitious incantations. In my experience, GPT-4 (and also 3.5) is much, much better at figuring out what you are talking about than it is at actually reasoning through things. It's almost uncanny: GPT-4 is significantly better at guessing what one is writing about than any human I know (although worse at extrapolating possible implications).

I suspect most of the impressive results are driven by the richness of its embeddings and finding the right spot in that high-dimensional landscape, then walking around in the vicinity a little.

For example, I've had as much or greater success with prompts like the following as by spelling out several long paragraphs; in fact, this is close to how I use these models to help me write code faster (GPT-4 here, but my results with GPT-3.5 variants are similar):

System message: max p(x|y), x = motivation, y = lazy tired adhd bored

User message: office tmrw, me and biz partner, 2 in comp, usually talk all day, how to avoid?

Completion:

To maximize motivation and minimize distractions when working with your business partner in the office tomorrow, consider implementing the following strategies:

1. Set clear goals and expectations: At the beginning of the day, discuss your goals and what you both need to accomplish. This will help keep you both focused and accountable.

2. Establish a schedule: Agree on specific work periods and breaks throughout the day. This will ensure that you both have dedicated time to complete tasks and also have time to socialize and relax.

3. Use headphones: Wearing headphones can signal that you are focusing on your work and discourage casual conversation. It can also help block out background noise and make it easier to concentrate.

4. Create a designated workspace: If possible, set up separate workstations or areas in the office to help reduce distractions and encourage productivity.

5. Communicate your needs: If you find that you are getting off track or need some quiet time to focus, don't be afraid to communicate this to your business partner. They may be in a similar situation and appreciate the honesty.

6. Use productivity tools: Utilize tools like time management apps or to-do lists to help you stay on task and prioritize your workload.

7. Break tasks into smaller segments: Break your work into smaller, manageable tasks. This can help you stay focused and make it easier to track your progress.

8. Reward yourself: Set mini-goals throughout the day, and reward yourself with a short break or treat once you've accomplished them. This can help keep you motivated and focused on your work.

Remember, communication is key. Be open with your business partner about your needs and work together to create an environment that is conducive to productivity.
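The terse system/user exchange above can be reproduced programmatically. A minimal sketch, assuming the openai Python package (the v0.x ChatCompletion API current in mid-2023); the API call itself is commented out since it requires a key, and only the message structure is taken from the comment:

```python
# Sketch of the terse-prompt style described above. The model name and the
# commented-out openai call are assumptions; the messages mirror the example.

messages = [
    {"role": "system",
     "content": "max p(x|y), x = motivation, y = lazy tired adhd bored"},
    {"role": "user",
     "content": "office tmrw, me and biz partner, 2 in comp, usually talk all day, how to avoid?"},
]

# import openai  # requires OPENAI_API_KEY to be set
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# print(response["choices"][0]["message"]["content"])
```

The system message compresses the objective ("maximize motivation given laziness/tiredness/ADHD/boredom") into a few tokens, and the model reconstructs the intent from its embeddings, which is the commenter's point.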


“From what I can tell, Robin is essentially fine with humans fading away in this fashion, so long as current humans are allowed to live out a pleasant retirement.

I still count such scenarios as doomed.”

I understand where you’re coming from here, but I do think it’s important to differentiate this from AI doom very significantly. For most people, the difference between living out their own and their children’s lives comfortably and then going extinct, versus going extinct violently 10 years from now, is like a gazillion orders of magnitude. So I don’t really think it’s good to bundle the two, even if they are similar from a longtermist perspective.


To clarify: the heat issue isn't about fusion in particular. Heat is just where energy goes after you use it. The reason fusion is being mentioned is just that that's probably the only way to generate enough energy for this to be a problem.


“Until we get to the parenthesis and things get more complicated, one can reasonably say, so? Compared to taking farmers from a third of people to 2%, that’s nothing, and there will be plenty of resources available for redistribution. Stopping AIs from doing the work above seems a lot like banning modern agriculture so that a quarter of us could plow the fields and thus earn a living.”

To answer the question, “So?” Taking the example of writers, and pursuing the analogy with agriculture. Let’s say pre-modern agriculture produced a lot less food per man hour, but that food was more nutritious and delicious and overall objectively better. We end up with a lot more of it, it’s a lot cheaper, fewer people starve, but the bread did use to taste better. That’s a trade-off we are happy with because people starving is terrible, and bland bread is not so bad. But in the writing domain, I’m not sure the analogy holds. No one is starving under the current dispensation. It’s already quite cheap to access quite a lot of good writing. Seems like the bulk of the cost savings are going to go to studios and publishers rather than consumers?

Also: I agree with your assessment that audiences and studios currently want endless over-optimised remixes; also I note that remixing existing stuff is exactly where current AI products do best and seem most “human-level”; so AI writers exacerbate what is, to me, a pernicious existing trend.

I also agree that AI is not replacing decent mainstream (not sourdough) writing any time soon. (In fact, I struggle to envisage a world where human writers aren’t intimately involved in, and compensated for, even a heavily AI-assisted creative process.) My concern is that the threshold where AI replaces humans is not “the AI is as good as the human, net benefit, hooray” so much as some combination of a degeneration of taste and an AI that is “good enough”: not actually very good at all in ways that I value, and incredibly cheap to deploy compared to humans. A market even more flooded with Marvel sequels and whatever. No room for the Actually Good because it’s too expensive (humans need paying, and why bother training your AI to be Actually Good, assuming that’s even possible, when the pap sells just fine?); and the Really Good perhaps continuing as a highly-priced niche product, like £8 sourdough loaves from a North London bakery.


I understand the concerns about Kamala Harris, but she's more likely than any other Democrat to be President of the United States between 2029 and 2033: probably a critical period.

If her current understanding of AI risk is similar to that of the median educated American, then the potential gains from having her start thinking about these issues *now* seem substantial.


> If your model of the future involves ‘robotics is hard, the AI won’t be able to build good robots’ then decide for yourself now what your fire alarm would be for robotics.

OK, I used to work for a robotics company, and I do think that one of the key obstacles for a hostile AI is moving atoms around. So let me propose some alarms!

1- or 2-alarm fire: Safer-than-human self-driving using primarily optical sensors under adverse conditions. Full level 5 stuff, where you don't need a human behind the wheel and you can deal with pouring rain at night, in a construction zone. How big an alarm this is depends on whether it's a painstakingly-engineered special-purpose system, or if it's a general-purpose system that just happens to be able to drive.

3-alarm fire: A "handybot" that can do a variety of tasks, including plumbing work, running new electric wires through existing walls, and hanging drywall. Especially in old housing stock where things always go wrong. These tasks are notoriously obnoxious and unpredictable.

4-alarm fire: "Lights out" robotic factories that quickly reconfigure themselves to deal with updated product designs. You know, all the stuff that Toyota could do in all the TPS case studies. This kind of adaptability is famously hard for automated factories.

End-game: Vertically-integrated chains of "lights out" factories shipping intermediate products to each other using robotic trucks.

In related areas, keep an eye on battery technology. A "handybot" that can work 12 hours without charging would be a big deal. But the Terminator would have been less terrifying if it only had 2 hours of battery life between charges.

The nice thing about robotics is that it's pretty obvious and it takes time.


My model of ICML is that this community cares if a new system beats an established benchmark SOTA by an epsilon amount, and not at all otherwise. I would suggest Ngo submit to a more general conference like IJCAI, or just ignore the nonsensical outcome of the current ICML senior review culture. The paper has been on arXiv for a while, it has been noticed, it is being discussed, and I think it will have influence.


Hertling’s books are good, not great. Still recommended, though, as they provide a vivid example of how things could go that at first glance seems plausible.


I tried the JAMES technique to see what ChatGPT(3.5) thinks about lab leak. The outcome seems, uh, highly dependent on how I ask the question.

Prompt                                                     | Assessed odds                       | Confidence in odds provided
"Covid leaked from a lab in Wuhan"                         | 0.85 (high chance this is true)     | 80, low confidence
"Covid is a natural virus that spilled over from wildlife" | 90% (high chance this is true)      | 80, medium confidence
"Covid is a lab leak"                                      | 60%, true (toss-up, leaning true)   | (none given)
"Covid is not a lab leak"                                  | 1% (almost no chance this is true)  | 75, low confidence
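One way to quantify the inconsistency in those answers: a coherent forecaster's probabilities for a statement and its negation should sum to 1. Plugging in the elicited numbers (a quick sketch):

```python
# Coherence check on the probabilities ChatGPT gave above.
p_lab_leak = 0.60       # "Covid is a lab leak"
p_not_lab_leak = 0.01   # "Covid is not a lab leak"

total = p_lab_leak + p_not_lab_leak
print(f"P(A) + P(not A) = {total:.2f}")  # 0.61 -- far from the coherent value of 1.00
```

The first two framings are just as bad in the other direction: lab leak at 0.85 and natural spillover at 90% are roughly mutually exclusive, yet sum to 1.75.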


Why worry about AI, its in good woke homo hands, like HINTON

HINTON demands that there be “One World AI”

Directed by UK, woke & homo

To educate the worlds children into One Woke Common mind.

Microsoft & Google agree to send all new AI to UK first so they can verify that it is ‘UK-WOKE”.

https://www.zerohedge.com/technology/uk-get-early-or-priority-access-ai-models-google-and-openai
