18 Comments

I'm wondering whether Google vs. Bing is the right frame for looking at search. I have shifted a *lot* of searches from Google to ChatGPT, and that sort of shift wouldn't show up in that graph.


I can grasp Nora's model, but as you noted, the evidence isn't there, or at least it isn't obvious. This seems like a hazy thing to bet humanity's future on!


And yes, the number of AI individuals who have taken on a profoundly genocidal view is terrifying; it is also something one would only know having been "on the inside" or generally having familiarity with the mindset, since it seems so incredibly nuts.

There is something immensely, procedurally toxic in this: that not only are people non-democratically asked to tolerate the existence of a new species which we do not want, but also to accept the destruction of all biology and value, because some people think that it doesn't matter.

There have been other religious death cults, but this is apocalyptic in a way that is a very real and close danger.

As for the Hansonian logic that if we "decline" for a while into a lower-tech world it is the equivalent of doom: it's a good example, I suppose, of how little value some people place on the simple joy and beauty of existence, of humans interacting and joy and love, of the stories we tell and the miracles of daily life.


I think Hanson just selfishly wants to keep his Dewar topped up with LN, or (we all) die trying.


Imo it's even more petty:

"Economics must remain the premier science, or we all die trying."

For him, it's obvious that a reversion to a 1700s lifestyle with humans is inferior to human extinction.


"Geoffrey Miller: I don't want to achieve immortality by being data-scraped into some plagiarism machine. I want to achieve immortality by not dying." Did he give a credit to Woody Allen?

author

In context it is kind of funny that he didn't; obviously we all picked up the homage right away.


> Instead, you will learn opportunistically and incidentally, as you go about your day and follow your curiosity and what you happen to need.

If I were near constantly conversing with an AI chatbot / personal assistant, I figure it would be “easy” to also work in spaced repetition learning to reinforce knowledge. Like flash-cards, but conversationally with a friendly Socratic tutor, and requiring less effort than sitting down and focusing on flash-cards.
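To make that concrete, here is a minimal sketch of the scheduling half, assuming a simplified SM-2-style update rule; the `Card`, `review`, and `due_cards` names are made up for illustration, not from any existing tutor product. The conversational layer would just pull the due facts and ask them as Socratic questions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Card:
    """One fact the tutor wants the learner to retain."""
    prompt: str
    ease: float = 2.5            # SM-2 ease factor
    interval_days: float = 1.0   # current gap between reviews
    due: datetime = field(default_factory=datetime.now)

def review(card: Card, quality: int) -> None:
    """Update a card after a review; quality runs 0 (blank) to 5 (perfect).

    Simplified SM-2: a failed recall resets the interval, a successful
    one multiplies it by the ease factor, and the ease factor drifts
    with answer quality (floored at 1.3, as in the original algorithm).
    """
    if quality < 3:
        card.interval_days = 1.0         # forgot: start over tomorrow
    else:
        card.interval_days *= card.ease  # remembered: space it out further
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    card.due = datetime.now() + timedelta(days=card.interval_days)

def due_cards(cards: list[Card]) -> list[Card]:
    """Facts the assistant should weave into today's conversation."""
    now = datetime.now()
    return [c for c in cards if c.due <= now]
```

Grading the learner's conversational answer into `quality` is exactly the kind of fuzzy judgment an LLM is suited for, which is what would make this feel lighter than sitting down with flash-cards.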


Now, in some ways, I’m suddenly wishing I were XX years younger and just reaching primary school age, but home schooled by a friendly AI Socratic tutor.


> At first things will change remarkably slowly despite AI coming along. I call this Mundane AI and Mundane Utility. The difference is that Altman is including in this things he calls AGI, it might be valuable to dig more into this distinction.

What are the potential cruxes here? Is it that integration into the economy is inherently hard, or will be slow because of coordination problems and communication overhead and the like? Because if you took, say, almost any existing corporation and dumped the equivalent of 1M or even 1K more human-level intelligences on it, I’d personally be surprised if that capacity could be used effectively right away. There’s just not enough shovel-ready work for semi-automation to make much of a difference IMO. So I can totally believe in a world where “AGI” might very well have a slow start with only modest impact, like what Altman is suggesting.

But how many standard deviations beyond competent professional human do you need before that dynamic breaks down completely? And how quickly is that attained? Idk, hard to say, my timelines are highly uncertain there and I can believe that insiders like Altman have a reasonably well informed opinion that it will take a bit of time before hitting the steep part of the exponential.


Many mathematicians (even good ones) have a small collection of tricks that they use repeatedly throughout their career. ("What would --- do?" is a good subalgorithm to run once in a while, especially if they just proved a new theorem, but also when stuck on your own stuff.) So, not surprising if the IMO stuff scaled quite a bit in that direction... i.e. Euler AI, Riemann AI, (or for that matter Lurie AI ... the more prolific, the better).


> And yes, I am quickly getting tired of doing this close reading over and over again every time anyone introduces a draft bill, dealing with the same kind of legal theoretical maximalism combined with assuming no one fixes the language.

I mean… given the laws we end up with, I’m not sure it is *that* ridiculous.


Proving correctness of programs might also turn out to just be a few tricks.

In the cases I'm currently interested in, you do have an exact specification for what the function is supposed to do, and formally proving correctness of the implementation often boils down to some combination of ...

A) replace that instance of that function with its specification

B) case split on that Boolean subexpression

C) that variable is a union of several types; case split on which type

D) algebraically simplify

Maybe it turns out to be easy; the toy sketch below shows the flavor.

Yes, of course, halting problem. But I don't need to solve the hard cases, just the programs where the guy who wrote it had an informal proof sketch in his head of why the algorithm was correct.
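For what it's worth, here is a minimal Lean 4 sketch of that loop, using a hypothetical `myMax` with a hand-written spec; nothing here is from a real codebase, it just shows moves (A), (B), and (D) in miniature:

```lean
-- Hypothetical function plus spec, to illustrate the tactic loop above.
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- Spec clause: the result is at least the first input.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax   -- (A) replace the function instance with its definition
  split          -- (B) case split on the Boolean subexpression `a ≤ b`
  · omega        -- (D) arithmetic closes the `a ≤ b` branch
  · omega        -- (D) goal is `a ≤ a` in the `¬ a ≤ b` branch
```

Move (C) would be the same flavor: `cases` on the sum/inductive type before simplifying.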


"If you talk about AI and not about housing and permitting and energy and free markets, then you are who you are."

Are you sure about this?

I'm totally in favor of repealing the Jones Act, but it's not obvious to me that people who put 100% of their advocacy chips on *not doing the thing that destroys humanity* are being hypocritical or counterproductive.

author

I meant this in the context of someone who was in favor of accelerating or not regulating AI. I agree that the flip side is totally reasonable.


Thanks for having dug some more into Nora Belrose's claims.

Some words seem wrong grammatically in "looking at how humans trying to get elected or help someone get elected actually behave."

"Emphases" → "Emphasizes"

"even if think the future" missing "you"


> Reminder that you can take essentially any MIT course online for free. Now that we all have access to LLMs, it seems far more realistic for more of us to be able to keep up with and sustain such an enterprise. I am too busy, but tempted.

Is this because LLMs can make it easier to do a course, or because using them judiciously in the rest of life should free up time for such things? If it's helping with the course itself, I'd be grateful if you could explain how! For me the main bottleneck in learning feels like the time spent reading and understanding... but maybe it would be possible to 80/20 it by just going straight for the key ideas more aggressively.


https://old.reddit.com/r/math/comments/19fg9rx/some_perspective_on_alphageometry/

This post claims that the key computer-assisted proof technique used by AlphaGeometry, without the AI component, solves about 80% of the IMO problems AlphaGeometry solves. So it is less impressive than it would seem at first glance. Here Terry Tao speculates about another possible use case: https://mathoverflow.net/questions/463937
