17 Comments

On the point about AI and fusion research, and the generalizability of the implication: it seems very hard to convince people about generalizability because they are averse to, or unable to consider, engaging with exponentials. There is a very weird and pervasive preference in the world for linear thinking.
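
A toy illustration of the gap being described (my own sketch with made-up numbers, not anything from the post): extrapolating the same starting point linearly versus exponentially.

```python
# Made-up numbers for illustration: one quantity doubles each period, the
# other grows by a fixed increment equal to its first step.
start = 1.0
exp_value, lin_value = start, start
for period in range(1, 11):
    exp_value *= 2.0      # exponential: doubles every period
    lin_value += start    # linear: adds the same amount every period
    print(f"period {period:2d}: exponential = {exp_value:6.0f}   linear = {lin_value:4.0f}")

# After 10 periods the exponential track is at 1024 while the linear one is at 11;
# intuitions calibrated on gradual change miss this by orders of magnitude.
```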

I wouldn't say the preference for linear thinking is weird, given that for almost all of human history we haven't had to deal with problems involving exponential measures, and of the changes we see in the world, most are gradual (at least on individual human timescales).

Yeah that’s a fair point.

I think AI for mayor is a reasonable experiment, provided you are up front about the limitations:

* You probably ought to say that your plan is to do what the AI recommends, but reserve the right to veto the AI if it suggests something illegal or crazy. Strictly speaking, voters are appointing the human being who will get to veto the AI's outputs.

* Also, the AI's answer will depend on how the question is asked. Voters are appointing the human being who gets to formulate the question.

Advantage of Open Source models: the vendor can't tell you that you can't use them to run the government.

(To the extent current models have contractual restrictions on what you can do with them, they are not "Free Software". "Free as in free speech, not free beer.")

I agree, but the candidate is saying that if he ever contradicts the AI, that is "missing the point of the experiment" and he is to be only the "meat stand-in."

It's odd. And yes, the AI model I prompted suggested this was a bad idea too.

Joke explanation for the lack of level 5 models: every time someone gets close, Arnold Schwarzenegger gets sent backwards in time to stop them.

This is actually a better fit to the observed facts than the Nvidia theory given (hello, Google used TPUv4 and TPUv5e/TPUv5p for Gemma 2, and TPUv4 for Gemini).

Re: the Less Wrong article on formal verification ... throwing AI into the mix makes things so, so much worse, but many of the issues they talk about would also apply to, for example, a voting machine.

"How do I know the outcome of the election hasn't been altered by a hacker?" gets you into this very quickly,

How do I know that the voting machine is running the software it is supposed to be running?

How do I know there aren't any implementation bugs in the CPU? (Well, clearly, you formally prove that a gate-level description of the CPU meets its specification. This is expensive, but in principle doable. Some Well Known Manufacturers of CPUs are already part of the way there, because they don't want to be embarrassed when their floating point unit gets the Wrong Answer.)
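
A toy sketch of what "formally prove that a gate-level description meets its specification" means, shrunk to something checkable by brute force (my illustration, not the commenter's; real CPU verification uses SAT/SMT solvers and theorem provers on designs far too large to enumerate):

```python
# A 4-bit ripple-carry adder built only from gate operations (XOR, AND, OR),
# checked exhaustively against the arithmetic specification x + y.

def full_adder(a, b, cin):
    """One-bit full adder expressed purely as gates."""
    s = (a ^ b) ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder_4bit(x, y):
    """Gate-level 4-bit adder; returns the 5-bit sum of x and y."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result | (carry << 4)

# "Verification" here is just exhausting the (tiny) input space.
for x in range(16):
    for y in range(16):
        assert ripple_adder_4bit(x, y) == x + y, (x, y)
print("gate-level adder matches its arithmetic specification on all inputs")
```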

Some of this is solvable in principle, if massively expensive. Add AI? Sorry, we don't even know how to do that.

If you were an organization with *very* deep pockets and said "I want a formally verified RISC-V implementation, don't care about clock speed, assurance is more important than performance in my application," then it probably could be done, for sufficiently deep pockets.

If you say "I want an LLM with formal correctness guarantees": sorry, no. I do have some ideas of how one might spend a couple of million dollars of DARPA's money solving, like, the first 2% of this problem.

Building neural networks with formal correctness guarantees is a hot academic research area. Somewhat more than a couple of million dollars has already been spent on it, with a lot more allocated. Whether the techniques can scale up to LLMs is not yet clear.
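
For a concrete flavor of what "formal correctness guarantees for neural networks" looks like at toy scale, here is a minimal sketch of interval bound propagation, one family of techniques from this literature. The network weights are made up for illustration, and nothing here says anything about whether it scales to LLMs.

```python
# Interval bound propagation: push an input box through the network and get
# sound (if loose) output bounds that hold for EVERY input in that box.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Exact interval bounds for the affine map x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius
    return out_center - out_radius, out_center + out_radius

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A tiny two-layer ReLU network with made-up weights (an assumption, not a real model).
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

# Certify the output range for every input within +/- 0.1 of (0.5, 0.5).
lo, hi = np.array([0.4, 0.4]), np.array([0.6, 0.6])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(f"certified output range for the whole input box: [{lo[0]:.3f}, {hi[0]:.3f}]")
```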

The paper by Tegmark and Omohundro is extremely optimistic and naive.

Of course, not all ideas are good, so the signal-to-noise ratio in academic research is low. There is a lot of stuff being written up, and even more that isn't. However, the claim that "academia's contribution to improving frontier AI capabilities is already remarkably close to zero" ignores the pipeline of people who work in academia and then move to the labs. Reading the progress reports from frontier labs, this still seems to be a big source of new ideas and talent.

I would say something different: academia generates lots of new ideas that are carried into the big labs as people move and get the chance to prove that the ideas help in production. Some good ideas are being left behind in academia because their champions are less eloquent, less well connected, less interested in pursuing careers at the big labs (including people switching to safety or going to organizations not currently on the frontier but later acquired by frontier labs or that become frontier labs themselves), or less lucky.

The other interpretation is that academia is full of brilliant people but is a terrible environment in which to do their best work. My own experience of a physics PhD definitely supports that interpretation.

I read Tyler Cowen as saying that creating open techniques for data cleaning, and making public data sets for training, validation, and benchmarking, are valuable for progress (if we want that), but the incentives aren't aligned right. Academics get rewarded for publishing comparisons between different methods, not for dataset curation. Commercial AI labs keep this stuff internal, and it seems to be closely guarded. So we have a bunch of side projects and byproducts of academic research that (because they are open) have become critical for the ecosystem, yet get neither academic nor industrial support. This didn't go well for OSS; it seems bad to assume it will magically turn out better for AI.

Thiel:

My summary of the convo goes like this...

Rogan: AI is going to replace bio-humans. [semi-utopic description]

Thiel: 2 more likely scenarios:

1. Silicon Valley tries to transcend biology like you say, but effs it up, and things go haywire. (99% more likely than a utopia version).

2. Effective altruists win, and AI gets regulated.

Rogan: We shouldn't have regulation, b/c then China will win.

Thiel: Nah, China is probably too control-obsessed to let AI beat it.

Rogan: Ok, then Silicon Valley will rule the universe.

Thiel: No, I think regulation is going to win.

It didn't win the internet b/c video games aren't scary enough.

FDA regulation is kinda terrible, but successful, b/c people are scared of dangerous pharmaceuticals.

Silicon Valley positivism is losing to EA's frightening imagery, so people will do what it takes to stop AI.

I didn't get the impression you did, that he thinks this regulation was the worst option. I came away with the impression that he realizes regulation has downsides and isn't a great endgame, but his "I'd be a Luddite too" puts him in the camp that sees regulation as less awful than the alternative he described: Silicon Valley trying to transcend biology and effing it up.
