
Whew, this heroic-length post translates to 2 hours and 23 minutes of top-quality narration :D

Podcast episode for this post:

https://open.substack.com/pub/dwatvpodcast/p/ai-75-math-is-easier?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true


You bring up superforecasters and mention that forecasting is real, that being well calibrated is a thing (agree), and that track records are important (agree), but as you’ve discussed previously, the majority of well-calibrated superforecasters with good track records are deep within your Obvious Nonsense bracket on AI risk.

I think we both agree these people are too low for various reasons and ultimately not well calibrated on the question, but it does move the needle down for me, and I think it makes track records only weak evidence that forecasting on AI risk is well grounded.


Medal?


I find it tragically disappointing that the pro-life party is largely not being pro-life.


> Have you ever trusted a human without personally calibrating them yourself in every edge case carefully multiple times?

Not sure how seriously you intend the comparison, but it's worth being clear about this: humans have relevant affordances here that software (even LLM software) doesn't! They are much better at not getting stuck in loops, being compromised by attackers, or otherwise going way off the rails. If they get something wrong, they will usually try to make it up to you. Their ability to cause harm is limited (with some exceptions where we *do* make significant efforts to personally calibrate), and they are aligned by default.


Re: "I also agree that better visibility into and communication with, and education of, NatSec types is crucial. My model of such types is that they are by their nature and worldview de facto unable to understand that threats could take any form other than a (foreign) (human) enemy. That needs to be fixed, or else we need to be ready to override it somehow."

I honestly don't know that this is the case. I think unaligned AI can sort of fit into the existing mental niche for non-state actors; put another way, an AI is just another thing a state can help come into being that is dangerous and that it might not control.

(Yes, if you're a DC person, you're currently snarking at me, "oh and how well did that whole Global War On Terror go in its first few years, huh?" which is a fair critique, but irrelevant to the point I'm making about whether or not there's a mental niche that can be re-used.)

I do think that the more this becomes a US-China thing, the harder it becomes. DC people have Less Than Zero trust in Chinese ability to make and keep diplomatic commitments on Tech Stuff after decades of IP theft and cyber espionage, and large chunks of the DoD are convinced that China wants to fight a war in the Pacific as early as literally 3 years from now.


I don't think IMO ability predicting Fields Medals is much evidence that this will apply to AI, any more than chess ability would be. (IMO probably correlates better than chess does, because of various natural pipelines, and similar brain stuff in both cases, I'd bet.) Lots of tasks that are "easy" for computers are impressive in this way in humans. The space of IMO geometry problems, and of the techniques needed to solve them, is still relatively highly constrained, and feels more "chess-like" than "math-like" to me. Of course this is still good AI progress. (FWIW I've done the Putnam and a math undergrad, and dropped out of a PhD for a job; I never did the IMO.)


> On your local machine, you are likely limited to 8B.

Unless you have at least a mid-range MacBook Pro and are willing to accept 3- or 4-bit quantization, in which case you can run a quantized version of the 70B model. Historically the quantized versions of the bigger models have been much smarter than the smaller models, though I don't know if that's still the case for this generation.
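For a rough sanity check on why the 70B fits on a high-memory laptop, here is a back-of-the-envelope memory estimate (a minimal sketch; the ~20% overhead factor for KV cache and runtime is my own rough assumption, not a measured figure):

```python
def est_memory_gb(params_billions: float, bits: int, overhead: float = 1.2) -> float:
    """Crude rule of thumb: weights take params * bits/8 bytes,
    plus ~20% (assumed) for KV cache and runtime overhead."""
    return params_billions * (bits / 8) * overhead

for label, params, bits in [
    ("8B @ fp16", 8, 16),
    ("70B @ 4-bit", 70, 4),
    ("70B @ 3-bit", 70, 3),
]:
    print(f"{label}: ~{est_memory_gb(params, bits):.0f} GB")
```

By this estimate an fp16 8B model wants roughly 19 GB, while a 4-bit 70B wants roughly 42 GB and a 3-bit 70B roughly 32 GB, which is why the bigger model only fits on higher-memory machines.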


Went to see a movie for the first time in a while, in a new theater, and there were -multiple- commercials for LLMs. I don't know if it's just the change in theater or what, but I didn't realize we were at the advertising-to-the-masses phase already.

Not strictly AI, but something I could imagine being supercharged by AI: a Kitboga video came across my YouTube recommendations in which he built an entire website to trap scammers. He ends his usual scambait with a link to a site promising them crypto, then runs them endlessly through goofy captchas and automated phone trees that put them on hold for random amounts of time and eventually just hang up on them. In a side story, the system caught a scammer who was using a victim to call in; she had been getting scammed by this guy for going on six years, and they were able to help her get out of it.

Not gonna be interested in a friend AI until it can convincingly play video games and hold voice conversations over Discord.


> A similar phenomenon that has existed for a long time: Pandora stations, in my experience, reliably collapse in usefulness if you rate too many songs. You want to offer a little guidance, and then stop.

I ran into this with Spotify when I made a bunch of playlists for D&D, which then turned all of my Spotify-generated playlists into instrumental background music, rather than what I normally listen to outside of D&D.

It took me a while to notice it, but one of the options on a playlist in Spotify is “exclude from your taste profile.” Once I did that with most of my D&D playlists, my recommended songs went back to normal.

Two takeaways for me:

1) Spotify seems to weight “added to a playlist” significantly higher than “listened to multiple times,” so if you’re making playlists for specific things or occasions, it’s worth excluding them (toy sketch of that weighting effect below)

2) I want buttons like that in more apps
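To make takeaway 1 concrete, here is a toy sketch (purely illustrative; the weights and scoring scheme are invented assumptions, not Spotify's actual algorithm) of how weighting playlist-adds over plays can let a handful of themed playlists take over a taste profile, and how excluding them undoes it:

```python
from collections import Counter

# Hypothetical weights: one playlist-add outweighs several plays.
PLAYLIST_ADD_WEIGHT = 5.0
PLAY_WEIGHT = 1.0

def taste_profile(plays, playlist_adds, excluded=frozenset()):
    """Score genres from plays and playlist-adds, skipping excluded playlists."""
    scores = Counter()
    for genre, count in plays.items():
        scores[genre] += PLAY_WEIGHT * count
    for playlist, genre in playlist_adds:
        if playlist not in excluded:
            scores[genre] += PLAYLIST_ADD_WEIGHT
    return scores.most_common()

plays = {"indie rock": 40, "ambient": 5}    # normal listening history
adds = [("dnd-tavern", "ambient")] * 12     # tracks added to a D&D playlist

print(taste_profile(plays, adds))                           # ambient dominates
print(taste_profile(plays, adds, excluded={"dnd-tavern"}))  # back to normal
```

Under these made-up weights, twelve playlist-adds swamp forty actual listens, and flipping the exclusion restores the original ordering.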


Wonderful work, thank you.
