
Podcast episode for this post, multi voice ElevenLabs narrated:

https://askwhocastsai.substack.com/p/ai-69-nice-by-zvi-mowshowitz


Zvi...every time I click the links in your table of contents, it just goes to the top of the post. It's been doing that for a few weeks now.

Anyone else seeing this?

I'm running the Substack app on a new iPad Air... latest version of iPadOS. But my old iPad running iPadOS 16 did it too.


Works fine using Chrome on Windows.

author

Confirmed it works for Chrome on Windows. The App has had issues in the past.


Thanks...I sent a bug report to the Substack team.


I know "politics is the mindkiller" and all that, but let me tell you, having Trump be the dude in charge in the period of 2025-28 which has a non-zero chance of being extremely civilization-defining makes me absolutely terrified.


But Biden/Harris/Blinken make you feel just fine??


I don't think the NSA is actually that great at defensive cybersecurity. They have a lot of experience breaking *into* things, but not much experience securing the sort of complicated attack surfaces that you have with a heavily used consumer product with many features. And the Snowden incident didn't make them look all that great.

The best defensive cybersecurity teams, AFAICT, are at places like Google or Facebook, which are constantly attacked by bad actors of every kind and frequently thwart nation-state-level attacks, internal spies, all sorts of things. I'm biased because I worked at both of those places, not directly on the security teams, but on related teams that sometimes worked with them. You don't hear about the vast majority of these incidents externally; you only hear about some of the failures.

The other issue is that a board member really isn't going to be able to help much with practical cybersecurity. An NSA board member might help with high-level US government communication, but the roles that matter for security are your chief of security (ideally a high-status position within the company, like a Chief Security Officer, although the title is only a fraction of what matters) and the people that person hires.

So what I'd most want to see for OpenAI on the security front would be hiring someone very good for a CSO role.


The claim about 320-bit images seems wrong to me. I skimmed the paper, and per Section 4.2, the codebook is configured to N=4096, not 1024, so each token carries log2(4096) = 12 bits and 32 tokens encode 384 bits, not 320. Separately, the information about how decoding works is pretty sparse, but the relevant section mentions a sequence M of mask tokens appended to the 32 tokens. It's not specified where this mask comes from or how it is chosen, but common sense dictates it is probably included in the compressed data.
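For concreteness, here is the back-of-envelope arithmetic behind that correction (my own check, not from the paper):

```python
import math

# Bits per image = tokens * bits per token,
# where bits per token = log2(codebook size).
tokens = 32
for codebook_size in (1024, 4096):
    bits = tokens * math.log2(codebook_size)
    print(f"N={codebook_size}: {bits:.0f} bits")
# N=1024: 320 bits  (the figure being quoted)
# N=4096: 384 bits  (what Section 4.2's configuration implies)
```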

Regardless of these technical quibbles, I'm quite annoyed at how Eliezer is strawmanning the skeptics here. It is true and important that you can't "compress things infinitely". If you don't care about quality, it is *literally trivial* to make an encoder that compresses images to any target size (just use JPEG with the proper settings). What matters is the empirical performance of the compression over the range of images you're actually interested in compressing. Without that, the 320-bit figure is just a meaningless number thrown around to sound impressive. Is there such an empirical quality test anywhere in sight? No. Therefore, this is just vacuous AI hype.
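A minimal sketch of that point (my illustration, using Pillow, not anything from the thread): binary-search the JPEG quality setting until the output fits a byte budget. The quality floor of 1 means very small budgets are met only by accepting severe artifacts, which is exactly why a size figure is meaningless without a quality test.

```python
import io
from PIL import Image

def jpeg_under_budget(img: Image.Image, max_bytes: int) -> bytes | None:
    """Binary-search the JPEG quality setting to fit a byte budget."""
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=q)
        if buf.tell() <= max_bytes:
            best, lo = buf.getvalue(), q + 1  # fits: try higher quality
        else:
            hi = q - 1                        # too big: lower quality
    return best  # None if even quality=1 exceeds the budget

# Hypothetical usage:
# data = jpeg_under_budget(Image.open("photo.png"), 4096)
```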

Note: I'm not saying the research is bad. For all I know it might be really good! The idea of *not* working in geometric patches seems pretty sound, as there are longer-range regularities in quite a lot of images. Of course, the failure modes of such a compression algorithm are way less predictable. Also, the image decompressor is going to be pretty huge, both in memory footprint and processing power, compared to something like JPEG (which was literally designed to be efficiently implementable in hardware). So, if we strip away the AI hype, this is an algorithm that might plausibly perform much better in terms of compression on certain distributions of images, but is guaranteed to be much worse in terms of efficiency. Yes, that's an interesting new point on the Pareto frontier, but it's very far from the breathless claim that AI is blowing traditional algorithm development out of the water.


I also disagree with EY that such image compression would have been unimaginable to people 10 years ago. I bet most people with a PhD in computer science would've agreed it was possible if you had polled them in 2014; it wasn't that far-fetched an idea.


There is something very productive about anger. Arthur Mensch's lies enraged me so much that it powered me to write a decent article about them in ~20 minutes: https://rationalhippy.substack.com/p/confronting-deniers-ai-is-just-a, plus a post on Pause AI's LinkedIn: https://www.linkedin.com/posts/pauseai_aiethics-aigovernance-aiaccountability-activity-7207411226534891521-SxpH?utm_source=share&utm_medium=member_desktop


In the spirit of asking the (likely stupid) question that others may also be asking: why is Ilya making a mistake?

Is the concern that, even though they aren't making a product right now, eventually commercial pressures will win and (just like with OpenAI) the safety focus will be dropped in favour of making money?

Or is it simply that "Safe Superintelligence" is still "Superintelligence", i.e. they'll be working on capabilities in a way that's net harmful even if they're making an honest effort to be cautious?


It's not clear to me that what Perplexity is supposedly doing is actually a violation of the robots.txt standard, which was meant for "web crawlers". To me, a "crawler" is a program that finds many web pages by recursively following links, to build a search index or find training data or whatever. If you give Perplexity a URL and ask for a summary, and it goes and fetches the page so it can summarize it, that doesn't sound like "crawling". Definitions I've seen (e.g. Wikipedia) agree.

I skimmed through the robots.txt standard and didn't see anything that clearly answers this question. I suspect the authors weren't thinking about this use case.
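To make the mechanics concrete (my illustration, not from the comment): Python's stdlib parser treats robots.txt as nothing more than a per-user-agent allow/deny list for URLs. Nothing in that contract distinguishes recursive crawling from a single user-directed fetch; that distinction lives in convention, not in the file format.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # hypothetical site
rp.read()

# Answers "may this user-agent fetch this URL?" and nothing more.
print(rp.can_fetch("PerplexityBot", "https://www.example.com/some-article"))
```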

So one possibility is that Perplexity honestly doesn't think what they do violates the standard, whereas Wired obviously thinks it does.

I think there was one case where the Wired author didn't enter the URL, just the title of the article, and it still found the page, which seems maybe a little sketchier, but I can still see


to clarify:

I do not think "superintelligence will be NBD."

I think Aschenbrenner constantly equivocates between advanced "tool AI" and "true" superintelligence in the way you and I mean it.

True superintelligence would absolutely change everything, China would be the least of our worries, etc.

But Aschenbrenner doesn't see things that way. He's not an old-school MIRI-ite; he has the (IMO confused) idea that we're headed for a swarm of artificial geniuses who obediently do whatever their owners want.

The place where I'm saying I'm unconvinced that AI matters for defense is *the upcoming China/Taiwan/US war*. People who think there's gonna be a war expect it by 2030. Aschenbrenner's own projections do not actually leave room for AI meaningfully affecting how warfare works that soon, at least if you imagine realistic (rather than magic-underpants) time lags between "the SOTA base model is super good and cheap" and literally *every other part* of development and deployment for specific military purposes.

Also, most of the off-the-cuff examples he gives of stuff an AI could help an army with are (IMO) bad. Which doesn't at all mean AI can't help armies, but it does speak poorly of *his* judgment.
