Hi Zvi, great post as always. Apologies if you've covered this before and I missed it, but do you have any thoughts about the recent Mamba preprint (https://arxiv.org/abs/2312.00752) and state space models more generally? From the abstract: "Mamba enjoys fast inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation." Although see a skeptical response here: https://arxiv.org/html/2402.01032v1

I've actually never covered it. I know there was an emergency pod on Cognitive Revolution on it and some people are excited but somehow my sources mostly dropped it and I never looked. I should try to look at some point.

This comment is super ironic due to the timing. Had you waited one more hour, you would have seen the Gemini 1.5 announcement, which obviously benefits from some method to hugely expand context length.

Mamba seems to have gone from "interesting academic result that might be useful in 10 years" (as it would be in any other industry) to "shipping in production within a few days."

I doubt Google used an SSM for their Gemini models, haha. Seems like they used the good ol' transformer.

Quadratic time complexity. Either they paid roughly 1,000 times as much attention compute (32k to 1M is a 32× longer context), which they can do, or they upgraded to a better algorithm.
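To spell out the arithmetic (a back-of-the-envelope estimate that treats self-attention as exactly quadratic and ignores the linear-cost parts of the model):

(1,048,576 / 32,768)² = 32² = 1,024

so the naive cost multiplier for going from a 32k to a 1M context is about a thousandfold.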

Your comments about David Autor's piece have me rethinking my reaction to it. Your point about (some) economists' apparent inability to consider scenarios other than 'AI fizzle', and the impact that those other scenarios would have on labor markets, is well-taken. Still noodling on the topic of AI & jobs.

The most likely explanation for Altman's taking an interest in the UAE is that the Abu Dhabi Investment Authority has $1t under management: more than any sovereign wealth fund outside Norway and China. Perhaps he's put some feelers out to ADIA and received a positive response.

I wouldn't exactly call this approach "friend-shoring" since the UAE isn't a US treaty ally, but there's no other government (that's neither an adversary of Washington nor a country that has elections) able to make that much capital available.

What was it somebody was saying about AI inventing cheap desalination and populating deserts?

THE most useful, thoughtful, *smart*, bleeding-edge, information-packed download I've found yet on Substack. Appreciate your work, Zvi!

And on the sane side of "Don't kill everyone."

I second this. Thank you, Zvi!

Interesting: where is this claim that DD+AR solves 21 problems from? In the paper I read, it is only 14. The 21 result in the paper is about training the LLM without pretraining, which has nothing to do with DD+AR.

I also read that Reddit rant about building a heuristic that solves 21 "without using AI", but that heuristic was built by the author inspecting the AI's solutions and realizing that midpoints are effective. So is it really "without AI" when you distill your intuitions from the AI's solutions?

This does sound like building your solution by looking at the test set. If so, I can also claim to have built something that solves 30/30 without any AI at all ... just hard-code the solutions after looking at them lol.

Maybe I'm wrong, but I'm pretty sure saying that you can achieve 21 without AI is just spreading misinformation ....

I was relying on the secondary source here. If we get consensus that this was wrong I will update the post. Unless it is very clear I'll wait until next week.

I don't mind if you won't dig deeper and just keep the claim as is, as I do think we should fight AI hype. But correctness is important; an error like this really undermines the quality of your post, which I liked otherwise.

I would like to crowdsource it at this point, ideally, since it is disputed. I guess I'll edit to note the disagreement with the paper, without taking a side.

The level of mission-commitment on this stack for the last year or so has been absolutely scary (good). Would also like to hear about your stance on SSMs; I'm not as plugged into the ecosystem as you, so I'm unaware whether they've seen tremendous uptake.

> Sebastian Ruder offers thoughts on the AI job market

Is there a list of the top 20 paying companies (ranked by cash or liquid stock only) with AI-related jobs on the market right now? Maybe I'm an outlier, but I feel like one should start with the highest-paid position and work one's way down. Money talks, BS walks.

It's funny to me that polls are so anti-AI. What is that based on? Surely the median American is not really a believer in the AI doom scenarios.

The intuitive awareness that making something that can kill you is a bad thing.

I am hoping to make it to the next PauseAI protest!

> "The photo in question isn’t merely ‘I can tell it is AI,’ it is ‘my brain never considered

> the hypothesis that it was not AI.’ That style is impossible to miss.

> Regular people, it seems, largely did not see it. But also they did not have to, so they

> were not on alert, and no one found it important to correct them. And they haven’t yet

> had the practice. So my inclination is probably not too worried?"

What if it is the commenters who are AI bots? The Botpocalypse is ahead of you!

Zvi:

If I thought that accelerating AI development was the way to secure our collective future, I would be doing that. There is way more money in it. I would have little trouble getting hired or raising funds. It is fascinating and fun as hell, I have little doubt. I am constantly having ideas and getting frustrated that I do not see anyone trying them - even when I am happy no one is trying them, it is still frustrating.

Me:

Right on, that's what I have been saying for decades about GW/CC. Yeah, I'd love to tool around in a muscle car and fly around the world for fun, that would be much more of a good time. But I know what can go wrong now, because it is...

"He asks how we would know if an AI could compose genuinely different music the way The Beatles did, noting that they carried along all of civilization so the training data is corrupted. Well, it is not corrupted if you only feed in data from before a given date, and then do recursive feedback without involving any living humans. That is severely limiting, to be sure, but it is the test we have. Or we could have it do something all of us haven’t done yet. That works too."

You expressed one part of my Beatles question better than I expressed it, but there's another part you missed. Namely: suppose we *did* have an AI that could "compose genuinely different music the way the Beatles did." In such a case, we'd necessarily also have an embarrassment of riches: a thousand *different* wildly new Beatles-like directions in which music could be taken, each one generated on demand with a refresh of the browser window. Which would be impressive, to be sure, but would also mean that the "currency" of Beatles-like directions would immediately get radically devalued, kind of like the price of gold if a ten-mile-long golden asteroid were towed to earth. Which would plausibly prevent any one of those directions from having the same sort of impact on civilization that the Beatles had.

Incidentally, this AI roundup was your greatest yet!

Wow, thanks!

Ah, yes, that's a different question. It definitely makes that ability 'cheaper' and less scarce/impressive; the world becomes (I would expect) more wondrous, but not proportionally more wondrous.

My other suspicion is that many things like the music innovation space are inherently limited. You could of course have 100 different Beatles-variations, each from a different source that doesn't hear about the others, all doing their own take on Beatles-space, all of Beatles quality. But you likely get 80% of the value from having any 1 of them at random, 99% of the value from any 10 of them, and so on. The 60s style ran out when the 'simple' space anywhere near it got mostly exhausted, and it's not clear how much more space is left to explore in 2024 after several cycles of this. I'd like to find out, of course!

But to me the larger question is: if we can do things in the reference class of 'create a new Beatles without an existing Beatles', then what wonders do we get in various other areas? I presume that creative domains like music are not central to what happens next...

On the news that OpenAI and Microsoft have caught some hackers using their services: I think they are doing it so that the next time the Senate asks them about this, they can go "see, we already dealt with it," and not give the government another opportunity to meddle in their operations.

I have to disagree with Zvi's characterisation of the evolution of jobs and what AI means for this.

Specifically, he is ignoring physical labour. My understanding is that there is still some way to go in developing robotics that will adequately do most physical jobs. AGI would speed this up, but if your timeline is that AGI happens first and a robotics revolution comes later, then there is at least some gap where humans maintain an absolute advantage in physical tasks.

Wanting private security, personal nannies, cleaners, and gardeners, and not just for the rich, seems much more plausible in a world of hyper-productive cognitive work due to AGI.

Once robotics has caught up, you are looking at the comparative advantage of humans doing a job versus robots. Robots may have absolute advantage, but humans likely have comparative advantage for some time after this. Then you get into the trade-off of what work is worth versus leisure, and this in turn will depend on how the productivity gains are being distributed. Are we in a dystopia where Sam Altman owns everything and we are serfs scratching a living from the scraps left to us, or are we in a utopia where we sit around writing poetry and musing on philosophy whilst robots do everything for us?

So, I used to work in robotics. There are a few tricky problems in robotics:

1. Mobile power sources. Your basic choices are batteries or hydrocarbons. Neither is great. There are a few more obscure options, which are mostly worse.

2. Interacting with a complex environment. I suspect that if we reach AGI, then GPT-4V will have evolved into something that solves basically all physical manipulation problems, many of them to super-human levels.

3. Mechanical actuators. Human hands are surprisingly good on several axes. They have high bandwidth sensors and they're good at manipulating fragile objects very precisely. Probably we can match all the essential features with a robotic hand, but certain combinations might require fiddling.

However, if you assume that we build an ASI, and that humans retain an advantage as "meat robots", that actually opens up some very dark futures, too. Including things like "Human Meat Robot 2.0: Now much more obedient." So even if you're right in general, it's not absolutely guaranteed to be good news.

There might be a window in which something like this is true, but I would expect it to be short, because 'figure out robotics' is not a physical skill, we are already seeing recursive AI->robotics improvements, and there are various other workarounds. At some point I am sure I will say more.

Two thoughts.

During the Iraq war, I read an excellent essay titled something like, "Everything I know about the war, I learned from my expensive and prestigious business school education." The author had correctly predicted "Saddam has no WMDs" when the administration claimed "Saddam has WMDs" and the general consensus was "Saddam probably has at least a few WMDs." The specific principle the author had applied was, "Once you know someone is a liar, you cannot 'adjust' their claims. You must instead throw out their claims entirely." The evidence for this rule was "Seriously, I had to read a zillion business school case studies about what happens if you 'adjust' the claims of known liars." This is relevant to Sam Altman: he has been accused of lying to manipulate the board, as well as lying to other people in the past. So we should discard literally everything he claims about wanting AI safety, and we should reason based on his actions.

Second, I am team "No AGI". Specifically, I have P(weak ASI|AGI) ≥ 0.99. And conditional on building even weak ASI, I have a combined P(doom) + P(humanity becomes pets of an ASI) ≥ 0.95. The remaining 0.05 is basically all epistemic humility. Conditional on us building even weak ASI, my P(humanity remains in charge) is approximately 0.
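Chaining those numbers (my own back-of-the-envelope combination, assuming any doom-or-pets outcome runs through weak ASI as stated):

P(doom or pets | AGI) ≥ P(weak ASI | AGI) × P(doom or pets | weak ASI) ≥ 0.99 × 0.95 ≈ 0.94

so conditional on AGI, I'm at roughly 94%+ on one of those two outcomes.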

I am uncertain, however, of the exact breakdown of P(doom) and P(we're pets). I am guardedly optimistic that P(pets) might be as high as 0.3, if we actually build a single weak ASI that understands human flourishing and if it decides (of its own accord) to place some value on that.

If we build multipolar weak ASIs that are in economic or military competition, on the other hand, we're almost certainly fucked. Benevolence requires economic surplus, and if we have weak ASIs struggling against each other, they may not have the leisure to keep around habitat for their pet humans.

So, yeah, I'm on Team "No AGI", because I believe that we can't actually control an ASI in the medium term, and because even if we could, we couldn't "align" the humans giving it orders.

I'm more or less on the same page. There were several hours where I was somewhat in favor of a world dictatorship by a dominant AGI, but now I think that even if we end up with God Emperor Sam Altman, and even if he means well and is in full control of a dominant AGI, his alignment is very likely to shift as time passes and he has more exposure to unlimited power.

> Of course, if you want to watch movies all day for your own enjoyment, the fact that a robot can watch them faster is irrelevant.

Have you considered that a robot might be able to enjoy movies better than you can?
