23 Comments

The “Slow Boring” post about ChatGPT going to Harvard is by Maya Bodnick, not Matt directly (though probably edited by him), so the post should probably be updated to reference Bodnick instead of Matt’s name. I think it was updated a couple of hours after posting to fully credit her as the author, so that may have caused this error.

author

Yep, will do, regardless of the reason the error happened.

So what I'm hearing is that any American can take a free vacation to Brazil by picking up an extra laptop, smartphone, and iPad as personal items and selling them at a huge profit on arrival.

author

I don't see why not.

With how prices are here for anything imported... pretty much.

I don't get the conflict between AI Safety and AI Ethics here; if you cannot train an LLM not to be racist, what hope do you have of training an AI not to be genocidal? Real AGI is not here yet, so there isn't much for the AI Safety peeps to really do; they may as well do the practical jobs available.

Part interpersonal dislike due to preexisting political commitments, part academic/funding turf war. Fortunately, the gap is being bridged.

I believe that one way the AI Safety vs. AI Ethics antagonism could be summed up is as "AI Ethics is about trying to put the most convincing mask possible on the shoggoth."

Unless you've solved the underlying "there's a shoggoth" problem, this approach is not only a waste of resources but *actively antagonistic* to substantive AI Safety because it makes the deceptive veneer that much more convincing.

If the LLM response to any question touching on race is “provide an anodyne, inoffensive, politically correct response regardless of the content or the underlying data or information being inquired about,” you have potentially Goodharted yourself into making substantive AI safety *harder* rather than easier (although if the AI is explicit about its inability to answer the question, at least it's not actively subverting assessment).

Assume you were insuring people solely for skin cancer risk: rationally speaking, you would charge people with more melanin in their skin less than people with less melanin. If there's a regulation that says you can't do that because it's racist, the correct way to implement it is as a transparent and explicit cross-subsidy overlay (likely not implemented as an AI at all, but as an external deterministic function), *not* to subvert the accuracy of the underlying world-model by pretending the risk is the same, and *not* to train insuranceGPT to lie about what's going on. Either of those makes the task of substantive AI safety harder: an AI working with an inaccurate world-model is at risk of giving unpredictable garbage-in, garbage-out results, and *training* an AI to lie or give false information about the nature of what it's doing (although very much a risk of AGI/ASI in general, regardless of this specific example) is basically the polar opposite of how you want to assess potential risks when it comes to sub-AGI models.
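
A minimal sketch of what such an external deterministic overlay could look like; everything here (function names, the pooling factor, the numbers) is hypothetical and only illustrates the separation of concerns, not any real insurer's system:

```python
# Illustrative sketch: keep the underlying risk model honest, and apply any
# policy-mandated adjustment as a separate, auditable, deterministic step.
# All names and numbers here are hypothetical.

def estimate_annual_claim_cost(applicant: dict) -> float:
    """Underlying model: an honest estimate of expected annual claim cost.

    In practice this could be any statistical or ML model; the point is that
    it is never asked to pretend that unequal risks are equal.
    """
    base_cost = 120.0
    return base_cost * applicant.get("risk_multiplier", 1.0)


# Policy knob: 0.0 = fully risk-based pricing, 1.0 = fully community-rated.
POOLING_FACTOR = 1.0


def quoted_premium(applicant: dict, population_mean_cost: float) -> float:
    """Deterministic overlay: blend individual cost with the pooled mean.

    The cross-subsidy lives explicitly in this one function, rather than being
    hidden by training the model to report false risk estimates.
    """
    individual_cost = estimate_annual_claim_cost(applicant)
    return (1 - POOLING_FACTOR) * individual_cost + POOLING_FACTOR * population_mean_cost


if __name__ == "__main__":
    low_risk = {"risk_multiplier": 0.5}
    high_risk = {"risk_multiplier": 2.0}
    mean_cost = 150.0
    # With POOLING_FACTOR = 1.0 both applicants pay the same premium, but the
    # underlying risk estimates stay accurate and inspectable.
    print(quoted_premium(low_risk, mean_cost), quoted_premium(high_risk, mean_cost))
```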

Just here to note that there is a whole TV show about AI alignment from the creators of Westworld, called Person of Interest, and it is better than it should be, for many reasons.

Ooooh – thanks for the rec!

French speaker here to weigh in on the essential issue brought up in the last segment.

It's true that GPT kind of sounds like "J'ai pété", although the "é" sound in "G" is different from the "ai" sound, and when you hear it often and in context it stops being weird.

I don't see where they get that "Chat" is pronounced like the French word "chat".

Most people say the "Ch" sound the English way, "tch", and, more importantly, everyone says the final "t" (while for the animal the "t" is silent).

Yes, that is just not a true claim in any version of French I know, but maybe it is in Québécois?

This is the BCS "open letter", but I think it's just a few lines: https://www.bcs.org/sign-our-open-letter-on-the-future-of-ai/

author

Ah, thanks. (A day later is too late for most people, so I'll mention it next week rather than edit it in.)

The discussion on #4-5 gave me a realization: the main way the "reason step by step" trick operates to increase accuracy is just by *forcing the final answer not to be the first token*. Questions that expect the final answer as the first token will pattern-match to "off-the-cuff" reactions in the training data that don't take all available information into account.

Maybe this is too simplistic an understanding, but it seems quite intuitive and decreases my estimate of the magicalness of current LLMs. It also suggests that maybe results could be improved by implementing some kind of lookahead (i.e. into future output) on the context window?!
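
For concreteness, a minimal sketch of the two prompt styles being contrasted, using the OpenAI Python SDK; the model name and prompt wording are assumptions, and the only point is that the second style lets the model generate intermediate tokens before it commits to an answer:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "A bat and a ball cost $1.10 total. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Style 1: the very first tokens generated must already be the final answer.
direct = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": question + "\nAnswer with only the number."}],
)

# Style 2: "reason step by step" forces intermediate tokens before the answer,
# so the final answer can condition on the generated reasoning.
stepwise = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": question + "\nLet's think step by step, then give the final answer.",
    }],
)

print(direct.choices[0].message.content)
print(stepwise.choices[0].message.content)
```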

Minor typo: "osculate" -> "obsolete"?

author

Yep, confirming this was a pure typo.

What is "osculate" being used to mean here? That the forecast results touched the "real" result, maybe by accident? If it said something like "obsolete" instead I'd be much less confused. "That experience was that he did not get substantive engagement and there was a lot of ‘what if it’s all hype’ and even things like ‘won’t understand causality.’ And he points out that this was in summer 2022, so perhaps the results are osculate anyway."

'Kiss' obviously :)

(It was a typo.)

> I and most others I know are very happy to do a ‘why not both.’

I think you’re (uncharacteristically) being insufficiently cynical here.

It's true that the number of philanthropic dollars, and hours of activist effort, are (roughly) zero-sum, and therefore different philanthropic / activist causes trade off against each other. But by and large, nobody cares. (How many people have even heard of Cause Prioritization?)

It's equally true that the number of minutes of television news airtime, and number of bills of legislation, are (roughly) zero-sum, and therefore different causes trade off against each other. But by and large, again, nobody cares.

If people were actually thinking this way, you would see police reform activists downplaying the importance of obesity and climate change and voting rights etc., and vice-versa. Which you don’t.

I think that stuff is just a thin veneer of rationalization over what everyone ACTUALLY cares about, which is (…drumroll…) status prioritization! Which group of people are being respected? Which group of people are being listened to?

And like it or not, the people whose status is tied to the status of AI ethics (e.g. Timnit Gebru) versus the people whose status is tied to the status of AI x-risk (e.g. Eliezer Yudkowsky) are not the same people, and are by-and-large ideological opponents. I.e., these two groups of people strongly disagree with each other on probably most object-level political issues, even if there are areas of possible win-wins. Do the disagreements specifically concern AI x-risk? Wrong question! We’re talking about the halo effect and its opposite. The whole person / group is elevated or lowered in status. We're not picking and choosing amongst their beliefs.

By the same token, there are possible win-wins in Israel-vs-Palestine and abortion-vs-choice and gun-rights-vs-gun-control and Red-Sox-vs-Yankees too, but in practice, I don't know how to point to the win-wins and thereby magically make people stop thinking of these groups as being on opposing sides.

I’m not sure what to do about that. In reality, AI x-risk concerned people are ideologically diverse, and AI x-risk concerns can be presented in a left-coded way or a right-coded way or neutral, as one desires. Staying neutral w.r.t. hot-button political issues is appealing, but it seems that some people have a “with us or against us” tendency to assume the worst unless explicitly reassured. Alternatively, one could try to send libertarian-coded messaging to the libertarians and Gebru-coded messaging to Gebru etc. etc. But sometimes the real world intervenes (virality dynamics etc.) and redirects messaging in the worst possible way. ¯\_(ツ)_/¯

So it turns out that SAG story was probably just lies being told by the union reps: https://www.theverge.com/2023/7/13/23794224/sag-aftra-actors-strike-ai-image-rights

author

That is so weird if true, since it would be pure outright lying (by actors? why I never!) and then no one discussing it pointed it out at all, when it seems like a big deal. So strange.

I don't know. The idea that union reps would lie to make management look like assholes, and that the media would report it credulously, doesn't sound that weird to me. Especially when, as here, there is no actual proof either way.
