23 Comments

The “Slow Boring” post about ChatGPT going to Harvard is by Maya Bodnick, not Matt directly (though probably edited by him), so this post should probably be updated to credit Maya Bodnick rather than Matt. I think the Slow Boring piece was only updated a couple of hours after publication to fully credit her as the author, which may be what caused the error.

So what I'm hearing is that any American can take a free vacation to Brazil by picking up an extra laptop, smartphone, and iPad as personal items and selling them at a huge profit on arrival.

I don't get the conflict between AI Safety and AI Ethics here: if you cannot train an LLM not to be racist, what hope do you have of training an AI not to be genocidal? Real AGI isn't here yet, so there isn't much for the AI Safety peeps to really do; they may as well do the practical jobs available.

Just here to note that there is a whole TV show about AI alignment from one of the creators of Westworld, called Person of Interest, and it is better than it should be, for many reasons.

French speaker here to weigh in on the essential issue brought up in the last segment.

It's true that "GPT" sounds a bit like "J'ai pété" ("I farted"), although the "é" sound in "G" is different from the "ai" sound, and when you hear it often and in context it stops being weird.

I don't see where they get the idea that "Chat" is pronounced like the French word "chat".

Most people pronounce the "Ch" the English way, as "tch", and more importantly everyone pronounces the final "t" (whereas for the animal the "t" is silent).

This is the BCS "open letter", but I think it's just a few lines: https://www.bcs.org/sign-our-open-letter-on-the-future-of-ai/

The discussion on #4-5 gave me a realization: the main way the "reason step by step" trick operates to increase accuracy is just by *forcing the final answer not to be the first token*. Questions that expect the final answer as the first token will pattern-match to "off-the-cuff" reactions in the training data that don't take all available information into account.

Maybe this is too simplistic an understanding, but it seems quite intuitive and decreases my estimate of the magicalness of current LLMs. It also suggests that maybe results could be improved by implementing some kind of lookahead (i.e. into future output) on the context window?!
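To make that concrete, here is a minimal sketch of the two framings, assuming the pre-1.0 `openai` Python client, the `gpt-3.5-turbo` model, and a made-up example question (the `ask` helper is mine, not anything from the post); the only point is where the answer token lands relative to any reasoning, not this particular client or wording.

```python
# Minimal sketch, assuming the pre-1.0 `openai` Python client with an API key
# available in the OPENAI_API_KEY environment variable. The question and the
# `ask` helper are illustrative, not from the post.
import openai

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

def ask(instruction: str) -> str:
    """Send the question plus an instruction and return the model's reply."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"{QUESTION}\n\n{instruction}"}],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"]

# Framing 1: the answer must be (almost) the first token generated, so the
# model commits to a number before any working has appeared in its own output.
print(ask("Reply with the dollar amount only, nothing else."))

# Framing 2: reasoning tokens come first, so the final answer is conditioned
# on those intermediate steps instead of being an off-the-cuff pattern match.
print(ask("Let's think step by step, then give the final answer on the last line."))
```

In my experience the first framing tends to blurt the classic "$0.10" mistake, while the spelled-out version usually lands on "$0.05" — which is exactly the "don't make the answer the first token" effect described above.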

Minor typo: "osculate" -> "obsolete"?

What is "osculate" being used to mean here? That the forecast results touched the "real" result, maybe by accident? If it said something like "obsolete" instead I'd be much less confused. The sentence in question:

> "That experience was that he did not get substantive engagement and there was a lot of ‘what if it’s all hype’ and even things like ‘won’t understand causality.’ And he points out that this was in summer 2022, so perhaps the results are osculate anyway."

> I and most others I know are very happy to do a ‘why not both.’

I think you’re (uncharacteristically) being insufficiently cynical here.

It's true that philanthropic dollars and hours of activist effort are (roughly) zero-sum, and therefore different philanthropic / activist causes trade off against each other. But by and large, nobody cares. (How many people have even heard of Cause Prioritization?)

It's equally true that minutes of television news airtime and bills of legislation are (roughly) zero-sum, and therefore different causes trade off against each other. But by and large, again, nobody cares.

If people were actually thinking this way, you would see police reform activists downplaying the importance of obesity and climate change and voting rights etc., and vice-versa. Which you don’t.

I think that stuff is just a thin veneer of rationalization over what everyone ACTUALLY cares about, which is (…drumroll…) status prioritization! Which group of people are being respected? Which group of people are being listened to?

And like it or not, the people whose status is tied to the status of AI ethics (e.g. Timnit Gebru) versus the people whose status is tied to the status of AI x-risk (e.g. Eliezer Yudkowsky) are not the same people, and are by-and-large ideological opponents. I.e., these two groups of people strongly disagree with each other on probably most object-level political issues, even if there are areas of possible win-wins. Do the disagreements specifically concern AI x-risk? Wrong question! We’re talking about the halo effect and its opposite. The whole person / group is elevated or lowered in status. We're not picking and choosing amongst their beliefs.

By the same token, there are possible win-wins in Israel-vs-Palestine and pro-life-vs-pro-choice and gun-rights-vs-gun-control and Red-Sox-vs-Yankees too, but in practice, I don't know how to point to the win-wins in a way that magically makes people stop thinking of these groups as being on opposing sides.

I’m not sure what to do about that. In reality, people concerned about AI x-risk are ideologically diverse, and AI x-risk concerns can be presented in a left-coded way or a right-coded way or neutrally, as one desires. Staying neutral w.r.t. hot-button political issues is appealing, but it seems that some people have a “with us or against us” tendency to assume the worst unless explicitly reassured. Alternatively, one could try to send libertarian-coded messaging to the libertarians and Gebru-coded messaging to Gebru etc. etc. But sometimes the real world intervenes (virality dynamics etc.) and redirects messaging in the worst possible way. ¯\_(ツ)_/¯

So it turns out that the SAG story was probably just lies told by the union reps: https://www.theverge.com/2023/7/13/23794224/sag-aftra-actors-strike-ai-image-rights
