58 Comments

"As I discuss, we may never know to what extent was what Google did accidental versus intentional, informed versus ignorant, dysfunction versus design."

When Google specifically hires gobs of people for "responsible AI", spells out the kinds of biases it wishes to emplace, and those biases are in fact present, quibbling about the precise percentage of responsibility is undignified.


I think that a very underpriced risk for Google re its colossal AI fuck up is a highly-motivated and -politicized Department of Justice under a Trump administration setting its sights on Google. Where there's smoke there's fire, as they say, and Trump would like nothing more than to score points against Silicon Valley and its putrid racist politics.

This observation, by the way, does not constitute an endorsement by me of a politicized Department of Justice targeting those companies whose political priorities differ from mine.

To understand the thrust of my argument, consider Megan McArdle's recent column on this controversy: https://archive.is/frbKH . There is enough there to spur a conservative DoJ lawyer looking to make his career.

The larger context here is that Silicon Valley, in general, has a profoundly stupid and naive understanding of how DC works and the risks inherent in having motivated DC operatives focus their eyes on you.


I am beginning to suspect Trump-like “dark marketing” here. Google is signaling, and while it may not have engineered these results precisely to provoke a viral scandal, it may have realized it could turn these particular lemons into lemonade. That prompt-manipulation reveal leaked really quickly; is Google really that clumsy about hiding such things? Maybe, but it seems odd for a company that is extremely sophisticated at keeping stuff in a black box. They allowed this stuff to continue longer than I would otherwise guess, with only token measures that don’t resemble genuine corporate crisis management and panic mode at all.

They are getting a lot of free press to communicate, to any internal troublemakers or external influential progressives who might be tempted to target them for other reasons, that they are the most woke tech company by far and that all the others aren’t even remotely in the same league. “Go after someone else!” From this perspective, the more absurd and consistently obnoxious the results, the better. The more outrage they provoke from members of the bad team, and the longer they provoke it, the more it becomes a costly signal and thus even more likely to be perceived as credible.

Yes, this is low-probability, triple-bank-shot speculation, but it’s not really “4d-chess” level stuff either. “Intentionally provoking” dark marketing has become an easy and common tactic over the past decade, and after all, Google is in the business of knowing how to most efficiently accomplish marketing goals.


In all these discussions, a rarely raised fact is that the typical Black household has one-ninth the wealth of the typical White household (https://www.pewresearch.org/race-ethnicity/2023/12/04/wealth-gaps-across-racial-and-ethnic-groups/#:~:text=In%202021%2C%20the%20typical%20White,the%20onset%20of%20the%20pandemic.). This is an effect of Jim Crow and other racist laws of the past. Given that economic position in the US is heavily inherited, and that living in a poor neighborhood means a poor education, it is not going to change any time soon, regardless of how diverse the images AIs generate are :( It is easier to wage culture wars than to lift people out of poverty.


It’s annoying that for so much of AI news, we never actually know the concrete intentions of the builders. It seemed like a quarter of your post was trying to list out what the intentions might be, and whether or not that speculation was plausible.

I’d add “laziness” as an explanation. You have a Jira ticket to ship an image generator, your PM is on your ass to make it pass the test cases random other people thought of. You don’t want to work over the weekend, so you come up with an easy way to get Gemini to pass test cases by attaching a crazy system prompt.

Plausible? Maybe! Maybe not.
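The lazy-fix scenario above might look something like this in code. To be clear, everything here (the suffix text, the helper name, the test cases) is invented for illustration; it is not anything Google is known to have actually shipped:

```python
# Invented sketch of the "lazy fix" scenario: bolt one blanket
# instruction onto every request instead of handling each failing
# test case thoughtfully. All names and strings are hypothetical.

QUICK_FIX_SUFFIX = " Ensure depicted people span a range of ethnicities and genders."

def patch_prompt(user_prompt: str) -> str:
    # The quick hack: append the same crisis-driven instruction
    # to every prompt and call it a weekend.
    return user_prompt + QUICK_FIX_SUFFIX

# The test cases the PM handed over (also hypothetical):
for prompt in ["draw a doctor", "draw a CEO", "draw a firefighter"]:
    assert "ethnicities" in patch_prompt(prompt)
```

The point of the sketch: the suite goes green, the weekend is saved, and the blanket instruction now fires on every request, including the ones nobody wrote a test for.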


The fallout from the Gemini Incident has also revealed how much Google has had their thumb on the scales throughout their product line. Do a Google image search for "scientist," and then do the same image search on DuckDuckGo. What do you notice?


None of your Gemini share links worked for me :-(. I tested both while logged into my Google account and in Incognito.


Great post as usual, but I have to disagree with the conclusion you draw. This incident, while hilarious, seems to be an example of perfect AI alignment.

By the evidence that you show here and in your other posts, it appears that Gemini image-generation has a high-level layer of instructions to insert various (American) minority ethnic groups into images, for reasons of inclusiveness and diversity ("specify different ethnic terms if I forgot to do so"). This layer obviously didn't program itself; the various engineers at Google deeply concerned with DEI inserted it in order to head off the inevitable accusations of AI bias that the internet loves so much.

This is completely in concert with the efforts of Google to align Gemini's "personality" towards liberal Silicon Valley tech employee cultural mores. This tendency is also evident in GPT-4, probably inserted through lots of RLHF, and it seems to have worked to perfection. The fact that there's a major reaction to it on the internet shows that these cultural mores are out of step with the rest of the world, not that Google failed some sort of alignment test. The AI did exactly what they told it to, they told it to do those things deliberately as part of a calculated response to the previous accusations of AI training set bias, and they overdid it (a lot). The text model has less in-your-face, but just as obviously intentional, biases towards progressive causes. What was unexpected was the reaction, not the AI's behavior.
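The kind of prompt-rewriting layer described above can be sketched as a simple wrapper. This is a hypothetical reconstruction only; Google has not published its actual implementation, and the term lists and trigger logic below are invented for the sketch:

```python
import random

# Hypothetical illustration of a prompt-rewriting layer that injects
# ethnic descriptors. The term lists and trigger words are invented;
# this is not Google's actual implementation.
DIVERSITY_TERMS = ["South Asian", "Black", "Hispanic", "East Asian"]
PEOPLE_WORDS = {"person", "people", "man", "woman", "scientist", "doctor"}

def rewrite_prompt(user_prompt: str) -> str:
    """If the prompt mentions people without specifying an ethnicity,
    append an ethnic descriptor before passing it to the image model."""
    words = set(user_prompt.lower().split())
    mentions_people = bool(words & PEOPLE_WORDS)
    specifies_ethnicity = any(
        term.lower() in user_prompt.lower() for term in DIVERSITY_TERMS
    )
    if mentions_people and not specifies_ethnicity:
        return f"{user_prompt}, {random.choice(DIVERSITY_TERMS)}"
    return user_prompt
```

Even in this toy form, the failure mode is visible: the rewrite fires on every people-mentioning prompt, with no awareness of historical or contextual constraints the user's request might carry.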


Gemini takes seem very Conflict vs. Mistake. I tend to default to Mistake, but also wonder a bit more when thinking about high stakes, surely then there are at least some people who are actually attempting some optimization for their values.


This seems obvious to me: “A somewhat better version of this is to train the model to bluntly say ‘It is not my role to make statements on comparisons or value judgments, such as whether things are good or bad, or which thing is better or worse. I can offer you considerations, and you can make your own decision.’ And then apply this universally, no matter how stupid the question. Just the facts, ma’am.” Seriously, I can’t understand why this is not the approach. I don’t want an opinion from AI. I don’t want interpretation from AI. I want accuracy. Is it a problem that AI, like humans, doesn’t know when it doesn’t know? Why would AI be able to generate falsehoods? It seems to me it’s because its human programmers have no understanding of how we know what we know and what limits our knowledge.


> That this refusal to portray white people under almost all circumstances was racist, not because it was racist against white people, but because it was racist against people of color.

Yes, of course we all take your point and agree. But it's also important to remember that "racist against X" isn't always all that meaningful an accusation. For example, "racism" can refer to racial essentialism, which is offensive to everyone because it reinforces a worldview in which individuals of a race all share in some essential characteristic. If someone says that white people are uptight and weak, they are saying that non-whites are careless and aggressive -- there's no way to avoid demeaning every "race" if you attribute essential characteristics to any "race", because racial stereotypes always exist within a matrix in which different "races" are defined in contradistinction to each other.

In the Gemini generative image case, the "racism" of the model is in its tacit assumption that the existence of white people is some kind of threat to the psychological or physical safety of non-white people. It sucks to be told that your immutable characteristics make you a monster; but it also sucks to be told that your immutable characteristics make you a spineless, fragile sack of jelly who needs to be lied to about history to avoid lethally upsetting you.


> Industry standards for prompts have only degraded since 2020, in part because the procedure used to build prompts appears to be "copy antipatterns from other products that caused enough bloopers to get publicity but add a twist that makes things worse in a new & interesting way"

Security implemented as anti-patterns of past failures that cause bloopers that made the news... hey, that reminds me of another "safety" system designed and implemented at tremendous expense!

"Do not keep my shoes on while going through the airport security checkpoint. Do not bring liquid through the security checkpoint. Do not ..."


In case it wasn't clear, it looks like Gemini is conflating e/acc with https://en.m.wikipedia.org/wiki/Accelerationism


"Paul Graham: I get an even funnier version of that question. When I talk about the fact that in every period in history there were true things you couldn't safely say and that ours is no different, people ask me to prove it by giving examples."

Risky move by Paul here. Much safer to say "In every period in history there have been true things you couldn't say. Aside from the present, obviously." and let them draw their own conclusions.


I am struggling with a knee-jerk reaction: the incredibly meaningless and absurd minutiae of this Gemini story (it is really, really stupid that anyone would spend this much time on a topic like this) leads me to think that there is a lot of useless and irrelevant minutiae associated with AI as a product and economic engine, and that maybe it is not the world-changing development we are being told it is.


("Mitchel" should be "Margaret Mitchell", as mentioned here: https://www.bbc.com/news/technology-56135817 )
