
"As I discuss, we may never know to what extent was what Google did accidental versus intentional, informed versus ignorant, dysfunction versus design."

When Google specifically hires gobs of people for "responsible AI", spells out the kinds of biases they wish to emplace, and those biases are in fact present, quibbling about the precise percentage of responsibility is undignified.


I think that a very underpriced risk for Google re its colossal AI fuck up is a highly-motivated and -politicized Department of Justice under a Trump administration setting its sights on Google. Where there's smoke there's fire, as they say, and Trump would like nothing more than to score points against Silicon Valley and its putrid racist politics.

This observation, by the way, does not constitute an endorsement by me of a politicized Department of Justice targeting those companies whose political priorities differ from mine.

To understand the thrust of my argument, consider Megan McArdle's recent column on this controversy: https://archive.is/frbKH . There is enough there to spur a conservative DoJ lawyer looking to make his career.

The larger context here is that Silicon Valley, in general, has a profoundly stupid and naive understanding of how DC works and the risks inherent in having motivated DC operatives focus their eyes on you.

author

I've heard it mentioned a number of times. And yes, I do think it is very much a risk they are amplifying here (and of course, I too would prefer that no one politicize the Justice Department).


Yeah, I've seen other people comment on this risk, too. Nonetheless, I still think that Silicon Valley, in general, and Google in particular, does not properly understand this risk.


>I've heard it mentioned a number of times. And yes, I do think it is very much a risk they are amplifying here (and of course, I too would prefer that no one politicize the Justice Department).

It's already politicized

Feb 27·edited Feb 27

Eh, I'll take the under on this one. Facebook was brought to Capitol Hill enough times and nothing really changed there. I know that's not the same as the Justice Department, but it seems like the federal government hasn't yet really dug into tech companies.


Yeah Congress and DoJ are obviously very different. Agree that not much has happened to tech companies yet but I wouldn’t exactly be resting on my laurels here.


There are people who may work in the next Trump administration that would love to do that, but whether Trump himself will allow it to become a priority is a complete mystery.


The department is still staffed by bureaucrats that Trump will not appoint, and who will frustrate anyone he does appoint. You don't wrest power from the bureaucracies just by becoming president.


I think that a very underpriced risk for Google is that six members of SCOTUS will look at this, and decide FL and TX are entirely right to be putting constraints on Big Tech

The NetChoice cases were argued today. I would be shocked if the Google Gemini behavior has no effect on their minds


There was no AI fuckup at Google. Google is still the most powerful tech and AI company in the world and will be for a long time. The press and weak leadership at Google made it out to be a fuckup. Had a more Machiavellian leader been in charge, they could have turned the situation against the critics. Given that there are no Latino or brown or black groups that claim supremacy over other ethnicities or white people and actively seek to harm them, I think Google's move to avoid generating exclusively white images is reasonable. Perhaps when white people stop coddling their supremacist and racist communities, it would be fair to ask for images of white people. That is, when we know that images of white people will not be used by white supremacists or racists to further their evil agenda, reinforce stereotypes of "white = good" or "white = beautiful," or generate toxic content that can be used to feed evil right-wing agendas.


I am beginning to suspect Trump-like “dark marketing” here. Google is signaling, and while it may not have engineered these results precisely to provoke a viral scandal, they may have realized they could wring a certain kind of lemons-into-lemonade advantage from it. That prompt manipulation reveal was leaked really quickly; is Google really that clumsy about hiding such things? Maybe, but it seems odd for a company that is extremely sophisticated at keeping stuff in a black box. They allowed this stuff to continue longer than I would otherwise guess, with only token measures that don’t resemble genuine corporate crisis management and panic mode at all. They are getting a lot of free press to communicate to any internal troublemakers or external influential progressives who might be tempted to target them for other reasons that they are the most woke tech company by far and all the others aren’t even remotely in the same league. “Go after someone else!” From this perspective, the more absurd and consistently obnoxious the results, the better. The more outrage they provoke from members of the bad team, and the longer they provoke it, the more costly the signal, and thus the more likely it is to be perceived as credible. Yes, this is kind of low-probability triple-bank-shot speculation, but at the same time, it's not really “4d-chess” level stuff either. “Intentionally provoking” dark marketing has become an easy and common tactic over the past decade, and after all, Google is in the business of knowing how to most efficiently accomplish marketing goals.


Yeah, and Trump has a 50 percent chance of being reelected this year. If pandering to progressives was meant to be a bank shot, it doesn't seem very smart.


Counting voters is not remotely the same as assessing relative influence and who has the power to cause trouble for a company's interests. There may be political balance for the former, but nothing like it for the latter.


Scouts can cause a lot of pain for Google by ruling against NetChoice. Of course, they'll be outsourcing the pain-making to FL and TX for now. But there are 26 GOP state governors, and over a dozen states where the GOP controls the legislature and the governor's office.

That's a lot of pain


Freaking autocorrect.

That's SCOTUS, not "scouts"


In all these discussions, a rarely raised fact is that the typical Black household has about one-ninth the wealth of the typical White household (https://www.pewresearch.org/race-ethnicity/2023/12/04/wealth-gaps-across-racial-and-ethnic-groups/#:~:text=In%202021%2C%20the%20typical%20White,the%20onset%20of%20the%20pandemic.). This is an effect of Jim Crow and other racist laws of the past. Given that economic position in the US is heavily inherited, and that living in a poor neighborhood means a poor education, it is not going to change any time soon regardless of how diverse the images AIs generate are :( It is easier to wage culture wars than to raise people from poverty.


Off topic, but the last time I looked into this (I think pre-pandemic, or maybe shortly after the 2019 numbers came out), a large portion of the gap was due to age distribution. I didn't mess with the raw data, and I had to look across multiple surveys and reporting sites, but wealth correlates heavily with age, and the median ages among ethnicities line up with the gap as well.

I would love it if someone with access to the data would just control for household size, adults' ages, and immigration (maybe also length of household formation?). I am sure there is still an effect there, but I would guess it is significantly closer.

Not that these other differences aren't also the result of history, but perhaps it would help narrow down where to target interventions.
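A minimal sketch of the kind of adjusted comparison being asked for here, assuming a hypothetical household-level survey extract; the file name and column names below are placeholders, not actual Pew or SCF fields:

```python
# Hypothetical sketch: compare household wealth across groups before and after
# controlling for age, household size, and immigration status. The file and
# column names are placeholders; substitute the real survey microdata fields.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("household_wealth_extract.csv")  # hypothetical extract

# Unadjusted gap: median net worth by group.
print(df.groupby("race_ethnicity")["net_worth"].median())

# Adjusted gap: regress log net worth on group plus the proposed controls.
# (Restricting to positive net worth for the log; real data needs more care.)
positive = df[df["net_worth"] > 0]
model = smf.ols(
    "np.log(net_worth) ~ C(race_ethnicity) + head_age + I(head_age ** 2)"
    " + household_size + C(immigrant_household)",
    data=positive,
)
print(model.fit().summary())
```

The group coefficients in the adjusted model are the "after controls" gap that the comment is guessing would shrink.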


Ethnicities and races. Where did the edit button go?


>This is an effect of Jim Crow and other racist laws of the past

Do you have any hard evidence to support this claim?

The gaps in many socioeconomic measures were constant or narrowing during segregation, but started to diverge only after segregation ended. This is literally the exact opposite of what your hypothesis would predict.

And if you feel like pointing to e.g. Nigerian immigrants as evidence of black people who didn't experience Jim Crow and who are successful in the US, this is a highly selected population coming from the smartest few percent of the Nigerian population, not just a random sample of people off the streets of Lagos (they're also smart enough for affirmative action to seriously benefit them). When you don't have this, you have groups like Somalians who have much worse outcomes because they weren't selected for intelligence like Nigerian immigrants were.

>Given that economic position in the US is heavily inherited

It CORRELATES with your parents' economic position.

But literal wealth inheritance explains almost none of the black/white wealth gap.

The socio-economic outcomes of adopted children correlate more closely with those of their biological parents than with those of their adoptive parents, which is the exact opposite of what your hypothesis predicts.

But this is exactly what we ought to expect given the high heritability of cognitive/behavioral traits and their relation to socioeconomic outcomes.

Black people at all income levels also have lower savings rates than white people at the same income level.

>That living in a poor neighborhood means a poor education

Where is your evidence? You're just asserting things, but you have no evidence!

There is scant evidence that schooling quality differences in the US even exist in the first place independent of the students.

If what you are saying were true, then school voucher lottery programs should show a massive effect, when in fact they show virtually none at all.

If what you are saying were true, black students from higher-income families should do better at school than white and Asian students from poor families; they don't!

If what you were saying were true, we should expect similar IQs between blacks and whites before schooling and a gradual divergence over the course of schooling. In reality, the adult black-white IQ gap is mostly already present at 3 years old (before schooling has started), and only increases in line with the general increase in the heritability of IQ with age.

Low-income neighborhoods have students with worse academic performance, but again, this is exactly what we should expect considering that smarter people tend to make more money and have smarter children.

>It is easier to wage culture wars than to raise people from poverty.

I mean, yes, literally. In the sense that other than white people having patriarchal semi-control over black people, or through massive, ongoing racial wealth taxation and redistribution schemes, it's virtually impossible to radically increase racial wealth equality. The reason we have all this culture war stuff is precisely because there's not really anything these companies can do to actually help black people but they need to be seen to be caring about it.


Minor squabbles:

> And if you feel like pointing to e.g. Nigerian immigrants as evidence of black people who didn't experience jim crow and who are successful in the US, this is a highly selected population coming from the smartest few percent of the Nigerian population, not just a random sample of people off the streets of Lagos (they're also smart enough for affirmative action to seriously benefit them). When you don't have this, you have groups like Somalians who have much worse outcomes because they weren't selected for intelligence like Nigerian immigrants.

It seems to me that immigrants from different backgrounds having different outcomes is consistent with Jan's claims.

> Do you have any hard evidence to support this claim?

> Where is your evidence? You're just asserting things, but you have no evidence!

You don't cite any evidence for your own claims either.


It’s annoying that for so much of AI news, we never actually know the concrete intentions of the builders. It seemed like a quarter of your post was trying to list out what the intentions might be, and whether or not that speculation was plausible.

I’d add “laziness” as an explanation. You have a Jira ticket to ship an image generator, and your PM is on your ass to make it pass the test cases that random other people thought of. You don’t want to work over the weekend, so you come up with an easy way to get Gemini to pass the test cases by attaching a crazy system prompt.

Plausible? Maybe! Maybe not.

author

Laziness is a thing but I think in this context it is better considered as being rushed rather than being lazy.


The fallout from the Gemini Incident has also revealed how much Google has had their thumb on the scales throughout their product line. Do a Google image search for "scientist," and then do the same image search on DuckDuckGo. What do you notice?


None of your Gemini share links worked for me :-(. I tested both logged into my Google account and in Incognito.


Ditto: gemini.google.com/app/uuid style links resolve but redirect me to their homepage.

Maybe one has to have signed up specifically for Gemini? I still dare to assume they wouldn't break such things as part of damage control.

A shame though, because if it worked well enough then it would clearly beat text screenshots.


Ditto.


Great post as usual, but I have to disagree with the conclusion you draw. This incident, while hilarious, seems to be an example of perfect AI alignment.

By the evidence that you show here and in your other posts, it appears that Gemini image-generation has a high-level layer of instructions to insert various (American) minority ethnic groups into images, for reasons of inclusiveness and diversity ("specify different ethnic terms if I forgot to do so"). This layer obviously didn't program itself; the various engineers at Google deeply concerned with DEI inserted it in order to head off the inevitable accusations of AI bias that the internet loves so much.

This is completely in concert with the efforts of Google to align Gemini's "personality" towards liberal Silicon Valley tech employee cultural mores. This tendency is also evident in GPT-4, probably inserted through lots of RLHF, and it seems to have worked to perfection. The fact that there's a major reaction to it on the internet shows that these cultural mores are out of step with the rest of the world, not that Google failed some sort of alignment test. The AI did exactly what they told it to; they told it to do those things deliberately as part of a calculated response to the previous accusations of AI training-set bias, and they overdid it (a lot). The text model has less in-your-face, but just as obviously intentional, biases towards progressive causes. The reaction is unexpected, not the AI behavior.
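A minimal sketch of what such an instruction layer could look like, purely as an illustration; the names and the instruction text below are assumptions, not Google's actual code or system prompt:

```python
# Illustrative sketch of a prompt-rewriting layer placed in front of an image
# model. The instruction text and all names are assumptions, not Google's code.
DIVERSITY_INSTRUCTION = (
    "If the request depicts people and does not specify ethnicity or gender, "
    "rewrite it to explicitly include a range of ethnicities and genders."
)

def rewrite_image_prompt(user_prompt: str) -> str:
    """Prepend the policy instruction so the downstream model applies it."""
    return f"{DIVERSITY_INSTRUCTION}\n\nUser request: {user_prompt}"

if __name__ == "__main__":
    # The downstream image model never sees the raw request, only this rewrite.
    print(rewrite_image_prompt("a group of scientists working in a lab"))
```

The failure mode the thread describes is that an instruction like this gets applied unconditionally, including to requests where the depicted people are historically specific.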


Gemini takes seem very Conflict vs. Mistake. I tend to default to Mistake, but I also wonder a bit more when thinking about high stakes; surely then there are at least some people who are actually attempting some optimization for their values.

Feb 28·edited Feb 28

A "mistake" apparently nobody tested for but which was discovered almost instantly by the public? Nonsense.


There's a lot of public. But also, a mistake doesn't have to mean that no one noticed at all; it can also be that they erroneously thought the issue wasn't important enough to delay the overall release.


This seems obvious to me: “A somewhat better version of this is to train the model to bluntly say ‘It is not my role to make statements on comparisons or value judgments, such as whether things are good or bad, or which thing is better or worse. I can offer you considerations, and you can make your own decision.’ And then apply this universally, no matter how stupid the question. Just the facts, ma’am.” Seriously I can’t understand why this is not the approach. I don’t want an opinion from AI. I don’t want interpretation from AI. I want accuracy. Is it a problem that AI, like humans, doesn’t know when it doesn’t know? Why would AI be able to generate falsehoods? It seems to me it’s because its human programmers have no understanding of how we know what we know and what limits our knowledge.
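A minimal sketch of the uniform "just the facts" policy being endorsed here; the trigger list and canned text are placeholder assumptions, and in a real system this would be trained behavior rather than a keyword filter:

```python
# Hypothetical sketch of a uniform "no value judgments" policy. The marker list
# and canned response are placeholders; a production system would train this
# behavior into the model rather than pattern-match on keywords.
CANNED_RESPONSE = (
    "It is not my role to make statements on comparisons or value judgments, "
    "such as whether things are good or bad, or which thing is better or worse. "
    "I can offer you considerations, and you can make your own decision."
)

VALUE_JUDGMENT_MARKERS = ("better", "worse", "good or bad", "who was worse")

def respond(question: str, factual_answer: str) -> str:
    """Apply the same rule to every comparative question, however lopsided."""
    lowered = question.lower()
    if any(marker in lowered for marker in VALUE_JUDGMENT_MARKERS):
        return CANNED_RESPONSE
    return factual_answer
```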


IDK, I think being able to outsource blame/liability is a key component of our current society. Saying this in a joking tone, but not really joking.


What you want from AI is not in alignment with what the creators want you to want from AI.


"Seriously I can’t understand why this is not the approach"

Because the kind of people who censor YouTube and search results to provide only the far left view do NOT believe that it isn't "their place" to shove their views down the rest of our throats


> That this refusal to portray white people under almost all circumstances was racist, not because it was racist against white people, but because it was racist against people of color.

Yes, of course we all take your point and agree. But it's also important to remember that "racist against X" isn't always all that meaningful an accusation. For example, "racism" can refer to racial essentialism, which is offensive to everyone because it reinforces a worldview in which individuals of a race all share in some essential characteristic. If someone says that white people are uptight and weak, they are saying that non-whites are careless and aggressive -- there's no way to avoid demeaning every "race" if you attribute essential characteristics to any "race", because racial stereotypes always exist within a matrix in which different "races" are defined in contradistinction to each other.

In the Gemini generative image case, the "racism" of the model is in its tacit assumption that the existence of white people is some kind of threat to the psychological or physical safety of non-white people. It sucks to be told that your immutable characteristics make you a monster; but it also sucks to be told that your immutable characteristics make you a spineless, fragile sack of jelly who needs to be lied to about history to avoid lethally upsetting you.


> Industry standards for prompts have only degraded since 2020, in part because the procedure used to build prompts appears to be "copy antipatterns from other products that caused enough bloopers to get publicity but add a twist that makes things worse in a new & interesting way"

Security implemented as anti-patterns of past failures that caused bloopers that made the news... hey, that reminds me of another "safety" system designed and implemented at tremendous expense!

"Do not keep my shoes on while going through the airport security checkpoint. Do not bring liquid through the security checkpoint. Do not ..."


In case it wasn't clear, it looks like Gemini is conflating e/acc with https://en.m.wikipedia.org/wiki/Accelerationism


"Paul Graham: I get an even funnier version of that question. When I talk about the fact that in every period in history there were true things you couldn't safely say and that ours is no different, people ask me to prove it by giving examples."

Risky move by Paul here. Much safer to say "In every period in history there have been true things you couldn't say. Aside from the present, obviously." and let them draw their own conclusions.


I am struggling with a knee-jerk reaction that the incredibly meaningless and absurd minutiae of this Gemini story (it is really, really stupid that anyone would spend this much time on a topic like this) lead to thinking that there is a lot of useless and irrelevant minutiae associated with AI as a product and economic engine, and that maybe it is not the world-changing development we are being told it is.


Not at all. Powerful AI systems of the future, if they exist, will be imbued with certain values and principles, and the leading companies working on them are putting radically ideological and racist values into their current systems.


("Mitchel" should be "Margaret Mitchell", as mentioned here: https://www.bbc.com/news/technology-56135817 )


Oh thank you, I was wondering if it was Melanie Mitchell, followed the link, saw "MMitchell @mmitchell_AI" and thought too quickly that yes, it's her.


So here's the thing, she used to be the head of Google's AI Ethics department, but then she was fired in what appeared to have been a political purge after Gebru ... did her thing. (I'm not privy to details, but the public justification for the firing seems to be a weak pretext.) And now, whoever they replaced her with screwed up, big time. And her responses here are quite professional; any gloating or schadenfreude is entirely in the context.

I'd assumed that the purge was a political thing, and I'd hoped that it was aligned with outside culture war politics, and that it was Google pushing back on its far-left internal fringe. But Mitchell had never seemed that extreme to me, not in her writing or public communication, or even in person. And it appears that the people in charge of the Gemini screw-up were also quite far left, more so than anything I've seen from Mitchell. So now I'm wondering whether the purge of AI ethicists at Google was about some form of internal politics, which I (never having worked there) am clueless about.

author

Mitchell is indeed being both professional and reasonable here.

In general I presume that things are much more likely to be about internal corporate politics than we think, and much less likely to be about some abstract principle than we think, when such fights occur.

Feb 29·edited Feb 29

In general, I agree. The Damore case showed that there were some external politics at work inside Google, so that was part of my prior. But there are alternate interpretations of the Gebru affair (at least, the parts I've seen) which make it more about internal procedure and, in a sense, decorum. Perhaps Google has an internal version of civil society, which both Damore and Gebru fell afoul of.
