17 Comments

Hey boss, I work for a research-agency-adjacent company, and I recently heard about a focus-group test for a new product that, instead of taking 3 months of brainstorming, reformulating, and focus-group testing, accelerated it all with AI.

They created five customer "personas": customers that haven't bought the product in a while, customers that buy it occasionally, customers that buy it for their spouse, customers of a specific demographic, and loyal users. They asked ChatGPT to simulate a focus group, pausing whenever they needed an additional question from the moderator.

They presented 10 marketing campaign ideas and asked each customer persona to rate each idea from 1 to 10 on how compelling it was. They also asked what it would take to close the gap to 10. They took the top 3 ideas, refined them with that feedback, and re-presented them to the focus groups, which rated them highly.

Normally marketing campaigns don't pass muster with the end client's VP of marketing, and several revisions need to be made. This time, all three ideas were seen as excellent; the only discussion was which of the three would be used. Using image generation, they mocked them up.

The whole process, from start to finish, took 2 weeks. And, I could be wrong, but I feel like any of us could do it. When this research normally costs $50k for 3 months of work, the incredible reduction in cost, manpower, and organization, while still producing high-quality results, is the first time AI has impacted my (I thought relatively protected) job. Happy to answer any questions.
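For the curious: the team used the ChatGPT interface directly, but the core loop is simple enough to sketch programmatically. Below is a minimal, hypothetical version assuming the OpenAI Python client; the persona descriptions, campaign texts, and model name are placeholders, not the actual setup.

```python
# A rough sketch of the simulated-focus-group loop described above, assuming
# the OpenAI Python client (openai >= 1.0). Persona descriptions, campaign
# ideas, and the model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical personas, loosely matching the categories described above.
PERSONAS = {
    "lapsed": "a customer who hasn't bought the product in a while",
    "occasional": "a customer who buys the product occasionally",
    "spouse_buyer": "a customer who buys the product for their spouse",
    "target_demo": "a customer from a specific target demographic",
    "loyal": "a loyal, frequent user of the product",
}

def rate_campaign(persona: str, campaign: str) -> str:
    """Ask one persona to rate a campaign 1-10 and say what would close the gap to 10."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": f"You are {persona}, participating in a marketing "
                           "focus group. Stay in character and answer candidly.",
            },
            {
                "role": "user",
                "content": f"Campaign idea: {campaign}\n\n"
                           "On a scale of 1 to 10, how compelling is this to you? "
                           "What would it take to close the gap to a 10?",
            },
        ],
    )
    return response.choices[0].message.content

# Collect every persona's reaction to every idea; a real run would parse the
# scores, keep the top 3 ideas, refine them, and re-present them.
campaign_ideas = ["<campaign idea 1>", "<campaign idea 2>"]  # up to 10 ideas
for name, persona in PERSONAS.items():
    for idea in campaign_ideas:
        print(f"--- {name} on {idea} ---")
        print(rate_campaign(persona, idea))
```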


How do you think we can use this to accelerate AI safety marketing? I think that's the thing that's most needed now.


I remain amazed by the defense that the "little guys" are the ones who are throwing around 10+ million dollars on their training run. It's very much Big Tech overreach, once again, and it's something we need to keep fighting.


My favourite proposal is still the carbon fee and dividend. It has been a bill proposed in Congress for a long time, garnering some significant support but never coming close to passing. I am not saying that it would solve all AI problems, but it would certainly be a step in the right direction. And, in principle, easy to do.

author

Carbon taxes are both obviously correct and valuable, and also alas stupidly unpopular.


Do you know somebody who argued that a carbon fee and dividend would usefully double as an AI fee and dividend?


The government doesn't and never will understand the technical part. They just want to tax it. Use that as leverage.


I'm actually a bit concerned about the "with the necessary mental state" clarification as it seems to potentially make the bill much broader.

Consider someone who simply uses an AI in place of Google to learn how to commit a crime. We generally think that merely providing basic information that might be used to commit a crime isn't a problem, but of course, if the individual supplying the information had the intent to help criminals commit a crime, that could itself be criminal. I worry this is a potentially huge expansion of coverage, because there is a huge range of actions that become crimes once you insert some mental state; indeed, it's hard to think of almost any act that isn't a crime under some mental state.

Maybe it's fine if you read "autonomously" narrowly enough not to include merely informing a person of something, but such a definition of "autonomously" would make the bill far too narrow (then any output filtered through a human, even lying about the state of a reactor to induce a human to flip a switch, becomes uncovered).

As long as we get some dumb early test cases, it will probably be worked out in a reasonable way in the courts, but the risk is if the first test case is some mass shooting or something, with someone arguing the AI should have guessed the bad motives.


AI can indeed encourage and deceive in ways that go beyond search engines, so I think it's not really quite the same thing.


It may not always be the same thing, but if the goal is to write a law that isn't too broad, you don't want it to cover the provision of any information that could be used to commit a crime just because you can come up with some mental state a human could have under which providing that information becomes a crime.

More generally, it risks covering literally everything, because the law is often written quite broadly to make almost any act done with certain intents illegal. For instance, suppose an AI provides an entirely correct answer to an engineer building a dam about the strength of a specific kind of material or the safety factor required by law. Later the dam breaks and kills people, and it turns out the failure was that the engineer miscopied the units the AI had correctly reported.

Obviously that shouldn't be covered. The AI didn't act any differently than a lookup table in a reference book. However, if a human had given that answer with the specific intent of confusing the engineer so that people died, they'd be guilty of murder.

Basically, the problem is that much of the law operates on pragmatic rules of folk psychology, where we assume that people don't have certain kinds of intents unless we have evidence of them. When you replace those rules with a rule that says "there is some mental state which would make the act a crime," you end up covering pretty much everything.


What about an AI system that recommends that you kill others, like the one that convinced someone to kill himself?

https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

author

I very very strongly assume that this was not intended that way and any judge would laugh in your face if you tried it.

I take it to cover things like mens rea, ability to tell right from wrong, etc.


I get that it's not intended that way, but the problem is giving it a coherent reading that doesn't have this problem. I mean, how do you distinguish these cases from the others? What's different between a situation where mens rea is one element and the situations I described? In my examples, what distinguishes the illegal behavior from the legal behavior is literally the mens rea component.

My concern is that this kind of law is likely to end up being tested in the light of a tragedy, at which point there is a tendency to search out an interpretation that allows blame to be assigned. That's why I said it wouldn't concern me much if I believed it would be tested in court under boring fact patterns first.


I think it is really the opposite that concerns me: that liability does not apply to inadvertent damage.


Why not? It seems like the bill says that if there is a mental state that would make it a crime for a human to do, then ...


But the AI has to "autonomously" engage in the conduct. This fix is important because, e.g., even breaking someone's leg or worse isn't per se illegal; breaking someone's leg "negligently," "intentionally," "with malice aforethought," etc. are all illegal, in various ways. So this is just saying that we aren't going to get into any arguments about whether autonomous AI has the capacity for negligence or intent or malice or whatever, but it still has to do the damage "autonomously."


Right, that's what I raised at the end. I don't see how one defines "autonomously" in any way that doesn't either make an AI uncovered so long as it doesn't directly control any physical machinery, or count the choice of which information to provide a human as autonomous action.

Basically, does this clause apply to essentially nothing, because "autonomously" is read narrowly, or to everything, because it's read broadly?

I'm fine with making it very narrow, so it's only about direct control of physical devices, but that doesn't seem to be how most people are understanding the law.
