Discussion about this post

bbqturtle

Hey boss, I work for a company adjacent to a research agency, and I recently heard about focus-group testing for a new product where, instead of taking 3 months of brainstorming, reformulating, and focus-group testing, they accelerated it all with AI.

They created a set of customer “personas”: customers who haven’t bought the product in a while, customers who buy it occasionally, customers who buy it for their spouse, customers of a specific demographic, and loyal users. They asked ChatGPT to simulate a focus group, pausing whenever the moderator needed to ask an additional question.

They presented 10 marketing campaign ideas and asked each customer persona to rate how compelling each idea was on a scale of 1-10. They also asked what it would take to close the gap to 10. They took the top 3 ideas, refined them with the feedback, and re-presented them to the focus groups, which rated them highly.

Normally marketing campaigns don’t pass muster with the end client’s VP of marketing, and several revisions need to be made. This time, all three ideas were seen as excellent; the discussion was about which of the three would be used. They mocked them up using image generation.

The whole process from start to finish took 2 weeks. And, I could be wrong, but I feel like any of us could do it. Given that this research normally costs $50k for 3 months of work, the incredible reduction in cost, manpower, and organization, while still producing high-quality results, is the first time AI has impacted my (I thought relatively protected) job. Happy to answer any questions.
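For anyone curious what this could look like in practice, here is a minimal sketch of a persona-based "AI focus group" scripted against the OpenAI API. The personas, prompts, campaign placeholders, and model name are my own illustrative assumptions, not the agency's actual setup, and it doesn't capture the "pause for moderator follow-ups" step they described.

```python
# Rough sketch of a persona-based "AI focus group" using the OpenAI API.
# Personas, prompts, and model name are illustrative assumptions, not the
# actual setup described in the comment above.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PERSONAS = {
    "lapsed buyer": "You haven't bought the product in over a year.",
    "occasional buyer": "You buy the product a few times a year.",
    "buys for spouse": "You buy the product for your spouse, not yourself.",
    "loyal user": "You buy the product regularly and recommend it to friends.",
}

CAMPAIGN_IDEAS = [
    "Idea 1: ...",  # placeholder campaign descriptions
    "Idea 2: ...",
]

def rate_idea(persona_desc: str, idea: str) -> str:
    """Ask one persona to rate an idea 1-10 and say what would close the gap to 10."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system",
             "content": f"You are a focus-group participant. {persona_desc} "
                        "Answer in character, candidly."},
            {"role": "user",
             "content": f"Here is a marketing campaign idea:\n{idea}\n\n"
                        "Rate how compelling it is on a 1-10 scale, then say "
                        "what it would take to make it a 10."},
        ],
    )
    return response.choices[0].message.content

for name, desc in PERSONAS.items():
    for idea in CAMPAIGN_IDEAS:
        print(f"--- {name} ---")
        print(rate_idea(desc, idea))
```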

Peter Gerdes

I'm actually a bit concerned about the "with the necessary mental state" clarification as it seems to potentially make the bill much broader.

Consider someone who simply uses an AI in place of Google to learn how to commit a crime. We generally think that merely providing basic information which might be used to commit a crime isn't a problem, but, of course, if the individual supplying the information had the intent to help the criminals commit the crime, that could itself be criminal. Indeed, I worry this is a potentially huge expansion of coverage, because there is a huge range of actions that become crimes if you insert some mental state; I might suggest it's impossible to think of almost anything that isn't a crime under some mental state. Maybe it's fine if you understand "autonomously" narrowly enough so as not to include merely informing a person of something, but such a definition of "autonomously" would make the bill far too narrow (then any output filtered through a human, even lying about the state of a reactor to induce a human to flip a switch, becomes uncovered).

As long as we get some dumb early test cases, it will probably be worked out in a reasonable way in the courts, but the risk is that the first test case is some mass shooting or something, with someone arguing the AI should have guessed the bad motives.

