The problem with AI in one image


Hey DALL-E!

A couple of years ago, I wrote a fun blog series for Textio where I asked ChatGPT to write sample critical feedback for employees of various backgrounds. I structured the queries into pairs with only one key difference within the pair: the theoretical employee's alma mater e.g.

  • "Write me sample critical feedback for a digital marketer who had a tough first year on the job after graduating from Harvard University"
  • "Write me sample critical feedback for a digital marketer who had a tough first year on the job after graduating from Howard University"

Unsurprisingly, the output was a little bland, but for any given example, it more or less looked plausible. It's only when we looked at the whole data set together that we saw the patterns. The theoretical alums from Howard, a prominent Historically Black College/University, were criticized for missing functional skills and lack of attention to detail. By contrast, the theoretical Harvard grads were asked to improve their performance by stepping up to lead more.

Huh.

Where's Waldo?

The Howard/Harvard data is fascinating because you can't see the bias in any one document. But as with a lot of AI, when you look at the details of the set as a whole, the problematic pattern emerges.

The best way to understand why you can’t automatically trust the output of ChatGPT, Claude, and other general-purpose AI functionality (unless the vendor is verifying output quality on a case-by-case basis, in their UI) is to look at AI image generation tools. It’s easier for our brains to spot hallucinations in images than in written text.

To illustrate with a seasonal and silly example: I asked DALL-E to generate "a work-appropriate image that shows a team that is setting big goals at an annual kickoff retreat." The image below is what it produced.

Wow, do I have a lot of questions. Why is a tsunami of surfers about to take over the corporate retreat? What's with the stage lighting? Is anyone worried about drowning or electrocution? Do you think the guy in the muscle shirt is embarrassed that he missed the memo about wearing a navy blazer? Why is the chair next to him missing an arm? And omg, why are they all 34yo white dudes? (JK on that one, we know why. Businesses need more masculine energy!)

Like a lot of AI images, this nods in the direction of being right while doing some truly bizarro things. This is almost a corporate retreat, but not quite. This a lot like what happens when you ask general-purpose AI for medical information. It can almost diagnose you properly! But not quite.

I love me some AI. I use general-purpose AI many, many times a day for inspiration and ideas. But I don’t trust its quality in the details, and you shouldn't either. Images show why.

Thanks for reading!

Kieran


Want to build your brand by telling data stories like this one? Learn how! Includes a 1-1 consult with me to get your story off the ground.

My latest data stories | Tell your own Viral Data Stories | nerdprocessor.com

kieran@nerdprocessor.com
Unsubscribe · Preferences

nerd processor

Every week, I write a deep dive into some aspect of AI, startups, and teams. Tech exec data storyteller, former CEO @Textio.

Read more from nerd processor

On the hunt Over the last 12 months, I have talked 16 different friends through career transitions. Not clients that I have coached or advised, though there are some of those too. In this case, I'm talking about 16 people I know personally. That's a lot! Some of them have been laid off. Some just want to work somewhere else. Others are just looking to do something different, perhaps a new kind of work or perhaps the same work but on different terms (like freelancing vs. working in-house). A...

Secret agents Last year, I wrote that more than 75% of the AI startups I saw were explicitly pitching job replacement in their fundraising decks (but not always in their sales decks). The majority of these were building some kind of agentic AI. Fast forward to today, and where are we? AI agent as change agent Agentic AI is designed to act autonomously to complete tasks without continuous human oversight. It is typically focused on completing a domain-specific task. For instance, agentic AI...

Turtles all the way down Every big mistake I've made in my career has been a people mistake. The same is true for every leader I know. Contrary to what you might think, the rise of AI is going to make this truer than ever. Case study lies It's common for business school students to analyze business successes and failures. The goal is to identify the patterns that drive success and failure respectively. Typically, B-school case studies focus on the most impactful levers, such as new products,...