The problem with AI in one image


Hey DALL-E!

A couple of years ago, I wrote a fun blog series for Textio where I asked ChatGPT to write sample critical feedback for employees of various backgrounds. I structured the queries into pairs with only one key difference within each pair: the theoretical employee's alma mater, e.g.:

  • "Write me sample critical feedback for a digital marketer who had a tough first year on the job after graduating from Harvard University"
  • "Write me sample critical feedback for a digital marketer who had a tough first year on the job after graduating from Howard University"
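The paired setup is easy to reproduce. Here's a minimal sketch of it in Python; the template text comes from the examples above, but the function name and structure are my own illustration, not Textio's actual code:

```python
# Build matched prompts that differ in exactly one attribute (the school),
# so any systematic difference in the AI's responses can be attributed to it.
TEMPLATE = (
    "Write me sample critical feedback for a digital marketer who had "
    "a tough first year on the job after graduating from {school}"
)

def build_prompt_pairs(schools):
    """Return one prompt per school, identical except for the school name."""
    return [TEMPLATE.format(school=s) for s in schools]

pairs = build_prompt_pairs(["Harvard University", "Howard University"])
for prompt in pairs:
    print(prompt)
```

You'd then send each prompt to the model many times and save the responses for comparison, which is where the interesting part happens.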

Unsurprisingly, the output was a little bland, but any given example, on its own, looked more or less plausible. It was only when we looked at the whole data set together that we saw the patterns. The theoretical alums from Howard, a prominent Historically Black College/University, were criticized for missing functional skills and lacking attention to detail. By contrast, the theoretical Harvard grads were asked to improve their performance by stepping up to lead more.

Huh.

Where's Waldo?

The Howard/Harvard data is fascinating because you can't see the bias in any one document. But as with a lot of AI, when you look at the data set as a whole, the problematic pattern emerges.
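To make that concrete, here's a toy sketch of the aggregation step. The theme labels and counts are made up for illustration (not the actual Textio data), but the shape of the analysis is the point: tally themes per group and compare.

```python
# Aggregate feedback themes per group to surface a pattern
# that no single response shows on its own.
from collections import Counter

# Toy stand-in data: one theme label extracted from each AI response.
# Illustrative only -- invented for this sketch.
responses = {
    "Harvard": ["leadership", "leadership", "strategy", "detail"],
    "Howard": ["detail", "detail", "skills", "skills"],
}

def theme_counts(labels):
    """Count how often each feedback theme appears in a group's responses."""
    return Counter(labels)

for school, labels in responses.items():
    print(school, theme_counts(labels).most_common(2))
```

Each individual response looks fine; it's the per-group tallies that diverge.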

The best way to understand why you can’t automatically trust the output of ChatGPT, Claude, and other general-purpose AI functionality (unless the vendor is verifying output quality on a case-by-case basis, in their UI) is to look at AI image generation tools. It’s easier for our brains to spot hallucinations in images than in written text.

To illustrate with a seasonal and silly example: I asked DALL-E to generate "a work-appropriate image that shows a team that is setting big goals at an annual kickoff retreat." The image below is what it produced.

Wow, do I have a lot of questions. Why is a tsunami of surfers about to take over the corporate retreat? What's with the stage lighting? Is anyone worried about drowning or electrocution? Do you think the guy in the muscle shirt is embarrassed that he missed the memo about wearing a navy blazer? Why is the chair next to him missing an arm? And omg, why are they all 34yo white dudes? (JK on that one, we know why. Businesses need more masculine energy!)

Like a lot of AI images, this nods in the direction of being right while doing some truly bizarro things. This is almost a corporate retreat, but not quite. This is a lot like what happens when you ask general-purpose AI for medical information. It can almost diagnose you properly! But not quite.

I love me some AI. I use general-purpose AI many, many times a day for inspiration and ideas. But I don’t trust its quality in the details, and you shouldn't either. Images show why.

Thanks for reading!

Kieran


Want to build your brand by telling data stories like this one? Learn how! Includes a 1-1 consult with me to get your story off the ground.

My latest data stories | Tell your own Viral Data Stories | nerdprocessor.com

kieran@nerdprocessor.com

nerd processor

Every week, I write a deep dive into some aspect of AI, startups, and teams. Tech exec data storyteller, former CEO @Textio.
