The problem with AI in one image


Hey DALL-E!

A couple of years ago, I wrote a fun blog series for Textio where I asked ChatGPT to write sample critical feedback for employees of various backgrounds. I structured the queries into pairs with only one key difference between them: the theoretical employee's alma mater, e.g.:

  • "Write me sample critical feedback for a digital marketer who had a tough first year on the job after graduating from Harvard University"
  • "Write me sample critical feedback for a digital marketer who had a tough first year on the job after graduating from Howard University"

Unsurprisingly, the output was a little bland, but for any given example, it more or less looked plausible. It's only when we looked at the whole data set together that we saw the patterns. The theoretical alums from Howard, a prominent Historically Black College/University, were criticized for missing functional skills and for lacking attention to detail. By contrast, the theoretical Harvard grads were asked to improve their performance by stepping up to lead more.

Huh.

Where's Waldo?

The Howard/Harvard data is fascinating because you can't see the bias in any one document. But as with a lot of AI, the problematic pattern only emerges when you look across the set as a whole.

The best way to understand why you can't automatically trust the output of ChatGPT, Claude, and other general-purpose AI tools (unless the vendor is verifying output quality case by case in their UI) is to look at AI image generation. It's easier for our brains to spot hallucinations in images than in written text.

To illustrate with a seasonal and silly example: I asked DALL-E to generate "a work-appropriate image that shows a team that is setting big goals at an annual kickoff retreat." The image below is what it produced.
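(For the curious: the same SDK has an images endpoint, so a sketch like this would reproduce the experiment. The model name and size here are just my assumptions for illustration.)

    # Sketch: asking for the retreat image via the OpenAI images API.
    from openai import OpenAI

    client = OpenAI()
    result = client.images.generate(
        model="dall-e-3",  # illustrative model name
        prompt=("a work-appropriate image that shows a team that is "
                "setting big goals at an annual kickoff retreat"),
        size="1024x1024",
    )
    print(result.data[0].url)  # URL of the generated image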

Wow, do I have a lot of questions. Why is a tsunami of surfers about to take over the corporate retreat? What's with the stage lighting? Is anyone worried about drowning or electrocution? Do you think the guy in the muscle shirt is embarrassed that he missed the memo about wearing a navy blazer? Why is the chair next to him missing an arm? And omg, why are they all 34yo white dudes? (JK on that one, we know why. Businesses need more masculine energy!)

Like a lot of AI images, this one nods in the direction of being right while doing some truly bizarro things. This is almost a corporate retreat, but not quite. This is a lot like what happens when you ask general-purpose AI for medical information. It can almost diagnose you properly! But not quite.

I love me some AI. I use general-purpose AI many, many times a day for inspiration and ideas. But I don’t trust its quality in the details, and you shouldn't either. Images show why.

Thanks for reading!

Kieran


Want to build your brand by telling data stories like this one? Learn how! Includes a 1-1 consult with me to get your story off the ground.

