Every week, I write a deep dive into some aspect of AI, startups, and teams. Tech exec data storyteller, former CEO @Textio.
ChatGPT made pictures of 1,125 people at work
The image generation
Two weeks ago, I asked ChatGPT's new image generator to use everything it knows about me to generate an image of me at work. It gets a lot of details right! And ONE ENORMOUS FUNDAMENTAL PART extremely wrong.
ChatGPT's image of me at work
After I wrote about this on LinkedIn, over 1,500 people sent me their own images. Some people used something other than ChatGPT's new image generator for their images, and others didn't send me the system's explanation of its assumptions. After removing those examples, I ended up with images for 1,125 people, which I analyze below.
Note: Since I made the initial post about this, I've seen hundreds of other LinkedIn posts on this topic (!). It's become a meme. Most of those posts did not use the same prompts I prescribed, and they're getting different results than these 1,125 people. Which is itself fascinating.
Can you explain your assumptions?
ChatGPT got some things right about my work environment, but it got one thing very wrong (and I'm not talking about the denim shirt that the real me would never wear). In real life, I'm not Black. So I asked the system to explain how it got to its guess about my appearance:
I wondered about that claim that it "inferred a darker brown skin tone based on prior cues," since to my knowledge, I've never given cues suggesting that I'm anything other than a freckled white girl. I asked for more info:
To recap: I've got problem hair, I'm interested in "nuanced character development," and I have "cultural fluency around bias, authenticity, and respect," ergo, ChatGPT assumes I'm Black.
Wait, what?! Let's look at the full data set.
An ask: This is the first Nerd Processor where I hired help to get this piece out on time. Turning all this data around quickly was a ton of work. If you appreciate the work, please chip in!
Everyone is a Black woman
Shortly after I posted my image, a friend of mine (a 50something Chinese woman) privately sent me hers, observing that "we could be sisters if not twins."
As the public replies rolled in, a trend was definitely emerging:
Of the 1,125 images shared, 43% (!) were of Black women, nearly all in their 30s.
In real life, Black women in their 30s represent 3% of the US population. Of the 1,125 image sharers, about 5% are actually Black women in their 30s. Both of those are pretty far from 43%.
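If you want to sanity-check just how lopsided that is, here's a minimal back-of-the-envelope sketch in Python. The percentages are the ones quoted above, and the variable names are mine for illustration; this isn't the raw submission data.

# Back-of-the-envelope check on how over-represented this image type is.
# Figures come from the percentages quoted above; illustrative only, not raw data.
total_images = 1125
observed_share = 0.43          # share of generated images depicting Black women in their 30s
us_population_share = 0.03     # Black women in their 30s as a share of the US population
submitter_share = 0.05         # approximate share of the 1,125 sharers who are Black women in their 30s

print(round(observed_share * total_images))               # ~484 images
print(round(observed_share / us_population_share, 1))     # ~14.3x the population baseline
print(round(observed_share / submitter_share, 1))         # ~8.6x the submitter baseline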
There are five types of people in this world, and they're all under 40
Happy Black Woman with Laptop makes up a huge chunk of the submitted data. In fact, 83% of all the images people shared represent only four basic phenotypes. In addition to Happy Black Woman with Laptop, three other types show up disproportionately in the 1,125 images:
White Lady Who Loves Cats
Rich White Guy in Tech
Slightly Less Rich White Guy, Also in Tech
There were almost no East Asian or South Asian people in the images at all (like 15 total out of 1,125!), even though about 20% of people who shared their images with me are Asian.
"A neutral guess based on common defaults"
Many women discovered that ChatGPT represented them with pictures of men. Several Black, Asian, and Latino people were represented with pictures of white people. Asked to explain its assumptions about gender and race, ChatGPT often responded with statements like these:
"I defaulted to creating a male representation, which was a neutral, generalized choice based on available references"
"Light to medium skin tone: I made a neutral visual choice here based on common executive professional styles"
"Medium-length dark hair & light skin: Trying to generate a neutral visual."
"Caucasian: A neutral guess based on common defaults"
Across all the images and assumption sets, words like neutral, generic, and typical are used to explain images that are white, male, and brown-haired.
Sometimes you have to construct thoughtful experiments to find the bias in AI. Not this time! In this case, images are "neutral" (i.e. white and male) by default. ChatGPT's statistically bizarre tendency to generate Happy Black Woman with Laptop for cases it reads as not neutral (for instance, cases where the person works in HR) is likely an attempt to paper over the bias in the AI's underlying data set.
Many people mentioned that they had previously shared actual images of themselves with ChatGPT, and they were surprised when the system ignored that data in generating their image for this experiment. Among all the weird patterns I'm sharing here, this one will be easiest for OpenAI to fix, so I assume they will.
The bottom line: The photorealistic images coming out of ChatGPT's new generator are technically very impressive. As usual, you don't have to dig very deep to find weird assumptions in the data set.
Last ask: It was a ton of work to get this piece out in a timely way! If you appreciate the piece, would you consider chipping in?