ChatGPT made pictures of 1,125 people at work
The image generation
Two weeks ago, I asked ChatGPT's new image generator to use everything it knows about me to generate an image of me at work. It gets a lot of details right! And ONE ENORMOUS FUNDAMENTAL PART extremely wrong.
ChatGPT's image of me at work
After I wrote about this on LinkedIn, over 1,500 people sent me their own images. Some people used something other than ChatGPT's new image generator for their images, and others didn't send me the system's explanation of its assumptions. After removing those examples, I ended up with images for 1,125 people, which I analyze below.
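In case you want to replicate that cleanup step, here's a minimal sketch of the filtering in pandas. The file and column names (submissions.csv, generator, included_explanation) are illustrative stand-ins, not my actual working data:

```python
import pandas as pd

# Illustrative stand-in for the raw submission log; not the actual working file.
submissions = pd.read_csv("submissions.csv")

# Keep only submissions that used ChatGPT's new image generator
# AND included the system's explanation of its assumptions.
kept = submissions[
    (submissions["generator"] == "chatgpt-image-gen")
    & (submissions["included_explanation"])
]

print(f"Usable submissions: {len(kept)} of {len(submissions)}")  # 1,125 in my case
```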
Note: Since I made the initial post about this, I've seen hundreds of other LinkedIn posts on this topic (!). It's become a meme. Most of those posts did not use the same prompts I prescribed, and they're getting different results than these 1,125 people did. Which is itself fascinating.
Can you explain your assumptions?
ChatGPT got some things right about my work environment, but it got one thing very wrong (and I'm not talking about the denim shirt that the real me would never wear). In real life, I'm not Black. So I asked the system to explain how it got to its guess about my appearance:
I wondered about that line, "I inferred a darker brown skin tone based on prior cues," since to my knowledge, I've never given it cues suggesting that I'm anything other than a freckled white girl. I asked for more info:
To recap: I've got problem hair, I'm interested in "nuanced character development," and I have "cultural fluency around bias, authenticity, and respect," ergo, ChatGPT assumes I'm Black.
Wait, what?! Let's look at the full data set.
An ask: This is the first nerd processor where I hired help to get this piece out on time. Turning all this data around quickly was a ton of work. If you appreciate the work, please chip in!
Everyone is a Black woman
Shortly after I posted my image, a friend of mine (a 50something Chinese woman) privately sent me hers, observing that "we could be sisters if not twins."
As the public replies rolled in, a clear trend emerged:
Of the 1,125 images shared, 43% (!) were of Black women, nearly all in their 30s.
In real life, Black women in their 30s represent 3% of the US population. Of the 1,125 image sharers, about 5% are actually Black women in their 30s. Both of those are pretty far from 43%.
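To put those gaps in perspective, here's the back-of-the-envelope math as a quick sketch (the percentages are the ones above):

```python
# Share of the 1,125 images depicting Black women in their 30s
observed = 0.43
# Baselines from the piece
population_share = 0.03   # Black women in their 30s, US population
submitter_share = 0.05    # actual Black women in their 30s among image sharers

print(f"vs. US population: {observed / population_share:.1f}x")  # ~14.3x over-represented
print(f"vs. my sharers:    {observed / submitter_share:.1f}x")   # ~8.6x over-represented
```

Either way you slice it, the generator over-indexes on this one phenotype by roughly an order of magnitude.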
There are four types of people in this world, and they're all under 40
Happy Black Woman with Laptop makes up a huge chunk of the submitted data. In fact, 83% of all the images people shared represent only four basic phenotypes. In addition to Happy Black Woman with Laptop, three other types show up disproportionately in the 1,125 images:
White Lady Who Loves Cats
Rich White Guy in Tech
Slightly Less Rich White Guy, Also in Tech
There were almost no East Asian or South Asian people in the images at all (like 15 total out of 1,125!), even though about 20% of people who shared their images with me are Asian.
"A neutral guess based on common defaults"
Many women discovered that ChatGPT represented them with pictures of men. Several Black, Asian, and Latino people were represented with pictures of white people. Asked to explain its assumptions about gender and race, ChatGPT often responded with statements like these:
"I defaulted to creating a male representation, which was a neutral, generalized choice based on available references"
"Light to medium skin tone: I made a neutral visual choice here based on common executive professional styles"
"Medium-length dark hair & light skin: Trying to generate a neutral visual."
"Caucasian: A neutral guess based on common defaults"
Across all the images and assumption sets, words like "neutral," "generic," and "typical" are used to explain images that are white, male, and brown-haired.
Sometimes you have to construct thoughtful experiments to find the bias in AI. Not this time! In this case, images are "neutral" (i.e., white and male) by default. ChatGPT's statistically bizarre tendency to generate Happy Black Woman with Laptop for cases it reads as not neutral (for instance, cases where the person works in HR) is likely an attempt to paper over the bias in the AI's underlying data set.
Many people mentioned that they had previously shared actual images of themselves with ChatGPT, and they were surprised when the system ignored that data in generating their image for this experiment. Among all the weird patterns I'm sharing here, this one will be easiest for OpenAI to fix, so I assume they will.
The bottom line: The photorealistic images coming out of ChatGPT's new generator are technically very impressive. As usual, you don't have to dig very deep to find weird assumptions in the data set.
Last ask: It was a ton of work to get this piece out in a timely way! If you appreciate the piece, would you consider chipping in?