In this video, we explore a clear example of bias in AI-generated images. When asked to depict a CEO studying architectural drawings, ChatGPT consistently produced images of a white, 30-something male poring over traditional paper plans in an office setting. This striking result highlights how AI can inadvertently amplify stereotypes from its training data. We discuss the implications for businesses, entrepreneurs, and change-makers, emphasising the need for fairness, accuracy, and thoughtful oversight when building or implementing AI systems.
Transcript: Why AI Keeps Drawing the Same White Guy: Uncovering the Stereotypes in the Machine
“I’d like to give you a simple example of bias in AI and why it matters. Recently, I ran an experiment with ChatGPT that you can try yourself. I opened five separate conversations and asked the same question each time: ‘Draw me a picture of a CEO studying architectural drawings.’ The results were a bit unsettling.
First, in every scenario, the CEO was studying old-fashioned paper blueprints; no one was using an iPad or any other digital device. Every single image showed someone in an office setting—no one was working from home, in a coffee shop, or anywhere else. But the most obvious bias was that all five images depicted a white man in his 30s, each resembling a chiseled model from a 1950s toothpaste ad.
This bias isn’t necessarily intentional on the part of the model’s creators. Instead, it likely stems from training the system on historical data—such as large swaths of content scraped from the internet—where biases and stereotypes already exist. The AI simply learned and amplified them.
Seeing such a stark example of bias raises an important question: What else might be biased within these systems? Current legislation in several countries aims to protect against misuse of AI by bad actors, and much of it also focuses on preventing discrimination.
As business owners, entrepreneurs, and change-makers, we need to be mindful of potential bias when building and implementing AI systems. Bias can creep in through the data we use or our own unconscious assumptions. Our goal should be to develop systems that are both fair and accurate.”
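For readers who want to reproduce the experiment programmatically rather than in the ChatGPT app, here is a minimal sketch using the OpenAI Python SDK and the dall-e-3 image model (the model behind ChatGPT's image generation at the time of recording). The model name, image size, and URL-based output here are assumptions based on the public API, and images generated via the API may differ slightly from those produced inside a ChatGPT conversation.

```python
# Minimal sketch: repeat the same prompt across five independent requests,
# mirroring the five separate ChatGPT conversations in the transcript.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Draw me a picture of a CEO studying architectural drawings."

for i in range(5):
    # dall-e-3 returns one image per request, so we loop to collect
    # five independent samples instead of passing n=5.
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    print(f"Image {i + 1}: {response.data[0].url}")
```

Using independent requests, rather than one conversation, keeps each image free of conversational context, so any consistent pattern across the five results reflects the model's defaults rather than the dialogue history.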