Artificial intelligence is everywhere. It curates our playlists, unlocks our phones, and, more and more, it decides who gets a loan, who gets hired, or who ends up in a police lineup. We like to think of AI as this cold, neutral machine — emotionless, logical, immune to the ugliness of human bias.
But here’s the uncomfortable truth: AI can absolutely be biased. Not because it chooses to be, but because it learns from us — and we’ve got a lot of messy history in our data.
How AI Learns
At its core, AI is just a pattern machine. It looks at thousands or millions of examples and tries to find trends. If it sees that certain faces tend to go with certain names, or that certain zip codes correlate with loan defaults, it starts making predictions based on that.
But what happens when those patterns are unfair? What if the data it’s learning from is already full of bias — the result of decades of discrimination in hiring, housing, or policing? That’s when AI doesn’t just reflect bias. It magnifies it.
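To make that concrete, here’s a deliberately tiny sketch in Python, with entirely synthetic data and an off-the-shelf scikit-learn classifier. Nothing in it comes from a real system; it just shows the mechanism: nobody tells the model to discriminate, but because the historical labels were discriminatory, it learns a penalty on a zip-code feature that has nothing to do with merit.

```python
# Toy illustration: a model trained on biased historical decisions
# learns the bias as if it were signal. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: an applicant's actual creditworthiness ("merit"),
# and a zip-code group (0 or 1) that has nothing to do with merit.
merit = rng.normal(size=n)
zip_group = rng.integers(0, 2, size=n)

# Historical "approved" labels encode past discrimination:
# group 1 was denied more often regardless of merit.
approved = (merit + rng.normal(scale=0.5, size=n) - 1.0 * zip_group) > 0

X = np.column_stack([merit, zip_group])
model = LogisticRegression().fit(X, approved)

# The fitted model carries a clear negative weight on zip_group:
# the discrimination is now baked into every future prediction.
print(f"weight on merit:    {model.coef_[0][0]:+.2f}")
print(f"weight on zip code: {model.coef_[0][1]:+.2f}")
```

And if that model’s decisions become next year’s training data, the loop closes: the bias feeds itself.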
The Facial Recognition Failures
Let’s talk about facial recognition. Sounds harmless, right? It’s just math — measuring the distance between your eyes, the shape of your nose, the curve of your chin — and turning it into a code that a computer can read.
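Under the hood, a “match” is usually just a distance check between two of those codes, often called embeddings. Here’s a minimal sketch with made-up four-number vectors and a made-up threshold; real systems use much longer vectors, but the logic is the same.

```python
# Sketch of face matching: two faces "match" when their embedding
# vectors are close enough. Vectors and threshold are invented.
import numpy as np

def is_match(embedding_a, embedding_b, threshold=0.6):
    """Declare a match when the Euclidean distance is under the threshold."""
    return np.linalg.norm(embedding_a - embedding_b) < threshold

# Pretend these came out of a face-encoding model.
suspect_photo = np.array([0.11, 0.42, 0.77, 0.25])
your_photo    = np.array([0.14, 0.45, 0.74, 0.28])

print(is_match(suspect_photo, your_photo))  # True: "close enough" is the whole test
```

Notice what the check can’t see: if the model that produced those vectors was trained mostly on lighter-skinned faces, its codes are simply less reliable for everyone else, and the distance test has no way to flag that.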
But who taught the AI what a “face” looks like?
In 2018, researchers Joy Buolamwini and Timnit Gebru put facial analysis tools from big-name tech companies to the test. The systems were nearly flawless at classifying the gender of lighter-skinned men, with error rates under 1%. But for darker-skinned women? The error rate shot up to almost 35% (Buolamwini & Gebru, 2018). That’s not a small glitch.
A Real-Life Nightmare
Robert Williams, a Black man living in Detroit, was arrested in front of his family because a facial recognition system thought he looked like someone who shoplifted from a store he hadn’t even been to (Hill, 2020). The image it used? A blurry still from a security camera. The match? Completely wrong. But the police treated the software’s guess like fact.
He was held in a cell for hours. All because a machine got it wrong — and no one questioned it.
A Tag Gone Wrong
Then there’s the infamous case where Google Photos auto-tagged two Black people in a user’s gallery as “gorillas” (Simonite, 2018). Google’s response? They simply removed the “gorilla” tag. The problem wasn’t solved; it was just avoided.
Why This Matters
AI isn’t just used for convenience anymore. It’s being trusted to make big decisions — life-altering ones.
In Criminal Justice
Facial recognition tools are used by law enforcement all over the U.S., despite clear evidence that they misidentify people of color more often. Think about that: you could be stopped, searched, or even arrested — not for something you did, but because a poorly trained system decided you resemble someone else.
In Hiring
Companies are using AI to scan résumés and rank job applicants. But if the system was trained on past hires — and those hires were mostly white or male — guess what it learns to look for? Not talent. Not potential. Just similarity. It’s bias disguised as efficiency.
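There is at least one simple sanity check anyone can run on a screening model’s output: compare selection rates across groups. The numbers below are invented, and the 0.8 cutoff is the “four-fifths rule,” a rough benchmark U.S. regulators have long used for disparate impact.

```python
# A basic audit of any screening model: compare selection rates
# across applicant groups. All decisions below are invented.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = advanced to interview, 0 = rejected, for two applicant groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # 70% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # 30% selected

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.43, far below the 0.8 benchmark
```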
In Healthcare
Doctors are starting to rely on algorithms to predict patient needs, but some studies show those tools underestimate the health risks of Black patients, even when symptoms are the same (Obermeyer et al., 2019). Why? Because the algorithm used past healthcare spending as a stand-in for health need. Black patients historically had less access to care, so less was spent on them, and the system read lower spending as better health.
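Stripped down to invented numbers, the proxy problem looks like this: two equally sick patients, one of whom faced access barriers and therefore cost the system less.

```python
# Toy illustration of proxy-label bias: patients are scored by past
# *cost*, but cost is treated as if it measured health *need*.
# All numbers are invented.
severity = {"patient_a": 7, "patient_b": 7}            # equally sick
past_cost = {"patient_a": 12_000, "patient_b": 5_000}  # unequal access to care

# A cost-based score ranks by spending, not sickness.
priority = sorted(past_cost, key=past_cost.get, reverse=True)

print(f"True need:   {severity}")
print(f"Model ranks: {priority}")  # patient_b, equally sick, lands at the back of the line
```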
In Classrooms
Online proctoring tools, designed to detect cheating during remote exams, have struggled to track darker-skinned students — sometimes flagging them as suspicious simply because their faces aren’t lit well enough for the algorithm. Imagine being accused of cheating just because your skin tone makes you invisible to a camera!
The Bigger Picture
This isn’t just about bad software. It’s about who gets to define “normal.” Who gets to be seen. Who gets to be protected. When AI fails, it fails more often for people who already face discrimination.
Saying “AI can’t be racist, it’s just math” misses the point. Math can be clean. Data can’t. Behind every data point is a human decision — what to collect, who to include, what counts as success. And if we’re not careful, we’ll keep building systems that make injustice faster, cheaper, and harder to question.
We built AI to be smarter than us — to make better, fairer decisions. But it turns out, it’s only as fair as the people who train it.