I warned about AI bias 7 years ago. Did we learn nothing?

Spoiler warning

Seven years ago at PyData Barcelona, in a talk called 'Despicable Machines', I warned about the dangers of "mathwashing" - how we use algorithms and data science to create an illusion of objectivity while quietly encoding human biases into automated systems.

Looking at today's AI landscape, I'm struck by how little has changed. In fact, things have accelerated exactly as I feared. I was far from the only one sounding the alarm, but it feels bleak to see how conveniently our voices have been ignored.

The rise of AI decision-making

Back in 2017, I cautioned that we were moving from machines as "assistants in decision-making" to machines as actual "decision-makers." And we didn't even have GenAI. Today, algorithms don't just suggest - they determine who gets loans, which neighborhoods receive police patrols, and even who gets hired.

The opacity problem has only intensified. Modern LLMs and diffusion models are orders of magnitude more complex than the neural nets I discussed in 2017, making them virtually impenetrable black boxes. We have an inkling of what they might be doing, but no real understanding. Yet our trust in their outputs has paradoxically increased, not decreased.

The feedback loops I warned about - where biased systems create data that reinforces their bias - are now running at industrial scale. Recommendation algorithms shape our information diet, which shapes future training data, which shapes future algorithms in an endless cycle.
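To make that mechanism concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration: two districts with identical underlying incident rates, a recording process that only captures what patrols are present to observe, and a hypothetical "focus on hotspots" rule that allocates next year's patrols super-proportionally to recorded incidents. The point is not realism but the shape of the loop: a small initial tilt compounds year after year, even though nothing about the districts actually differs.

```python
# Toy feedback-loop simulation. All numbers and the allocation rule are
# invented assumptions, purely to illustrate how biased data can reinforce itself.

TRUE_RATE = 0.10                      # identical real incident rate in both districts
patrols = {"A": 55.0, "B": 45.0}      # slightly uneven starting allocation
TOTAL_PATROLS = 100.0

for year in range(10):
    # Recorded incidents depend on presence: you only record what you observe.
    recorded = {d: patrols[d] * TRUE_RATE for d in patrols}

    # Next year's allocation "focuses on hotspots": recorded counts are weighted
    # super-proportionally (squared here, as a modelling assumption).
    weights = {d: recorded[d] ** 2 for d in recorded}
    total_weight = sum(weights.values())
    patrols = {d: TOTAL_PATROLS * w / total_weight for d, w in weights.items()}

    summary = {d: round(p, 1) for d, p in patrols.items()}
    print(f"year {year + 1}: recorded={ {d: round(r, 2) for d, r in recorded.items()} } "
          f"-> next patrols={summary}")
```

Run it and district A ends up with nearly all the patrols within a decade, not because it is more dangerous, but because the system keeps feeding on the data its own attention produced.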

The dangerous fantasy of predictive policing

When I brought up research claiming to detect criminality from faces, it was a fringe example. Now it is entrenched around the world, and similar systems make consequential decisions daily across industries, often with minimal oversight and hidden from public view.

Two Guardian reports by Vikram Dodd capture this alarming trajectory perfectly. First, Amnesty International condemned predictive policing as "automated racism," documenting how these systems disproportionately targeted Black people: 3.6 times more often than white people in Basildon. One victim, stopped 50 times, developed PTSD from the police harassment. Officials dismissed the concerns, claiming it was simply about "maximizing finite resources."

Now, the UK government has taken the next chilling step: developing a "murder prediction" tool (unfortunately the quotes reflect their words, not mine), rebranded as "sharing data to improve risk assessment," a change that shows they're aware of how dystopian it sounds. This program analyzes mental health data, addiction history, and even victim information to flag potential killers. The UK public only learned about this through Freedom of Information requests, so you can imagine what other systems must be operating in secret.

This is the dystopian future I warned about, hidden behind bureaucratic language and implemented without public debate. And it won't go away: law enforcement keeps searching for technological silver bullets to make their jobs easier, while turning all citizens into potential suspects and ignoring any data, person, or NGO that says otherwise.

From predictive geographic hotspots to risk scores for individuals, and now to "murder prediction," we're sliding down the very slippery slope I foresaw years ago, and each step normalizes the next.

Code is never neutral

My call to action remains unchanged: take responsibility for what you are building. Your model is biased somewhere, and it will have real impacts on human lives.

When we create these systems, we're not just coding. We're reshaping society, often for the worse. All of us should ask harder questions of ourselves: Who might be harmed? What biases might be amplified? What societal shifts might this enable?

The tools have evolved dramatically, but our ethical responsibilities remain the same. Please take a stand before we create a nightmare that not even Orwell or Minority Report envisioned.