Improve your data literacy

Jim Stolze and Nicolas Lierman in conversation: The problem of biased algorithms

In this five-part series, two innovation heavyweights go head-to-head on the current and future state of affairs: AI entrepreneur and author Jim Stolze and MultiMinds’ Head of Innovation Nicolas Lierman have an in-depth conversation on innovation and technology. Part 3: Are algorithms biased? And how can we resolve that?

Many philosophers rejoice in the rise of AI: finally, they have something useful to think about, such as the ethical dilemmas surrounding the development of AI technology. And no wonder: AI poses massive ethical questions, and companies are turning to ethicists to deliberate on how this technology is developed. AI entrepreneur Jim Stolze and MultiMinds’ Nicolas Lierman discuss one of the most pressing ethical issues in AI today: biased algorithms.

Jim, in a contribution to the Dutch newspaper De Volkskrant, you argue that tech companies only have ethicists on their payroll for show. Could you elaborate on that?

Jim Stolze: “In the article, I compare AI to those trick mirrors that distort reality and make you look cartoonish. If AI training data isn’t handled carefully, algorithms distort reality in just the same way. AI companies are aware of this danger, but most of them are going about it all wrong. The ethical experts they hire are not involved from the start; they’re called in afterwards to tack on a superficial ethical afterthought. I call it ‘ethics washing’. They’re toothless watchdogs.”

So what would be a better approach to ethical reflection in the digital world?

Jim: “I think we need independent supervision to challenge the companies. Who collected the data? What’s the goal of the algorithm? Who built it, and did they do a thorough job with the data? These questions are critical and should be judged by independent experts.”

The ethicists are not at the drawing board in AI companies. I call it ethics washing. They're toothless watchdogs. - Jim Stolze

Nicolas, do you agree that we need independent ethical supervision?

Nicolas Lierman: “I agree with Jim that ethics in big tech is often more about PR than about genuine reflection. But I don’t think an external committee will solve the problem. The issues are rooted in the fact that the people making the algorithms are a very homogeneous group: white, heterosexual men.”


“This is a problem in engineering in general, but it’s amplified dramatically in AI. Adding another ethics committee drawn from that same homogeneous group won’t change much. I believe the answer lies in promoting diversity within tech companies. We’re fighting hard for the inclusion of minorities almost everywhere, but the tech world is still lagging behind.”

The issues are rooted in the fact that the people making the algorithms are a very homogeneous group: white, heterosexual men. - Nicolas Lierman

Jim: “It’s true that algorithms often reflect society, problems and biases included. Diversity is certainly an issue; that is exactly the point of the mirror metaphor. One example illustrates this perfectly. A university recently experimented with a facial recognition algorithm to grant access to its buildings. The system worked perfectly … for white people.”


“There was one black professor who couldn’t get in. It turns out they had trained the algorithm on loads of white faces and little else, so it could only recognise white people reliably. There are plenty of similar examples, such as AI chatbots making racist or sexist remarks because that’s what they pick up from human interactions.”
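
The broader lesson is that an overall accuracy score can hide exactly this kind of failure: a model that works for the majority group looks fine on average while failing the minority it barely saw during training. The only way to catch it is to break the evaluation down per group. Below is a minimal sketch of that check in plain Python; the group labels, outcomes and numbers are hypothetical, not taken from the university case.

```python
# A minimal sketch of a per-group evaluation, in plain Python.
# The data and numbers below are purely illustrative.
from collections import defaultdict

def accuracy_per_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical face-verification outcomes: one global accuracy of 89%
# hides a large gap between the two groups.
records = (
    [("white", "match", "match")] * 98 + [("white", "no_match", "match")] * 2
    + [("black", "match", "match")] * 80 + [("black", "no_match", "match")] * 20
)
print(accuracy_per_group(records))  # {'white': 0.98, 'black': 0.8}
```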

Biases in our data should be an opportunity to reflect. You shouldn't get mad at the mirror for having a bad hair day. - Jim Stolze

Nicolas: “I recently saw another example: a vision API that was used to describe the profile pictures on resumes. The most common description for men was ‘professional’; for women, it was ‘smiling’. The engineers recreate their own worldview in the algorithms.”

Don’t these biases also teach us something about ourselves?

Jim: “Precisely. We should use this to our advantage: we are confronted with biases that we may not have been aware of. If we see a bias in our data, it should be an opportunity to reflect. Amazon’s HR department used an algorithm to help preselect candidates based on their resumes. It turned out the algorithm was biased against women.”


“What did Amazon do? They just decided to kill the project. Such a missed opportunity! What they should have done was go back to the drawing board and figure out why the algorithm favoured resumes of male candidates over those of female candidates. It’s a learning process. You shouldn’t get mad at the mirror for having a bad hair day. Fix the problem instead!”
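
Going back to the drawing board usually starts with a simple audit of the model’s output: compare how often candidates from each group are shortlisted, and treat a large gap as a signal to dig into the training data and features. The sketch below is a hypothetical illustration of such a selection-rate check, not a reconstruction of Amazon’s system; the numbers and the 0.8 threshold (the “four-fifths rule” used in US hiring guidelines) are only there to show the idea.

```python
# A minimal selection-rate audit: how often does the model shortlist
# candidates from each group? The data below is hypothetical.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, shortlisted) pairs, shortlisted a bool."""
    totals = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, shortlisted in outcomes if shortlisted)
    return {group: selected[group] / totals[group] for group in totals}

outcomes = (
    [("male", True)] * 300 + [("male", False)] * 700
    + [("female", True)] * 150 + [("female", False)] * 850
)

rates = selection_rates(outcomes)
print(rates)  # {'male': 0.3, 'female': 0.15}

# Four-fifths rule: flag the model if one group's rate falls below
# 80% of another group's rate -- a cue to investigate, not to kill the project.
print(rates["female"] / rates["male"] >= 0.8)  # False -> investigate why
```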