The AI Diversity Problem: How to Build Algorithms for Everyone
When Harper Reed gave a speech on AI at a SAS conference last week, his main thesis was about using AI for good: allowing AI to make critical decisions with full autonomy carries massive ethical implications. He used the story of a Chinese start-up to illustrate what can happen when teams neglect to consider diversity and ethics.
The start-up was using AI to predict a person's age from a single image of their face. The team was very excited about the technology and boasted that it had an accuracy of over 90%. They started their demo with the founders' faces, and sure enough, the AI predicted their ages perfectly. Next, they passed it over to Harper, and the AI predicted his age to be 140 years old. He tried again and got the same result.
The founders were so confused! Why did it work well in testing but not during the demo? After asking some questions about their data set, Harper found that the training set contained only images of Asian males in their mid-twenties. The training set perfectly represented the co-founders because the images came from friends and people in their own network. As a result, the AI worked really well for young Asian males, and terribly for the red-haired Harper Reed.
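This failure mode is easy to reproduce in miniature. The sketch below uses a hypothetical 1-nearest-neighbour "age predictor" with made-up two-dimensional features; it is an illustration of the general principle, not the start-up's actual system. Because every training example comes from one tight cluster with mid-twenties ages, the model can only ever echo back the ages it has seen, no matter how far a query falls outside its training distribution.

```python
# Hypothetical sketch: a 1-nearest-neighbour "age predictor" trained on a
# narrow dataset. The feature vectors and ages are invented for illustration.

def predict_age(training_set, features):
    """Return the age of the closest training example (1-NN)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_set, key=lambda ex: sq_dist(ex["features"], features))
    return nearest["age"]

# Training set drawn only from the founders' own network: every example sits
# in one small region of feature space, and every age is mid-twenties.
training_set = [
    {"features": (0.10, 0.20), "age": 24},
    {"features": (0.15, 0.25), "age": 26},
    {"features": (0.12, 0.22), "age": 25},
]

# A query near the training cluster looks impressively accurate...
print(predict_age(training_set, (0.11, 0.21)))

# ...but a query far outside the cluster still snaps to one of the only ages
# the model has ever seen -- it has no way to say "I don't know".
print(predict_age(training_set, (0.90, 0.80)))
```

The demo-day trap is that accuracy measured on held-out data from the *same* narrow distribution (the first query) says nothing about behaviour on faces the model has never seen anything like (the second).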
There are many examples of AI outputs being misogynistic or racist. Joy Buolamwini, a researcher at the MIT Media Lab, has been fighting bias in algorithms since 2015. While experimenting with AI facial-recognition software, she noticed that it was not working for her. She initially assumed the technology was still in its infancy and would have some bugs, but her white colleagues did not have any issues. When Joy put on a white mask (one that looks nothing like a human face), the software detected it immediately.
Facial recognition for iPhones is one thing, but imagine if the recognition software of a self-driving car were more likely to hit Black pedestrians than white ones. Joy created the Algorithmic Justice League to help companies and researchers avoid building bias into their algorithms and to ensure that AI isn't unfairly benefiting only those who are represented in its datasets.
The examples above do not show that AI researchers are inherently racist, building solutions only for people who look like them. Rather, they open up a conversation about how our unconscious biases can lead us to create programs and algorithms that serve only those who are like us. Training datasets must be built to represent a diverse group of people, and building diverse datasets requires diverse AI and machine learning teams.
Diversity is a complex and increasingly political subject. Companies are aware of the benefits of diversity, but often fall short on their commitments. When it comes to AI, diversity is not optional; it's a requirement. The product will not work, and companies will not succeed, if their algorithms are biased against a large part of the customer base.
In the examples above, the algorithms fail visibly for people of different skin tones. With that feedback in hand, we need leaders in all fields of AI to prioritize diversity and ensure that their organizations are focused on building a fair and free society for all.