Ethics in Tech: Artificial Intelligence

[Image: a human looking at a target with binary numbers in the background]

Jason Borenstein, Director of Graduate Research Ethics Programs and Associate Director of the Center for Ethics and Technology at Georgia Tech

Artificial intelligence can be used to help humans make fairer decisions and reduce the impact of human biases—but what comes out is only as good as what goes in. So what happens when the algorithms that are used in AI systems are biased?

In 2016, a study found that a criminal justice algorithm used in Florida mislabeled Black defendants as “high-risk” at almost twice the rate it mislabeled white defendants. More recent studies show that facial recognition software identifies white faces more accurately than the faces of people of color.
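
To make that statistic concrete, here is a minimal sketch of how such a mislabeling gap is typically measured: the false positive rate, computed separately per group. The records, group labels, and numbers below are invented for illustration and are not from the study.

```python
# Minimal sketch: measuring a group-wise mislabeling gap of the kind the
# 2016 study reported. All records are invented; group labels "A" and "B"
# are hypothetical placeholders.

def false_positive_rate(predictions, outcomes):
    """Share of people who did not reoffend (outcome == 0) but were
    still labeled high-risk (prediction == 1)."""
    negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Toy records: (group, model_prediction, actual_outcome)
records = [
    ("A", 1, 0), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

for group in ("A", "B"):
    preds = [p for g, p, _ in records if g == group]
    outs = [o for g, _, o in records if g == group]
    print(f"Group {group}: false positive rate = {false_positive_rate(preds, outs):.0%}")
```

On this toy data, group A is falsely labeled high-risk at twice group B’s rate—the kind of gap the study flagged.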

According to Jason Borenstein, who serves as the director of Graduate Research Ethics Programs and associate director of the Center for Ethics and Technology at Georgia Tech, AI systems learn to make decisions based on training data, and that data can be flawed, with certain groups over- or underrepresented. But biases can also come from the people who design or train the systems and create algorithms that reflect unintended prejudices. That’s why Borenstein believes that diversifying the field of AI could make a difference.
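
One simple illustration of the sampling problem Borenstein describes is to compare each group’s share of the training data to its share of the population the system will serve. The group names and all numbers below are invented for the sake of the sketch.

```python
# Hypothetical sketch: checking a training set for over- or
# under-representation relative to a reference population.
# Group names and all numbers are invented for illustration.

from collections import Counter

training_labels = ["A"] * 800 + ["B"] * 150 + ["C"] * 50   # toy training set
population_share = {"A": 0.60, "B": 0.25, "C": 0.15}       # assumed reference shares

counts = Counter(training_labels)
total = sum(counts.values())

for group, target in population_share.items():
    actual = counts[group] / total
    status = "over-represented" if actual > target else "under-represented"
    print(f"{group}: {actual:.0%} of training data vs. {target:.0%} of population -> {status}")
```

In this toy example, group A dominates the training set while groups B and C fall short of their population shares, so a model trained on it would see far fewer examples from those groups.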

“AI needs to be included in the broader diversity efforts that are happening across the country,” he says. “A more diverse community would be better able to spot bias.”

But ridding AI of biases doesn’t negate the need for humans to step in sometimes.

“We assume that technology can make better decisions than people,” Borenstein says. “But we can still be good at some decisions, which is why it’s important for humans to be involved in interpreting data.”

Besides bias, there are other ethical conversations surrounding AI: The more sophisticated the technology, the more AI systems “learn,” and the more unpredictable they become. Borenstein also brings up what’s known as the Black Box problem: We know a piece of technology works, but we don’t necessarily know how it works. Should we rely on technologies we can’t understand? And finally, as AI becomes more deeply integrated into our society, how will that affect our interactions? Will people want to interact with technology more than with one another?