Zachary Tidler, Graduate Student in the College of Sciences
Zachary Tidler credits Ellen DeGeneres for his first exposure to deepfakes, or videos that use artificial intelligence to replace the likeness of one person with another. About five years ago, the talk show host posted a video of Pope Francis pulling the cloth off an altar, leaving everything on top still standing. And Tidler bought it.
While the Pope Francis deepfake was lighthearted, there are plenty of examples of the darker side of the technology. The faces of celebrities have been superimposed onto the bodies of porn performers, and politicians have been shown in videos saying things they never said.
To make a deepfake video, a creator first trains a neural network on many hours of real footage of the person, then combines the trained network with computer-graphics techniques to superimpose a copy of the person’s face onto a different actor. (This is how the late actor Paul Walker appeared in Fast & Furious 7.) Deepfake technology was originally accessible only to computer scientists with advanced expertise, but Tidler says it has since been packaged so that anyone with a moderately powerful computer can make a video, putting the ability to manipulate others and spread false information in far more hands.
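Many face-swap tools implement the pipeline described above as an autoencoder with a shared encoder and one decoder per identity: the encoder learns a compact representation of any face, each decoder learns to reconstruct one specific person, and the swap happens by decoding one person’s frames with the other person’s decoder. The sketch below is a deliberately toy, linear stand-in for that idea, not Tidler’s work or any real deepfake tool; all names, sizes, and the random “footage” are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8  # flattened face-crop size and bottleneck size (toy values)

# Toy "footage": random vectors standing in for aligned face crops of two people.
faces_a = rng.normal(size=(200, DIM))
faces_b = rng.normal(size=(200, DIM))

# One linear encoder shared by both identities, plus one decoder each.
W_enc = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

def train_step(X, W_dec, lr=1e-3):
    """One gradient step on the reconstruction error ||X @ W_enc @ W_dec - X||^2."""
    global W_enc
    Z = X @ W_enc          # encode a batch of frames
    err = Z @ W_dec - X    # reconstruction residual
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

for _ in range(2000):
    train_step(faces_a, W_dec_a)  # decoder A learns to reconstruct person A
    train_step(faces_b, W_dec_b)  # decoder B learns to reconstruct person B

# The "swap" step: encode a frame of person B, decode with person A's decoder,
# producing person A's appearance driven by person B's frame.
frame_b = faces_b[0]
swapped = (frame_b @ W_enc) @ W_dec_a
```

Real tools replace the linear maps with deep convolutional networks and add alignment, blending, and color correction, but the swap trick itself (encode with the shared encoder, decode with the target identity’s decoder) is the same.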
And many will believe that misinformation. Tidler conducted research for his master’s thesis on who is most susceptible to believing deepfakes. He found a strong correlation between affect detection ability—a person’s ability to read cues in another person’s eyes, face, or body language to determine how that person is feeling—and deepfake detection ability. “If you’re bad at spotting emotions in people’s faces, you’re more likely to be bad at spotting a deepfake video,” says Tidler.
The computer science community is trying to combat the problem. Last year, social media platforms including Facebook and Twitter banned deepfakes from their networks. Tidler says Microsoft has released a tool that scores a video on how likely it is to be a deepfake, but that hasn’t completely solved the problem.
“It becomes something of an arms race because the deepfake networks and algorithms get a little better, and then the algorithms and neural networks trying to identify deepfakes get a little better, and so on,” Tidler explains.