Internet users must be more discerning when it comes to the videos they watch and share online, especially in the era of deepfakes.
A portmanteau of the terms “deep learning” and “fake,” deepfakes are videos manipulated using Artificial Intelligence (AI) in order to make it appear as if someone is saying or doing something they never actually did.
Also called “high profile edits,” deepfakes first entered the public sphere in 2017, when several users of the online platform Reddit uploaded manipulated videos that swapped the faces of adult entertainers with those of Hollywood celebrities.
Reporter Samantha Cole, the first to publish a piece on the subject after an AI-manipulated adult video appearing to star Hollywood actress Gal Gadot surfaced in 2017, said deepfake creators used open-source machine learning tools, such as Keras and TensorFlow, to make the videos.
According to Cole, these tools are made freely available by Google for “…researchers, graduate students and anyone with an interest in machine learning.”
Writer Ian Sample detailed the process of creating a typical face-swap deepfake:
First, a deepfake creator runs thousands of face shots of two people through an AI algorithm called an encoder. The encoder finds and learns the similarities between the two faces and reduces the images to their shared common features. A second AI algorithm called a decoder is then taught to recover the faces from these compressed images. Because the two faces are different, the creator trains one decoder to recover the first person’s face and another decoder to recover the second person’s face. To perform the face swap, the creator feeds encoded images into the “wrong” decoder: a compressed image of Person A’s face goes into the decoder trained on Person B. The decoder then reconstructs Person B’s face with the facial expressions and physical orientation of Person A. Depending on the complexity of the clip and the hardware and software being used, the process can take anywhere from a few hours to a couple of days.
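The shared-encoder, two-decoder scheme described above can be illustrated in miniature. The toy sketch below is only an assumption-laden illustration, not an actual deepfake pipeline: plain linear maps stand in for the neural networks, random vectors stand in for face images, and all names and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: each row is a flattened face image (here, random vectors).
faces_a = rng.normal(size=(200, 64))   # shots of Person A
faces_b = rng.normal(size=(200, 64))   # shots of Person B

# Shared encoder: a fixed random projection to a low-dimensional latent space.
# (A real pipeline learns the encoder jointly with the decoders.)
W_enc = rng.normal(size=(64, 16))

def encode(x):
    return x @ W_enc

# Train one linear decoder per person via least squares: each decoder learns
# to reconstruct that person's faces from the shared compressed features.
W_dec_a, *_ = np.linalg.lstsq(encode(faces_a), faces_a, rcond=None)
W_dec_b, *_ = np.linalg.lstsq(encode(faces_b), faces_b, rcond=None)

# The swap: feed Person A's compressed features into the "wrong" decoder,
# the one trained on Person B.
latent_a = encode(faces_a[:1])
swapped = latent_a @ W_dec_b          # "B's face" with A's pose and expression
reconstructed = latent_a @ W_dec_a    # ordinary reconstruction of A

print(swapped.shape)
```

The key design point survives even in this caricature: because the encoder is shared, the compressed features capture pose and expression common to both identities, while each decoder supplies one person's appearance.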
The practice of generating deepfakes went into overdrive in 2019, when several creators uploaded manipulated videos mocking popular television shows, celebrities and even political figures. This uptick in deepfake creation eventually led to action from the U.S. House of Representatives, whose members held an unprecedented hearing amid concerns that the new technology could be exploited by bad actors to spread deliberate misinformation and thus potentially threaten national security.
Researchers, however, are now finding ways to push back.
One team of researchers developed its own AI with support from Google, Microsoft and the Defense Advanced Research Projects Agency (DARPA), training it to pore over a subject’s identifying traits and mannerisms (including the subtle way a person tilts their head or moves their mouth) in order to develop what the team calls a “soft biometric” profile.
“To contend with this growing threat, we describe a forensic technique that models facial expressions and movements that typify an individual’s speaking pattern. Although not visually apparent, these correlations are often violated by the nature of how deep-fake videos are created and can, therefore, be used for authentication,” the researchers said in their paper.
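The idea of modeling how an individual’s facial movements correlate while speaking can be sketched in miniature. The sketch below is a toy under stated assumptions, not the researchers’ actual method: synthetic signals stand in for tracked facial features, and the function names, distance measure and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation_profile(signals):
    """Reduce facial-movement signals (rows = tracked features such as head
    tilt or mouth motion, columns = video frames) to the pairwise
    correlations between them: a crude 'soft biometric' fingerprint."""
    corr = np.corrcoef(signals)
    return corr[np.triu_indices_from(corr, k=1)]

def looks_fake(profile, reference, threshold=0.5):
    """Flag a clip whose movement correlations stray far from the subject's
    reference profile."""
    return np.linalg.norm(profile - reference) > threshold

# Synthetic demo: in genuine footage the subject's movements co-vary;
# in the fake they are statistically independent.
base = rng.normal(size=300)
genuine = base + 0.1 * rng.normal(size=(4, 300))   # correlated signals
fake = rng.normal(size=(4, 300))                   # uncorrelated signals

reference = correlation_profile(base + 0.1 * rng.normal(size=(4, 300)))
print(looks_fake(correlation_profile(genuine), reference))  # False
print(looks_fake(correlation_profile(fake), reference))     # True
```

This mirrors the paper’s premise as quoted above: a face-swapped clip can look right frame by frame yet still break the statistical relationships between a real person’s movements.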
The researchers report early success in spotting fake videos: in the tests described in their paper, the AI identified deepfakes with 92 percent accuracy.
Among the deepfakes the AI identified were those of President Trump, Hillary Clinton and Senators Bernie Sanders and Elizabeth Warren.
In addition, the algorithm even identified fake videos with degraded image quality due to high compression.
Want to try your hand at spotting deepfakes without using software? Here’s a checklist of common telltale signs to look out for:
- Unnatural or infrequent blinking
- Blurring or flickering around the edges of the face
- Lighting or skin tone that doesn’t match the rest of the scene
- Lip movements that are out of sync with the audio
For more stories on potentially dangerous technology, visit Glitch.news.