It’s 3 a.m., and to no surprise, I’m still watching TikTok. As I scroll through videos, the familiar grin of Tom Cruise fills my screen.
"I didn’t know he was on TikTok," I think to myself. "Good for him."
Then I notice the username: @deeptomcruise. Puzzled, I stare as my brain struggles to recognize the account as artificial. The comment section shares my disbelief, brimming with both awe and terror.
Has deepfake technology really evolved this far? I ask myself.
Yes. Yes it has.
What is a deepfake, anyway?
The term "deepfake" refers to media synthesized or altered by deep learning, a branch of machine learning. In Cruise's case, a deepfake algorithm was fed hundreds of images of him so it could map his face onto someone else's body. However, this arduous process may not be necessary for long.
What magic lies behind the scenes?
The generative adversarial network, or GAN, is the architecture likely to become the predominant framework for hyper-realistic deepfakes going forward. Computers, or more specifically neural networks, use it to make a phony Tom Cruise look a lot more convincing.
Within a GAN, two neural networks, the generator and the discriminator, wage war against one another to prove which is the superior technological being. (It’s totally as cool as it sounds.)
The job of the generator model is to produce fake media modeled on an existing dataset, like a collection of photos, convincing enough to fool the discriminator model into believing it’s the real deal. If it fails, the generator’s parameters are updated and it tries again, while the discriminator prevails.
However, if the generator tricks the discriminator enough times, it can reach the exciting "convergence."
This is when the discriminator can no longer tell whether the generator is lying — at which point it is cast aside, and the generator gets the green light to start creating bona fide deepfakes.
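The tug-of-war above can be sketched in miniature. This is a toy, not a real deepfake model: instead of images, the "data" here is just numbers clustered around a made-up value, and both networks are shrunk to one or two parameters each so the adversarial loop is visible at a glance. All names and numbers are invented for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

REAL_MEAN = 4.0          # toy "real data": numbers clustered around 4.0

# Generator: turns random noise z into a fake sample, x_fake = a*z + b.
# It starts far from the real data (b = 0).
a, b = 1.0, 0.0

# Discriminator: scores a sample as "real" with probability sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, steps, batch = 0.02, 4000, 16

for _ in range(steps):
    # --- Discriminator turn: push d(real) toward 1 and d(fake) toward 0 ---
    gw = gc = 0.0
    for _ in range(batch):
        x_real = REAL_MEAN + random.gauss(0.0, 0.1)
        x_fake = a * random.uniform(-1, 1) + b
        d_real = sigmoid(w * x_real + c)
        d_fake = sigmoid(w * x_fake + c)
        # gradients of the loss -log d(real) - log(1 - d(fake))
        gw += -(1 - d_real) * x_real + d_fake * x_fake
        gc += -(1 - d_real) + d_fake
    w -= lr * gw / batch
    c -= lr * gc / batch

    # --- Generator turn: adjust a, b so the discriminator calls fakes real ---
    ga = gb = 0.0
    for _ in range(batch):
        z = random.uniform(-1, 1)
        d_fake = sigmoid(w * (a * z + b) + c)
        # gradients of the loss -log d(fake)
        ga += -(1 - d_fake) * w * z
        gb += -(1 - d_fake) * w
    a -= lr * ga / batch
    b -= lr * gb / batch

# After training, the generator's output should have drifted toward the
# real data; near convergence the discriminator hovers around 50/50.
fake_mean = sum(a * random.uniform(-1, 1) + b for _ in range(1000)) / 1000
```

Even this two-parameter version shows the dynamic from the article: the discriminator's improvements are exactly what teach the generator where the real data lives, and the loop only settles down once the discriminator is reduced to guessing.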
Why should I care?
Although humorous in their creation and occasionally in their application, deepfake generator models have the potential to destroy the lives of politicians, celebrities and even ordinary individuals like you and me.
By utilizing GAN architecture, deepfakes can become realistic enough to be imperceptible even to a trained eye. The term “fake news” could take on a darker twist, incorporating videos, photos or soundbites that fundamentally alter how unknowing viewers perceive famous figures.
People love fake narratives. Researchers at MIT found that tweets containing false details were 70 percent more likely to be retweeted than those containing genuine information. Even if an individual can recognize a deepfake as fake, they may still distribute it purely for its shock value to less observant viewers. Examples of this phenomenon are already emerging.
In 2018, Emma González, a survivor of the Parkland shooting and gun control advocate, was a victim of a deepfake alteration. In a video that accompanied her article in Teen Vogue, Emma ripped up a shooting target. However, a deepfake video soon surfaced, depicting her tearing up the U.S. Constitution instead. This video was circulated across social media, eliciting hate from offended viewers.
This month, three teenage cheerleaders in Pennsylvania were allegedly harassed by the mother of a teammate who created deepfakes depicting them drinking, vaping and nude. This incident highlights several major categories of abuse that can stem from deepfake technology: reputational sabotage, opportunities for exploitation and the creation of nonconsensual pornography.
In fact, around 96 percent of the deepfakes currently online are pornographic. This raises the question of what legal recourse is available to individuals whose identities have been stolen and sexually violated.
What can be done?
Lawmakers often prefer to act retrospectively rather than preemptively concerning the policies governing their citizens.
Only three states — California, Virginia and Texas — have any laws in place enacting penalties for distributing deepfakes, and the Texas law only pertains to deepfakes that could prove harmful to election integrity, similar to the current federal legislation. California and Virginia’s laws allow victims of non-consensual pornographic deepfakes to press charges against those responsible for their distribution.
Several other states are working to implement similar statutes; however, such legislation is already overdue.
Section 230 of the Communications Decency Act allows websites to operate without liability for hosting damaging content as long as it doesn’t violate federal law. Created to prevent censorship and private regulation, Section 230 undermines deepfake prosecution, as platforms are not incentivized to remove such content. Major companies like Facebook and Google have been petitioned to update their Terms of Service agreements to properly combat deepfake media.
However, they have only gone so far as to label deepfake videos with a disclaimer. This policy fails to address issues with deepfake detection, as generator models create more complex deepfakes each year.
If lawmakers and platforms fail to properly address the threat of deepfake media, we as internet users are left to fend against a neural network that grows more convincing every day.
In many ways, we are the discriminator model, deciphering between what is real and fake. Here’s to hoping we never reach convergence.