AI-generated fake photographs, videos and voice cloning are improving at a rapid pace, and are perhaps becoming too convincing. Should we worry about this new phenomenon?
YoungMi Lamine | Ed Peter Traynor | 10 February 2021
Channel 4, Deepfake Queen: 2020 Alternative Christmas Message (2020) © Courtesy of the artist.
Why seeing is no longer believing
Manipulated media is nothing new: from early Photoshop jobs to the recent flood of deepfakes – videos that look and sound like the real thing – fakery has been around for some time. These technologies used to be easily detectable to the human eye, which made them a source of fun rather than a convincing replica.
Nowadays, the ability to synthesise moving images of humans has advanced so rapidly, and the quality has improved to such an extent, that it is no longer always possible to tell the fake from the real. Yet face-swap apps and the integration of deepfake technology into Snapchat confirm that these technologies are still mainly used for entertainment.
From Hollywood studios to political life: is our democracy in danger?
Hollywood has long been a brilliant incubator for the visual effects that support its stories: face-swapping appeared as early as Forrest Gump (1994), which composited Tom Hanks into archival footage. Such effects required frame-by-frame adjustment and a small fortune that only big production companies could afford.
If the news media is the fourth pillar of democracy, be it in print, radio, TV or the internet, then what is the impact of social media in spreading fake news?
For a long time, most deepfakes were overwhelmingly used to misrepresent female celebrities by grafting their faces onto the bodies of porn performers, or published as so-called “revenge porn”, usually by men seeking to humiliate ex-partners or other women. Until relatively recently, however, this form of cyberbullying attracted little concern.
Then came warnings of what synthetic media could do to markets: the (entirely genuine) video of Elon Musk smoking pot wiped billions off Tesla’s share price in a day, and Symantec reported voice-cloning scams in which executives were tricked into transferring millions of dollars – reportedly around $10m in total – to unknown perpetrators. These events raise the question: could plausible deepfakes shift stock prices, sway voters, or provoke religious tension or even nuclear war? Maybe.
In June 2019, Facebook refused to delete the fake Pelosi video tweeted by Trump and shared by his online supporters. A week later, Zuckerberg had his own deepfake problem. More recently, Facebook banned deepfake videos that are likely to mislead viewers into thinking someone “said words that they did not actually say” – but only during the run-up to the 2020 US election. It is important to note that this policy covers only misinformation produced using AI, meaning “shallow fakes” are still allowed on the platform. Shallow fakes do not use AI deep-learning technology; they are created with simple, easily accessible video-editing tools. Nonetheless, the results of shallow and deep fakes are often almost indistinguishable.
What solutions do we have for spotting counterfeits?
Ironically, one answer to the use of AI in creating deepfakes is the same technology that powers them: the Generative Adversarial Network (GAN). Two neural networks are trained against each other: one generates ever more realistic fakes, while the other learns to distinguish them from genuine material. Of course, to tell fake from real you need hours of reliable reference footage, which means these detectors work best for celebrities seeking to expose scams.
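The adversarial loop described above can be sketched in miniature. The toy example below is an illustration only – real GANs use neural networks trained by gradient descent, not the one-parameter “players” assumed here – but it shows the core dynamic: a generator gradually learns to imitate real data because a discriminator keeps punishing it for being distinguishable.

```python
import random

# Toy sketch of the adversarial idea behind a GAN.
# Assumption for illustration: "real footage" is just numbers near 4.0,
# the generator is a single mean, and the discriminator is a threshold.
random.seed(0)

real_mean = 4.0

def real_sample():
    return real_mean + random.gauss(0, 0.1)

gen_mean = 0.0        # generator's current guess at the real distribution
disc_boundary = 2.0   # discriminator calls anything above this "real"

for step in range(500):
    fake = gen_mean + random.gauss(0, 0.1)
    real = real_sample()
    # Discriminator update: move the boundary to sit between fake and real.
    disc_boundary += 0.05 * ((fake + real) / 2 - disc_boundary)
    # Generator update: if the fake was caught, nudge output past the boundary.
    if fake <= disc_boundary:
        gen_mean += 0.05 * (disc_boundary - fake)

# After training, the generator's output sits close to the real data (4.0),
# so the discriminator can no longer reliably tell them apart.
print(round(gen_mean, 1))
```

The same arms race is what makes detection hard in practice: any detector that works becomes a training signal for better fakes.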
Tech firms such as ZeroFox are now working on detection systems that aim to flag fakes wherever they appear, supported by machine learning. Another strategy focuses on the provenance of the media: digital watermarks are not foolproof, but a blockchain-style online ledger could hold a tamper-proof record of videos, pictures and audio, so that their origins and any manipulations can always be checked.
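The ledger idea can be sketched with ordinary cryptographic hashes. In this hypothetical example (function and field names are invented for illustration; a real system would replicate the ledger across many independent parties), each entry records a media file’s hash and links to the previous entry, so any later rewrite of the history breaks the chain:

```python
import hashlib
import json

def add_entry(ledger, media_hash, note):
    """Append a provenance record that chains to the previous one."""
    prev = ledger[-1]["entry_hash"] if ledger else "genesis"
    body = {"media_hash": media_hash, "note": note, "prev": prev}
    entry_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append({**body, "entry_hash": entry_hash})

def verify(ledger):
    """Recompute every hash; any edit to past entries breaks the chain."""
    prev = "genesis"
    for entry in ledger:
        body = {k: entry[k] for k in ("media_hash", "note", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

ledger = []
add_entry(ledger, hashlib.sha256(b"original video bytes").hexdigest(), "uploaded")
add_entry(ledger, hashlib.sha256(b"colour-graded video bytes").hexdigest(), "colour grade")

print(verify(ledger))             # True: the history is intact
ledger[0]["note"] = "tampered"    # silently rewrite history...
print(verify(ledger))             # False: tampering is detected
```

Crucially, a scheme like this cannot say whether a video is true – only whether its recorded history has been altered since it entered the ledger.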
Behind the Scenes: Dali Lives (2019) © Courtesy of the Dali Museum.
Are deepfakes always malicious?
Not always. Voice-cloning deepfakes can restore the voices of people who have lost them to disease. And in a recent, happier example, a hologram of her late father turned out to be the best birthday present Kim Kardashian could have received.
The Dali Museum in St. Petersburg, Florida, USA partnered with the agency Goodby Silverstein to create a groundbreaking artificial intelligence (AI) experience. “Dali Lives” gives museum visitors the opportunity to learn more about Dali’s life from the artist himself.
Do you want to test your skills? Here are some of the best deepfake tools.
- Lend Me Your Face – Go DeepFake Yourself! by Tamiko Thiel and /p
- DeepFaceLab – Best for: research purposes.
- Faceswap – Best for: training purposes.
- Deep Art Effects – Best for: creative use.
- REFACE – Best for: fans of GIFs and memes.
- Morphin – Best for: anyone who uses GIFs in their daily communication.
- Jiggy – Best for: anyone who doesn’t take themselves too seriously.