A Deepfake Deep Dive into the Murky World of Digital Imitation

Lindsey O'Donnell | threatpost.com | Oct. 15, 2019

About a year ago, top deepfake artist Hao Li came to a disturbing realization: deepfakes – the technique of using artificial intelligence (AI) to synthesize human images and create fake content – are rapidly evolving. In fact, Li believes that in as soon as six months, deepfake videos will be completely undetectable. That's spurring security and privacy concerns as the AI behind the technology becomes commercialized – and gets into the hands of malicious actors.

Li, for his part, has seen the positives of the technology as a pioneering computer graphics and vision researcher, particularly for entertainment. He has worked his magic on various high-profile deepfake applications – from leading the charge in putting Paul Walker into _Furious 7_ after the actor died before the film finished production, to creating the facial-animation technology that Apple now uses in its Animoji feature in the iPhone X.

But now, "I believe it will soon be a point where it isn't possible to detect if videos are fake or not," Li told Threatpost. "We started having serious conversations in the research space about how to address this and discuss the ethics around deepfake and the consequences."

The security world, too, is wondering about its role as deepfakes pop up again and again in viral online videos and on social media. Over the past year, security stalwarts and lawmakers have said that the internet needs a plan to deal with various malicious applications of deepfake video and audio – from scams, to misinformation online, to the privacy of the footage itself. Questions have arisen, such as whether firms like Facebook and Reddit are prepared to stomp out imminent malicious deepfakes – used to spread misinformation or to create nonconsensual pornographic videos, for instance.

And while awareness of the issues is spreading, and the tech world is coalescing around better detection methods for deepfakes, Li and other deepfake experts think that it may be virtually impossible to quell malicious applications of the technology.

How Does Deepfake Tech Work?

Deepfakes can be applied in various ways – from swapping a new face onto video footage of someone else (as seen in a Vladimir Putin deepfake created by Li), to creating deepfake audio that imitates someone's voice to a tee. The latter was seen in a recently developed replica of popular podcaster Joe Rogan's voice, created using a text-to-speech deep learning system, which made Rogan's fake "voice" talk about how he was sponsoring a hockey team made of chimpanzees.

At a high level, both audio and video deepfakes use a technology called "generative adversarial networks" (GANs), which consists of two machine-learning models. One model, the generator, leverages a dataset to create fake footage, while the other, the discriminator, attempts to detect the fakes. The two are trained against each other until the discriminator can no longer tell real from fake.

GAN diagram (credit: Jonathan Hui)
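That adversarial loop can be sketched in miniature. The toy below is purely illustrative (a one-dimensional "dataset," a one-parameter generator and a logistic-regression discriminator are assumptions for the sketch – real deepfake systems use deep neural networks on images or audio), but the training dynamic is the same: the discriminator learns to separate real from fake, and the generator chases whatever the discriminator scores as "real."

```python
import math
import random

def sigmoid(s):
    # Numerically stable logistic function.
    if s >= 0:
        return 1.0 / (1.0 + math.exp(-s))
    e = math.exp(s)
    return e / (1.0 + e)

def train_toy_gan(steps=3000, lr_d=0.05, lr_g=0.05, seed=0):
    """Minimal 1-D GAN. Real samples cluster around 4.0; the generator
    g(z) = mu + z learns to shift its output toward them, while the
    discriminator D(x) = sigmoid(w*x + b) learns to tell them apart."""
    rng = random.Random(seed)
    mu = 0.0            # generator parameter (starts far from the data)
    w, b = 0.0, 0.0     # discriminator parameters
    trace = []
    for _ in range(steps):
        x_real = 4.0 + rng.uniform(-0.5, 0.5)
        x_fake = mu + rng.uniform(-0.5, 0.5)

        # Discriminator step: ascend log D(real) + log(1 - D(fake)).
        d_real = sigmoid(w * x_real + b)
        d_fake = sigmoid(w * x_fake + b)
        w += lr_d * ((1 - d_real) * x_real - d_fake * x_fake)
        b += lr_d * ((1 - d_real) - d_fake)

        # Generator step: ascend log D(fake) (non-saturating loss),
        # i.e. nudge mu toward where the discriminator scores "real."
        x_fake = mu + rng.uniform(-0.5, 0.5)
        mu += lr_g * (1 - sigmoid(w * x_fake + b)) * w

        trace.append(mu)
    return mu, trace
```

After a few thousand alternating updates the generator's output distribution ends up hovering around the real data's center near 4.0 – the point where the discriminator can no longer reliably separate the two.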

GANs were first introduced in a 2014 paper by Ian Goodfellow and fellow researchers at the University of Montreal. The concept was hailed as useful for various applications, such as sharpening astronomical images or helping video-game developers improve the quality of their games.

While video manipulation has been around for years, the machine-learning and artificial-intelligence tools behind GANs have brought a new level of realism to deepfake footage. For instance, older deepfake applications (such as FakeApp, a proprietary desktop application launched in 2018) required hundreds of input images for a face swap to be synthesized; newer technologies enable products – such as the deepfake face-swapping app Zao – to utilize just one image.

"The technology became more democratized after…video-driven manipulations were re-introduced to show fun, real-time [deepfake] applications that were intended to make people smile," said Li.

Security Issues

From a security perspective, there are various malicious actions that attackers could leverage deepfakes for – particularly around identity authentication.

"Deepfakes are becoming one of the biggest threats to global stability, in terms of fake news as well as serious cyber risks," Joseph Carson, chief security scientist with Thycotic, told Threatpost. "Deepfakes are getting to the point that any digital audio or video online can be questioned on its authenticity and integrity, and can be used to not only steal the online identity of a victim but now the voice and face. Identity theft has now entered a new phase."

The ability to simulate someone's image and behavior could be used by spam callers impersonating victims' family members to obtain personal information, or by criminals impersonating a government official to gain entrance to high-security areas.

Already, an audio deepfake of a CEO's voice has fooled a company into making a $243,000 wire transfer, in the first known case of a successful financial scam via audio deepfake.

But even beyond security woes, far more sinister applications exist when it comes to deepfake technology.

At a more high-profile level, experts worry that deepfakes of politicians could be used to manipulate election results or spread misinformation.

In fact, deepfakes have already been created to portray Donald Trump saying "AIDS is over," while another deepfake replaced the face of Argentine president Mauricio Macri with that of Adolf Hitler.

> "AIDS is over". The first fake news that could become real.#Treatment4all #endAIDS pic.twitter.com/KBxJoKanDM
>
> – Solidarité Sida (@SolidariteSida) October 7, 2019

"The risk associated with this will be contextual. Imagine a CEO making an announcement to his company that ended up being a deepfake artifact," said Arun Kothanath, chief security strategist at Clango. "Same could go to sensitive messages between country leaders that could be the beginning of a conflict."

Privacy Scares

In September, the Chinese deepfake app Zao (see video below) went viral in China. The app – which lets users map their faces onto various clips of celebrities – spurred concerns about user privacy and consent when it comes to the collection and storage of facial images.

https://twitter.com/AllanXia/status/1168049059413643265

The idea of seamlessly mapping someone's online face onto another's body is also provoking concerns around sexual assault and harassment when it comes to deepfake pornography.

Several reports of deepfake porn affecting real people have already emerged, with one journalist in 2018 coming forward with a revenge-porn story of how her face was used in a sexually explicit deepfake video – which was developed and spread online after she was embroiled in a political controversy.

Deepfake porn also emerged on Reddit in 2017 after an anonymous user posted several videos, and in 2018, Discord shut down a chat group on its service that was being used to share deepfaked pornographic videos of female celebrities without their consent. In 2019, a Windows/Linux application called DeepNude was released that used neural networks to remove clothing from images of women (the app was later shut down).

"Deepfake gives an unsophisticated person the ability to manufacture non-consensual pornographic images and videos online," said Adam Dodge, executive director with EndTAB, in an interview with Threatpost. "This is getting lost in the conversation…we need to not just raise awareness of the issue but also start considering how this is targeting women and thinking of ways which we can address this issue."

There's also a privacy concern that dovetails with security. "There could be many ways an individual's privacy is compromised in the context of a media asset such as video data that is supposed to be confidential (in some cases not)," Arun Kothanath, chief security strategist at Clango, told Threatpost. "Unauthorized access to those assets leads me to think of nothing but security breaches."

Deepfake Detection

On the heels of these concerns, deepfakes have come onto the radar of legislators. The House Intelligence Committee held a hearing in June examining the issue, while Texas has banned deepfakes made with an "intent to injure a candidate or influence the result of an election." Virginia has outlawed deepfake pornography, and just last week California passed a law that bans the use of deepfake technology in political speech and for nonconsensual use in adult content.

When it comes to adult content, the California law requires consent to be obtained prior to depicting a person in digitally produced sexually explicit material. The bill also provides victims with a set of remedies in civil court.

But even as regulatory efforts roll out, there also needs to be a way to detect deepfakes – and "unfortunately, there aren't enough deepfake detection algorithms to be confident," Kothanath told Threatpost.


Images from Google's deepfake database

The good news is that the tech industry as a whole is beginning to invest more in deepfake detection. Dessa, the company behind the aforementioned Joe Rogan deepfake audio, recently released a new open-source detector for audio deepfakes, which is a deep neural network that uses visual representations of audio clips (called spectrograms, used to train speech synthesis models) to sniff out real versus fake audio.
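The spectrograms such detectors consume can be produced with a short-time Fourier transform: slice the audio into overlapping frames, window each frame, and take the magnitude of its frequency components. The sketch below is a bare-bones, standard-library illustration of that idea – the frame size and hop length are arbitrary choices for the example, not Dessa's actual parameters, and a real pipeline would use an FFT library rather than this naive O(N²) transform.

```python
import math

def spectrogram(signal, frame_size=64, hop=32):
    """Naive short-time Fourier transform magnitudes (pure Python).

    Returns a list of frames, each a list of magnitudes for the
    non-negative frequency bins 0 .. frame_size // 2.
    """
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        # Hann window tapers the frame edges to reduce spectral leakage.
        windowed = [s * 0.5 * (1 - math.cos(2 * math.pi * n / (frame_size - 1)))
                    for n, s in enumerate(frame)]
        mags = []
        for k in range(frame_size // 2 + 1):
            # Discrete Fourier transform of the windowed frame at bin k.
            re = sum(x * math.cos(2 * math.pi * k * n / frame_size)
                     for n, x in enumerate(windowed))
            im = -sum(x * math.sin(2 * math.pi * k * n / frame_size)
                      for n, x in enumerate(windowed))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames
```

Fed a pure tone at 8 cycles per frame, every frame's energy peaks in frequency bin 8 – it is these time-frequency energy patterns, rendered as images, that a detection network learns to classify as real or synthesized speech.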

Facebook, Microsoft and a number of universities have meanwhile joined forces to sponsor a contest promoting research and development to combat deepfakes. And, Google and other tech firms have released a dataset containing thousands of deepfake videos to aid researchers looking for detection techniques.

Deepfakeā€™s Future

Despite these efforts, experts say that many of the threats posed by deepfakes ā€“ from disinformation to harassment ā€“ are existing problems that the internet is already struggling with. And thatā€™s something that even a perfect deepfake detector wonā€™t be able to solve.

For instance, tools may exist to detect deepfakes, but how will they stop a video from existing on – and spreading on – social-media platforms? Li pointed out that fake pictures and news have already spread out of control on social-media platforms like Twitter and Facebook, and that deepfakes are just more of the same.

"The question is not really detecting the deepfake, it is detecting the intention," Li said. "I think that the right way to solve this problem is to detect the intention of the videos rather than if they have been manipulated or not. There are a lot of positive uses of the underlying technology, so it's a question of whether the use case or intention of the deepfake are bad intentions. If it's to spread disinformation that could cause harm, that's something that needs to be looked into."

It's a question that social-media sites are also starting to think about. When asked how they plan to combat deepfakes, Reddit and Twitter both directed Threatpost toward their policies against spreading misinformation (Facebook didn't respond, but announced in September that it is ramping up its deepfake detection efforts).

Twitter said that its policies work toward "governing election integrity, targeted attempts to harass or abuse, or any other Twitter Rules."

On Reddit's end, "Reddit's site-wide policies prohibit content that impersonates someone in a misleading or deceptive manner, with exceptions for satire and parody pertaining to public figures," a Reddit spokesperson told Threatpost. "We are always evaluating and evolving our policies and the tools we have in place to keep pace with technological realities."

Still, deepfake-prevention efforts at this point remain reactive rather than proactive, meaning that once a bad deepfake is live, the damage is already done, according to Kothanath. Until that issue can be fixed, he said, the extent of damage that a deepfake can cause remains to be seen.

"My worry will be the 'fear of the unknown' that leads into a breach and to a privacy violation," Kothanath said.

