About a year ago, top deepfake artist Hao Li came to a disturbing realization: deepfakes, the technique of using artificial intelligence (AI) to synthesize fake images and video of real people, are rapidly evolving. In fact, Li believes that in as little as six months, deepfake videos will be completely undetectable. That's spurring security and privacy concerns as the AI behind the technology becomes commercialized and falls into the hands of malicious actors.
Li, for his part, has seen the positives of the technology as a pioneering computer-graphics and vision researcher, particularly for entertainment. He has worked his magic on various high-profile deepfake applications: He led the charge in putting Paul Walker into _Furious 7_ after the actor died before the film finished production, and he created the facial-animation technology that Apple now uses in its Animoji feature in the iPhone X.
But now, "I believe it will soon be at a point where it isn't possible to detect if videos are fake or not," Li told Threatpost. "We started having serious conversations in the research space about how to address this and discuss the ethics around deepfakes and the consequences."
The security world, too, is wondering about its role, as deepfakes pop up again and again in viral online videos and on social media. Over the past year, security stalwarts and lawmakers have said that the internet needs a plan to deal with various malicious applications of deepfake video and audio: scams, misinformation online, and the privacy of the footage itself. Questions have arisen, such as whether firms like Facebook and Reddit are prepared to stomp out imminent malicious deepfakes, used to spread misinformation or to create nonconsensual pornographic videos, for instance.
And while awareness of the issues is spreading, and the tech world is rallying around better detection methods for deepfakes, Li and other deepfake experts think it may be virtually impossible to quell malicious applications of the technology.
Deepfakes can be applied in various ways, from swapping a new face onto video footage of someone else (as seen in a Vladimir Putin deepfake created by Li) to creating deepfake audio that imitates someone's voice to a tee. The latter was seen in a recently developed replica of popular podcaster Joe Rogan's voice, created using a text-to-speech deep-learning system, which made Rogan's fake "voice" talk about how he was sponsoring a hockey team made of chimpanzees.
At a high level, both audio and video deepfakes rely on a technology called "generative adversarial networks" (GANs), which consists of two machine-learning models. One model, the generator, leverages a dataset to create fake footage, while the other, the discriminator, attempts to tell that fake footage apart from real samples. The two train against each other until the discriminator can no longer distinguish the generator's output from the real thing.
Credit: Jonathan Hui
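The adversarial loop described above can be made concrete with a toy, one-dimensional sketch. This is a hypothetical illustration, not any production deepfake system: the "generator" here is a single linear function trying to mimic samples drawn from a Gaussian centered at 4, the "discriminator" is a logistic classifier, and the two are updated in alternation.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data: samples from a Gaussian centered at 4.0.
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to a fake sample.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) scores how "real" x looks.
w, c = 0.1, 0.0

lr = 0.05
for _ in range(2000):
    x_real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: ascend log d(real) + log(1 - d(fake)).
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - p_real) * x_real - p_fake * x_fake)
    c += lr * ((1 - p_real) - p_fake)

    # Generator step: ascend log d(fake) (the "non-saturating" loss).
    p_fake = sigmoid(w * x_fake + c)
    grad = (1 - p_fake) * w  # derivative of log d(x_fake) w.r.t. x_fake
    a += lr * grad * z
    b += lr * grad

# After training, the generator's output distribution has drifted
# toward the real data's mean.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(fake_mean, 1))
```

Real deepfake systems replace these two linear models with deep convolutional networks trained on thousands of images, but the two-player training dynamic is the same.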
GANs were first introduced in a 2014 paper by Ian Goodfellow and researchers at the University of Montreal. The concept was hailed as useful for various applications, such as sharpening astronomical images or helping video-game developers improve the visual quality of their games.
While video manipulation has been around for years, the machine-learning and AI tools behind GANs have brought a new level of realism to deepfake footage. For instance, older deepfake applications (such as FakeApp, a proprietary desktop application launched in 2018) required hundreds of input images for a face swap to be synthesized; newer products, such as the deepfake face-swapping app Zao, need only a single image.
"The technology became more democratized after … video-driven manipulations were re-introduced to show fun, real-time [deepfake] applications that were intended to make people smile," said Li.
From a security perspective, there are various malicious actions that attackers could leverage deepfakes for, particularly around identity authentication.
"Deepfakes are becoming one of the biggest threats to global stability, in terms of fake news as well as serious cyber risks," Joseph Carson, chief security scientist with Thycotic, told Threatpost. "Deepfakes are getting to the point that any digital audio or video online can be questioned on its authenticity and integrity, and can be used to not only steal the online identity of a victim but now the voice and face. Identity theft has now entered a new phase."
The ability to simulate someone's image and behavior can be used by spam callers impersonating victims' family members to obtain personal information, or by criminals gaining entrance to high-security-clearance areas through impersonating a government official.
Already, an audio deepfake of a CEO's voice has fooled a company into making a $243,000 wire transfer, in the first known case of successful financial scamming via audio deepfake.
But even beyond security woes, far more sinister applications exist when it comes to deepfake technology.
At a more high-profile level, experts worry that deepfakes of politicians could be used to manipulate election results or spread misinformation.
In fact, deepfakes have already been created to portray U.S. President Donald Trump saying "AIDS is over," while another deepfake replaced the face of Argentine President Mauricio Macri with that of Adolf Hitler.
> "AIDS is over". The first fake news that could become real.#Treatment4all #endAIDS pic.twitter.com/KBxJoKanDM
>
> — Solidarité Sida (@SolidariteSida) October 7, 2019
"The risk associated with this will be contextual. Imagine a CEO making an announcement to his company that ended up being a deepfake artifact," said Arun Kothanath, chief security strategist at Clango. "Same could go to sensitive messages between country leaders that could be the beginning of a conflict."
In September, the deepfake app Zao (see video below) went viral in China. The app, which lets users map their faces over various clips of celebrities, spurred concerns about user privacy and consent when it comes to the collection and storage of facial images.
The idea of seamlessly mapping someone's face onto another person's body is also provoking concerns around sexual exploitation and harassment when it comes to deepfake pornography.
Several reports of deepfake porn in real-life situations have already emerged; in 2018, one journalist came forward with a revenge-porn story of how her face was used in a sexually explicit deepfake video, which was developed and spread online after she was embroiled in a political controversy.
Deepfake porn also emerged on Reddit in 2017 after an anonymous user posted several videos, and in 2018, Discord shut down a chat group on its service that was being used to share deepfaked pornographic videos of female celebrities without their consent. In 2019, a Windows/Linux application called DeepNude was released that used neural networks to remove clothing from images of women (the app was later shut down).
"Deepfake gives an unsophisticated person the ability to manufacture non-consensual pornographic images and videos online," said Adam Dodge, executive director with EndTAB, in an interview with Threatpost. "This is getting lost in the conversation … we need to not just raise awareness of the issue but also start considering how this is targeting women and thinking of ways which we can address this issue."
There's also a privacy concern that dovetails with security. "There could be many ways an individual's privacy is compromised in the context of a media asset such as video data that is supposed to be confidential (in some cases not)," Kothanath told Threatpost. "Unauthorized access to those assets leads me to think nothing but compromise on security breaches."
On the heels of these concerns, deepfakes have come onto the radar of legislators. The House Intelligence Committee held a hearing in June examining the issue, while Texas has banned deepfakes made with an "intent to injure a candidate or influence the result of an election." Virginia has outlawed deepfake pornography, and just last week, California passed a law that bans the use of deepfake technology in political speech and for nonconsensual use in adult content.
When it comes to adult content, the California law requires consent to be obtained prior to depicting a person in digitally produced sexually explicit material. The bill also provides victims with a set of remedies in civil court.
But even as regulatory efforts roll out, there also needs to be a way to detect deepfakes, and "unfortunately, there aren't enough deepfake detection algorithms to be confident," Kothanath told Threatpost.
Images from Google's deepfake database
The good news is that the tech industry as a whole is beginning to invest more in deepfake detection. Dessa, the company behind the aforementioned Joe Rogan deepfake audio, recently released a new open-source detector for audio deepfakes: a deep neural network that uses visual representations of audio clips (called spectrograms, which are also used to train speech-synthesis models) to sniff out real versus fake audio.
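A spectrogram is simply a time-frequency picture of a signal: slide a window along the audio, take a Fourier transform of each frame, and keep the (log-scaled) magnitudes. As a rough illustration of that representation (a simplified stand-in, not Dessa's actual pipeline), here is a naive spectrogram computed with a plain discrete Fourier transform:

```python
import cmath
import math

def spectrogram(signal, frame_len=64, hop=32):
    """Log-magnitude spectrogram via a naive DFT over overlapping frames."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        mags = []
        for k in range(frame_len // 2):          # keep positive frequencies
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n in range(frame_len))
            mags.append(math.log(abs(s) + 1e-9))  # log scale; avoid log(0)
        frames.append(mags)
    return frames

# Example: a 1 kHz sine sampled at 8 kHz concentrates its energy in
# frequency bin k = 1000 * 64 / 8000 = 8 of each frame.
sr, freq = 8000, 1000
signal = [math.sin(2 * math.pi * freq * n / sr) for n in range(512)]
spec = spectrogram(signal)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # -> 8
```

A detector like Dessa's feeds grids of values like these into a neural network, which learns the subtle spectral artifacts that synthesis models leave behind; real systems use optimized FFTs and mel-scaled filter banks rather than this bare-bones DFT.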
Facebook, Microsoft and a number of universities have meanwhile joined forces to sponsor a contest promoting research and development to combat deepfakes. And, Google and other tech firms have released a dataset containing thousands of deepfake videos to aid researchers looking for detection techniques.
Despite these efforts, experts say that many of the threats posed by deepfakes, from disinformation to harassment, are existing problems that the internet is already struggling with. And that's something that even a perfect deepfake detector won't be able to solve.
For instance, tools may exist to detect deepfakes, but how will they stop a video from existing, and spreading, on social-media platforms? Li pointed out that fake pictures and news have already spread out of control on social-media platforms like Twitter and Facebook, and that deepfakes are just more of the same.
"The question is not really detecting the deepfake, it is detecting the intention," Li said. "I think that the right way to solve this problem is to detect the intention of the videos rather than if they have been manipulated or not. There are a lot of positive uses of the underlying technology, so it's a question of whether the use case or intention of the deepfake are bad intentions. If it's to spread disinformation that could cause harm, that's something that needs to be looked into."
It's a question that social-media sites are also starting to think about. When asked how they plan to combat deepfakes, Reddit and Twitter both directed Threatpost toward their policies against spreading misinformation (Facebook didn't respond, but announced in September that it is ramping up its deepfake-detection efforts).
Twitter said that its policies work toward "governing election integrity, targeted attempts to harass or abuse, or any other Twitter Rules."
On Reddit's end, "Reddit's site-wide policies prohibit content that impersonates someone in a misleading or deceptive manner, with exceptions for satire and parody pertaining to public figures," a Reddit spokesperson told Threatpost. "We are always evaluating and evolving our policies and the tools we have in place to keep pace with technological realities."
But despite these efforts, deepfake prevention at this point is still reactive rather than proactive, meaning that once bad deepfakes are live, the damage is already done, according to Kothanath. Until that issue can be fixed, he said, the extent of the damage a deepfake can cause remains to be seen.
"My worry will be the 'fear of the unknown' that leads into a breach and a privacy violation," Kothanath said.