Technology & Trust: The Rise of Deep fake Videos
Technological advances have produced tools with a growing potential to spread disinformation. This calls into question how much control everyday individuals, organizations and public officials have over their public image.
Nowhere is this clearer than in the steady rise of “deep fake” videos. Using advanced, but surprisingly simple-to-use, computer-generated imagery (CGI) software, creators of deep fake videos can produce extremely realistic audio and visual recordings of people saying or doing anything the editor wishes.
The technology to create these misleading videos is becoming increasingly widespread, with programs and apps available for download and use by virtually anyone with access to a mobile phone or computer. In short, we’re losing our ability to trust anything—photo or video—that we see shared.
Applications of Deep fake
The technology powering the creation of deep fakes wasn’t developed with malicious intent, but rather to help complete cinematic film footage featuring an actor who died while filming was still in progress. The fundamentals of the technology, however, have spread and been simplified, allowing users to create videos with intentions ranging from comedic effect to outright sabotage.
The most recent deep fake video to make headlines wasn’t malicious at all. In a recreation of the opening introduction to the 1990s-era family sitcom “Full House,” the faces of all the actors were replaced with the likeness of comedic actor Nick Offerman. These “fake Offermans” displayed emotions and movements that were seamless and consistent with the mannerisms of the original video, despite Offerman having no involvement in the production. While undoubtedly a farce, the virality of the video illustrates the impact of the technology.
More chilling applications in recent years include entire speeches seemingly given by world leaders, including former U.K. Prime Minister Theresa May, former U.S. President Barack Obama and Russian President Vladimir Putin. To even the most suspicious viewer, the realistic nature of these videos is uncanny.
Building a Deep fake Defence
With deep fake technology so easily accessible, there is a risk that every person and organization is one malicious video away from disaster. It will be increasingly difficult to build trust if more and more deep fake videos muddy the waters between fact and fiction. Likewise, accidentally sharing a deep fake video has the potential to backfire and erode audiences’ trust and respect.
As deep fake technology spreads, however, technologists are working to create tools and methods to fight back. One potential solution relies on anyone who steps in front of the camera employing certain mannerisms and movements that would hinder the ability of CGI technology to use the footage as source material for creating deep fakes.
Blinking, for example, is something of a weak spot for deep fake technology. While humans in real videos blink frequently and rapidly, deep fake depictions blink more slowly and less often. It’s not unreasonable to expect that traditional media training for company leaders and politicians will evolve to include coaching on specific blinking and body language to defend against digital impersonation.
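The blink-rate gap described above lends itself to a simple screening heuristic. The following is a minimal, hypothetical sketch, not a production detector: it assumes blink timestamps have already been extracted by some eye-state detector (not shown), and the rate threshold is illustrative rather than drawn from published research.

```python
# Hypothetical blink-rate screen for suspect footage.
# Assumption: blink timestamps (in seconds) come from an external
# eye-state detector; the threshold below is illustrative only.

SUSPECT_BLINKS_PER_MINUTE = 5.0  # assumed cutoff; real systems would tune this


def blink_rate(blink_times_sec, clip_length_sec):
    """Return the subject's blinks per minute over the clip."""
    return len(blink_times_sec) / (clip_length_sec / 60.0)


def looks_suspect(blink_times_sec, clip_length_sec):
    """Flag clips whose subject blinks far less often than a real person."""
    return blink_rate(blink_times_sec, clip_length_sec) < SUSPECT_BLINKS_PER_MINUTE


# A 60-second clip with only two detected blinks is flagged;
# one with fifteen blinks (a typical human rate) is not.
print(looks_suspect([12.0, 48.5], 60.0))                  # True
print(looks_suspect([t * 4.0 for t in range(15)], 60.0))  # False
```

In practice such a heuristic would be only one signal among many, since newer generation tools can be trained to mimic natural blinking.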
While this defence is clearly proactive, the unpredictable nature of the technology will require a well-prepared reactive defence as well. News organizations will undoubtedly need to develop ready-made plans for correcting and retracting inadvertent spread of deep fakes. And for their part, public relations professionals will be called to develop a robust crisis response if a deep fake video threatens their brand.
Perhaps most importantly, the rise of deep fake videos will underscore the importance of vigilant monitoring of videos in the news and on social media. A vigorous and thorough monitoring process will allow organizations to identify misinformation quickly and deploy a defence as soon as possible to preserve public trust.