Simply put, a deepfake is a phony video or audio file that looks and sounds real. 

More specifically, deepfakes are created using technology that studies photographs and videos of a real person, learns to mimic that person’s speech and behavioral patterns and then creates a new, fake video or audio file. After a multi-step process that eliminates any flaws, this fake video is almost indistinguishable from the original, authentic file to the naked eye.

Because of these capabilities, deepfakes can be used to create fake news and malicious hoaxes that target influential people in the mainstream news and sway public opinion.

The term is a combination of the words deep learning and fake. Its name comes from a Reddit user known as deepfakes, who used deep learning technology to add the faces of celebrities to pornographic video clips in 2017. 

Since then, deepfakes have rapidly spread from porn into politics. Indeed, many people think politicians are the only ones who need to be aware of deepfakes. This is not true. Corporations should beware, too.

With deepfakes, PC Magazine noted, anybody can be made to look like they love or hate anything. “As deepfake video techniques improve, there are ominous implications for the future. Videos are highly persuasive when they ‘supposedly’ come from prominent people.”

So far this year, fake videos of House Speaker Nancy Pelosi, Facebook CEO Mark Zuckerberg and the Game of Thrones character Jon Snow have emerged and quickly gone viral.

Are deepfakes new?

Fake videos aren’t new; they were once the province of Hollywood special-effects studios and even some intelligence agencies. The difference today is that anyone can easily create a convincing deepfake – for as little as $20.

Specifically, AI researcher Alex Champandard told Business Insider that the process of creating a deepfake could take just a few hours with a consumer-grade graphics card.

Zignal Labs’ Josh Ginsberg, who has long worked at the intersection of politics, government and matters of media veracity, agrees.

“I did not think it would get to the point of technology that it’s at this quickly,” he told the Observer. “By that, I mean the ability for really anyone to be able to create a deepfake for $20 on their home computer, that could literally shift the narrative in a presidential election, is really quite scary. And that is the reality that we’re looking at right now.”

House Intelligence Committee Chairman Adam Schiff concurs. He told a congressional hearing that deepfakes pose a “nightmarish” threat to the 2020 presidential election.

But, as mentioned, the potential harm of deepfakes goes beyond politics. On the As Far As We Know podcast, Ginsberg cautioned that video can just as easily be edited to discredit a business or a brand. 

“Imagine someone manipulates a video where a CEO is talking about how sales are going down or what’s happening with stock prices. It’s possible to take real video or audio of an executive from years ago, and slightly alter a few words, which makes a policy sound completely different.”

Is technology available to detect and combat deepfakes?

Detection technology is still relatively new, and while strides have been made in detecting deepfakes in the research lab, no solution has yet proven effective in real-world conditions.

Millions of video and audio files are uploaded every second, making it difficult to respond at scale and in time. “A deepfake story can move quickly and it is very hard to wrangle,” Ginsberg said.

Until technology advances, awareness by the general public might be the greatest protection available. CSO Online warned, “If we are unable to detect fake videos, we may soon be forced to distrust everything we see and hear.”

If you want to know if a video can be trusted, look at the source and determine its credibility by asking these questions:

  • Is it from a mainstream news outlet that’s been around for decades, or from a website you’ve never heard of before?
  • Is it from a reporter or an influencer whom you trust, or someone you’ve never heard of?
  • Are there any obvious typos in the text explaining the video?

In other words, have a healthy skepticism as you digest information and then verify it. 

Are there steps you can take to protect your brand against deepfakes?

Archive, archive, archive. The best way to combat deepfakes is to fight fire with fire, according to Ginsberg. Record and archive everything said publicly by senior leadership and spokespeople, whether it’s a speech, an interview or news conference. 

In addition, update your crisis communications plan with new infrastructure, so these archived files are catalogued and easily accessible to quickly respond to any deepfakes.

“Providing the original video is the quickest way to verify with the public that a deepfake is not real,” Ginsberg said. “So, set up archives and an infrastructure within a crisis communications plan for a rapid response.”
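One lightweight way to make such an archive useful for rapid response (a sketch of the general idea, not a description of any specific platform; the file paths and catalog format here are hypothetical) is to fingerprint each original file with a cryptographic hash at archiving time, so the authentic version can later be retrieved and matched byte-for-byte against a circulating clip:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a media file in chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def catalog_entry(path: Path, event: str) -> dict:
    """Record enough metadata to retrieve and verify the original later."""
    return {
        "file": str(path),
        "event": event,
        "sha256": sha256_of(path),
        "archived_at": datetime.now(timezone.utc).isoformat(),
    }


# Example (hypothetical paths): append an earnings-call recording
# to a running JSON Lines catalog.
# entry = catalog_entry(Path("archive/q3_earnings_call.mp4"), "Q3 earnings call")
# with Path("archive/catalog.jsonl").open("a") as f:
#     f.write(json.dumps(entry) + "\n")
```

A searchable catalog like this is what turns "archive everything" into a rapid-response capability: when a suspect clip surfaces, communications staff can pull the matching original within minutes rather than hunting through raw storage.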

At the center of this plan, your company’s media analytics platform remains a critical resource. Use your analytics program to help you be prepared. Posts with negative sentiment can warn of a fake video or audio file. Alerts can capture these posts so they can be responded to quickly. (For specific tips on steps to take within your media analytics platform, see my previous post, So You’re In a Communications Crisis – Now What?)
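Commercial analytics platforms expose this kind of alerting through their own dashboards. Purely as an illustration of the underlying logic (the keyword list, window size and threshold below are hypothetical, not the functionality of any particular product), a volume-plus-sentiment alert can be sketched as:

```python
from collections import deque

# Hypothetical list of terms that often accompany deepfake accusations.
NEGATIVE_TERMS = {"fake", "hoax", "scandal", "fraud", "manipulated"}


def is_suspect(post: str, brand: str) -> bool:
    """Flag posts that mention the brand alongside negative terms."""
    words = set(post.lower().split())
    return brand.lower() in words and bool(words & NEGATIVE_TERMS)


class SpikeAlert:
    """Raise an alert when suspect posts exceed a threshold within the
    most recent window of posts -- a crude stand-in for the real-time
    spike detection a media analytics platform would provide."""

    def __init__(self, brand: str, window: int = 100, threshold: int = 5):
        self.brand = brand
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, post: str) -> bool:
        """Ingest one post; return True when the alert should fire."""
        self.recent.append(is_suspect(post, self.brand))
        return sum(self.recent) >= self.threshold
```

In practice a real platform would use trained sentiment models and stream volume statistics rather than a keyword list, but the shape is the same: score each incoming mention, watch for an abnormal cluster of negative ones, and notify the crisis team early.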

According to Ginsberg, Zignal Labs is “bringing in every single publicly available media data point from social media, traditional media and television, showing it in a very easy-to-use dashboard in real time. So, you can see every single mention as it’s coming in, and how it’s spreading throughout the media spectrum.”

Ginsberg also cautioned that deepfakes have now evolved into a reputational risk. “This is a point in time, where you have to get ahead of these crises, before they transpire. So how are you, as an organization, business, executive and leader, preparing for those moments to occur before they come?”

Unfortunately, few are ready yet. In a July study of 1,020 communications professionals, The Plank Center for Leadership in Public Relations found that many companies are not prepared to identify and manage fake news. “Few organizations have in place policies, technical systems and processes to detect and manage fake news and misinformation.”

The time to get ready is now. Until further advances in technology, an updated crisis communications plan, with a deepfakes infrastructure and an in-depth media analytics program, is crucial to manage reputational risk and combat deepfakes. 


Many people think porn and politics are the only victims of deepfakes, a particularly destructive form of disinformation. But businesses and brands can be attacked too.

With detection technology still limited, education, awareness, media analytics and archives of original audio and video files are essential to rebutting deepfakes. Make sure your senior leadership team, analytics team, spokespeople and all communications managers know what deepfakes are and the best steps to take to protect your brand.