How to Protect your Organization from the Emerging Deepfake Threat
Written by Brian Thomas
Manager, Content Marketing

Mimicking reality is the latest frontier of cybercrime, and it’s a growing threat. Cyber criminals are increasingly deploying AI and machine learning to fool unsuspecting victims into believing they’re seeing or hearing something they’re not, pulling off deepfake scams in the process.

Deepfakes involve manipulating video footage or cloning a voice to make a person appear to say or do something they never said or did. Take, for example, the deepfake video of Facebook CEO Mark Zuckerberg talking about how Facebook “owns” users and their data. The fabricated claim plays on consumer concerns about data privacy on Facebook.

The hidden menace in fake video and audio scams

Aside from pushing conspiracy theories, deepfakes can also pose significant cybersecurity threats to your organization. In one of the earliest examples of this menace, cyber criminals used AI software to mimic the voice of a CEO, a form of voice phishing known as “vishing,” and demanded that an employee release $243,000 in funds to a supplier. The fraudulent transaction went through.

Deepfake technology is also troubling because it attracts a particularly smart and creative breed of cybercriminal, one keen to cover their tracks. A recent investigation by The New York Times followed a team of young engineers using their part-time talents to develop the perfect deepfake. The endeavor, which is intended to warn the public about the dangers of such scams, found that innovative AI algorithms are making these scams more realistic and harder to detect.

Deepfakes are also ringing alarm bells in Congress. Senator Marco Rubio has called such scams the modern equivalent of nuclear weapons.

Indeed, the disturbing rise of this cyber threat led us to include it as one of the top five cybersecurity trends for 2020 that security leaders must prepare for. But how?

Technology is only part of the solution to preventing deepfake fraud

While security performance management as a practice emphasizes due diligence around employee behavior, it can only do so much. Deepfake scams succeed by playing on a deep understanding of human behavior and what it takes to manipulate it via social engineering.

Knowledge sharing, not technology, should be the first line of defense against deepfakes. In the face of these increasingly sophisticated attacks, security leaders must step outside the security operations center (SOC) and communicate the risk of deepfake scams to business leaders across the organization. From there, they can work collaboratively to create a culture of awareness and protect the organization against risk.

Combating deepfake: it’s time to get creative

When it comes to social engineering and deepfake scams, people are the number one weak spot. As such, any attempt your company makes to educate employees on the evolving threat must go beyond obligatory PowerPoint-based cybersecurity awareness training programs.

Instead, get creative and find ways to genuinely engage employees, so that when they’re confronted with a potential cyber scam, a little voice in the back of their heads tells them to “trust but verify.”

This might take the form of a 30- to 45-minute education session outside of normal security training. Keep the session short and to the point: focus on the top threats you need to communicate and the actions employees should take when confronted with them.

To help communicate best practices, do some research. Deepfake threats are fascinating in their ingenuity and success, and they make for engaging subject matter. Assemble a few examples of the forms such scams take, their motives, and their outcomes. The New York Times article referenced earlier includes an interesting two-minute video showing deepfake creators at work that you may choose to include. You could even show employees how easy it is to create a deepfake of your own voice using this tool.

Next, have a discussion around basic procedures that deepfake scammers try to manipulate. Make sure employees understand that the CEO is never going to call them and instruct them to take XYZ action, such as giving John Doe access to a critical business system. Most organizations have protocols in place for requests like this.
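To make the idea concrete, here is a minimal sketch, in Python, of how such a protocol might be encoded in an approval workflow. The action names, channels, Request class, and handle() helper are illustrative assumptions for this post, not part of any real product or standard.

```python
# A minimal, hypothetical sketch of a "trust but verify" protocol.
# All names here (HIGH_RISK_ACTIONS, Request, handle) are illustrative.

from dataclasses import dataclass

# Actions that should never be taken on the strength of a voice or video alone.
HIGH_RISK_ACTIONS = {"wire_transfer", "grant_system_access", "password_reset"}

# Channels that deepfakes and social engineering can convincingly spoof.
SPOOFABLE_CHANNELS = {"phone", "email", "video_call"}


@dataclass
class Request:
    requester: str  # who appears to be asking, e.g. "CEO"
    action: str     # what they are asking for
    channel: str    # how the request arrived


def requires_out_of_band_check(req: Request) -> bool:
    """Flag high-risk actions that arrived over a spoofable channel."""
    return req.action in HIGH_RISK_ACTIONS and req.channel in SPOOFABLE_CHANNELS


def handle(req: Request) -> str:
    if requires_out_of_band_check(req):
        # Call the requester back on a number from the company directory,
        # never the number or link supplied in the request itself.
        return "HOLD: confirm via a known, independent channel before acting"
    return "PROCEED"


if __name__ == "__main__":
    suspicious = Request(requester="CEO", action="wire_transfer", channel="phone")
    print(handle(suspicious))
```

The point of the sketch is the design choice, not the code: any request that could move money or grant access pauses until it’s confirmed over a channel the scammer doesn’t control.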

Deepfake scams: if it doesn’t smell right, verify

While a “trust but verify” approach may introduce small additional hurdles into day-to-day business, it’s a fair price to pay for better protection. If something doesn’t smell right, encourage your employees to ask. After all, no one ever got fired for taking an extra 30 minutes to find out whether a request from the C-suite to release funds, authorize a password reset, or enter into a new vendor contract is authentic, or whether the person making it is really who they claim to be.

The good news is that the good guys are racing to invent new techniques to identify manipulated audio and video. Let’s hope that, together, we can all stay one step ahead of the deepfake scam industry.