Why deepfakes are set to be one of 2024’s biggest cyber security dangers

How businesses can protect themselves

Artificial Intelligence (AI) has revolutionized the creation of visual content. Multiple AI image generator platforms have become available in recent years, and now new platforms such as Sora, OpenAI's flagship AI video generator, are making their way to market.

AI image and video platforms have allowed individuals and businesses to create content with limitless creativity and scalability, while also improving efficiency in cost and time. However, the swift evolution of this technology has outpaced regulatory measures, leaving a gap for its misuse by malicious individuals or groups.

In recent years, the proliferation of deepfake images and videos has surged: media that has been digitally manipulated to replace one person's likeness, whether voice, face or body, with another's. The technology has been cast into the spotlight by the recent targeting of high-profile figures, including deepfake audio of Keir Starmer, deepfake pornographic images of Taylor Swift and a computer-generated video of Martin Lewis. Advances in AI mean that deepfakes are becoming increasingly sophisticated and difficult to spot, and with the right equipment they can even be broadcast live, meaning someone could hold a real-time conversation with a person who looks and sounds completely different from how they appear on screen.

Recent figures show that deepfake fraud material reportedly increased by 3,000% in 2023. And, with the technology now quick, cheap and easy for virtually anyone to use, threat actors have quickly begun to adopt this technology into their arsenal of cyber-attack techniques.

The cybersecurity risk of deepfakes to businesses

Deepfake technology introduces several cyber risks to businesses. Over the years, deepfakes have been used to spread misinformation, deceive audiences, manipulate public opinion, and defame individuals. So, understanding the potential risk is crucial.

Financial damage

The financial implications of deepfake attacks pose a significant threat to businesses, primarily through fraud and scams that impersonate high-ranking executives whom staff trust and respect.

Cybercriminals can create highly convincing audio or video recordings of a CEO, for example, instructing employees to transfer funds or share sensitive information. These deepfakes can bypass traditional security measures, leading to substantial financial losses. In 2019, a UK-based energy firm lost $243,000 after cybercriminals utilised voice-generating artificial intelligence software to imitate the CEO of the brand’s German parent company to enable an illicit transfer.

Operational risks

Deepfakes can increase the efficacy of social engineering and phishing attacks, which pose significant operational concerns for businesses. Traditional phishing attempts often rely on poorly written or generic emails, but deepfakes add a new layer of believability. Attackers can create personalized emails or calls from trusted individuals within the organization, making it harder for employees to spot malicious activity.

Earlier this year, a finance employee at a multinational corporation in Hong Kong was deceived into transferring $25 million to cybercriminals. The criminals used deepfake technology to impersonate the company’s chief financial officer in a video conference call. The elaborate scam involved the employee participating in what appeared to be a meeting with several other staff members, all of whom were actually deepfake recreations. This sophisticated attack successfully gained the employee’s trust, leading to huge financial losses for the company.

Reputational harm

Deepfakes also have the potential to destroy a brand or individual’s reputation. For example, a deepfake showing a CEO doing and/or saying something harmful or controversial could significantly impact trust, business continuity, and market stability, leading to a crash in share prices and an online witch-hunt.

By the time evidence of a deepfake becomes public, it may be too late to stop significant damage being done to your company’s reputation.

Whatever its form, such an attack on your organization could have significant consequences. So, what can you do to address these risks?

Spotting deepfakes and mitigating risk

As deepfake attacks grow, it is critical for organizations to take proactive action to safeguard their environments. By creating a strong, security-focused culture and updating security procedures to account for the rise in these tactics, organizations can work to mitigate their risk.

Educate employees and partners

Regular training sessions should be held to inform employees about deepfake technology and its possible consequences for the organization. Teach staff how to recognize the indicators of a deepfake, such as unusual facial movements or inconsistencies in audio-visual synchronization.

Strengthen identity verification

This is essential, especially for transactions involving money or sensitive information. Traditional verification methods, such as passwords and PINs, can be easily compromised, which is why implementing multi-factor authentication (MFA) is crucial. This adds an extra layer of security by requiring multiple forms of verification before granting access. You could also create trusted phrases to confirm someone's identity, acting as a last line of defense when attempting to ward off attacks. This multi-layered approach ensures that even if one security process is compromised, additional measures are in place to prevent unauthorized access and to protect sensitive information.
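To make the layered approach concrete, here is a minimal sketch of what combining a one-time code with a trusted phrase might look like. The one-time code follows the standard RFC 4226 HOTP algorithm, but the `verify_request` helper, the trusted-phrase check, and all parameter names are invented for this illustration and are not from any specific product or standard:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (the basis of most MFA codes)."""
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_request(otp: str, spoken_phrase: str,
                   secret: bytes, counter: int, trusted_phrase: str) -> bool:
    """Hypothetical layered check: BOTH the one-time code and the
    pre-agreed trusted phrase must match before a request is honored."""
    code_ok = hmac.compare_digest(hotp(secret, counter), otp)
    phrase_ok = hmac.compare_digest(spoken_phrase.strip().lower(),
                                    trusted_phrase.lower())
    return code_ok and phrase_ok
```

The point of the sketch is the `and`: a deepfaked voice that passes one factor still fails overall unless every independent layer checks out.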

Include deepfakes in incident response planning

Finally, businesses should update their incident response plans to include scenarios involving deepfakes, ensuring that there are clear protocols for verifying the authenticity of suspicious communications and for responding to potential threats.
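One way to make such protocols hard to skip is to encode the verification steps as an explicit checklist that must fully pass before a sensitive request is actioned. This is a hypothetical sketch; the check names and the `may_proceed` helper are illustrative, not an established standard:

```python
# Hypothetical verification checklist for a suspicious request
# (e.g. an urgent payment instruction received by video call).
CHECKS = [
    "confirmed request via a second, independent channel (known phone number)",
    "verified the trusted phrase agreed with the requester",
    "confirmed the request follows the normal approval workflow",
]

def may_proceed(results: dict) -> bool:
    """Only proceed when every verification check has explicitly passed;
    a missing or failed check blocks the request by default."""
    return all(results.get(check, False) for check in CHECKS)
```

Defaulting a missing check to `False` mirrors the fail-closed posture an incident response plan should take with unverified communications.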

The sophistication and prevalence of deepfakes are expected to reach new heights this year: more than 95,000 deepfakes were circulating online in 2023, up 550% since 2019. As AI and deepfake technology continue to evolve and become more accessible to malicious groups, having robust measures in place will help your business take proactive steps to safeguard against these threats.

This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Tom Kidwell, Co-Founder of Ecliptic Dynamics.
