We can’t fight AI with AI
AI and social engineering attacks unite
The biggest cybersecurity threat facing people and enterprises today isn’t AI itself, but social engineering and other forms of identity-based attacks. Artificial intelligence, with its ability to automate attacks and enable deepfake impersonations, will accelerate these threat vectors by substantially lowering the cost and effort required to launch them. And while ‘fight AI with AI’ makes for great headlines, it is not a long-term strategy for containing these threats.
So, what do we do then in the face of the rapidly accelerating AI technology landscape? The answer, surprisingly, is the same as it ever was. AI should be governed by the same zero trust access, identity management, and policy strategies that enforce access control for human and non-human identities.
AI is the easy button for social engineering attacks
Imagine a scenario where your employer has announced an update to the payroll system in an all-staff meeting. Your ‘boss’ calls and says, ‘By the way, could you give me your password for something real quick?’
You just walked out of the meeting, and it’s your boss, so you’ve got no immediate reason to distrust the interaction. Except, as it turns out, it’s not your boss but a deepfake; the hacker found out about the meeting by monitoring social media. This type of interaction used to be expensive in terms of time and human labor, but with generative AI that cost effectively drops to zero. A kid in Nebraska could launch hundreds of these attacks a day.
Sadly, this isn’t a hypothetical. Generative AI is adding new dimensions to the risk of these social engineering exploits. New tools like WormGPT – the ‘hackbot-as-a-service’ – are being used to design more convincing phishing campaigns or deepfake impersonations, while lowering the time and cost to launch cyberattacks using these methods.
The acceleration of attack frequency by multiple orders of magnitude is indeed concerning. The mistake some make here, though, is assuming the root cause of a successful AI-powered breach is different from that of an ordinary social engineering attack.
Identity fragmentation is your enemy
Most successful breaches result from a bad actor targeting some form of privilege, or ‘secrets’: credentials such as passwords, browser cookies, or API keys.
We know this from a recent Verizon report, which shows that 68% of cyberattacks involve the human element and that credentials appear in 86% of breaches involving web-based applications and platforms. Another report, from Unit 42, shows that 83% of organizations have hard-coded credentials in their codebases.
Regardless of how much bad actors take advantage of AI for phishing campaigns, social engineering will never go away. Someone will always find a way to fool us into handing over our credentials (we humans are just that unreliable). Our security goal in the age of AI innovation therefore has to be to get rid of secrets. Take a good look at modern IT infrastructure and it quickly becomes obvious how dangerous secrets have become: they’re scattered across the many disparate layers that make up enterprise technology stacks, from Kubernetes and servers to cloud APIs, IoT devices, specialized dashboards, and databases.
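To make the scale of the problem concrete, here is a minimal sketch of what hunting for scattered secrets can look like. It is a toy scanner with a few illustrative patterns, not a production detector; real tools add entropy checks, many more patterns, and allow-lists:

```python
# Minimal sketch: scan a repository for hard-coded credentials.
# The regex patterns below are illustrative assumptions, not an
# exhaustive or production-grade detector.
import re
from pathlib import Path

# A few common credential shapes: AWS-style keys, generic API keys,
# and plain password assignments.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]+['\"]"),
}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, pattern name) for every suspected secret."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file: skip rather than crash
        for lineno, line in enumerate(lines, start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for file, lineno, kind in scan_repo("."):
        print(f"{file}:{lineno}: possible {kind}")
```

Point even a toy scanner like this at a sizeable codebase and the findings tend to pile up quickly, which is exactly the fragmentation problem described next.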
These all manage security in different ways, which has led to a severe fragmentation of identity – that is to say, identity silos. These silos inevitably create more doors for bad actors to enter, and as more organizations experiment with adding AI agents to their workloads, AI technology becomes yet another silo for hackers to exploit.
Don’t make AI a security silo
The primary issue with introducing AI agents into workloads is that a data leak can leave you scrambling to answer three questions: 1) what data was the AI agent trained on, 2) what data did the AI agent have access to, and 3) which employees had access to the AI agent?
All of these are critical questions. The ease of answering them comes down to how your company governs data. Your AI agent shouldn’t be treated as a separate technology and security silo. It should be governed by the same rules and policies as everything else within your enterprise environment. In practice, this means you have to consolidate identities for your AI agents and all other enterprise resources – e.g. your laptops, servers, databases, microservices, etc. – into one inventory that provides a single source of truth for identity and access relationships.
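As a rough illustration of that ‘single source of truth’ idea, here is a minimal Python sketch in which humans, services, and AI agents are all entries in one inventory governed by one policy check. All the names, roles, and resources are hypothetical:

```python
# Minimal sketch: one inventory for human, service, and AI-agent
# identities, with a single policy check. All entries are hypothetical.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Identity:
    name: str
    kind: str            # "human", "service", or "ai_agent": same model for all
    roles: frozenset = field(default_factory=frozenset)

# One inventory: the single source of truth for identity relationships.
INVENTORY = [
    Identity("alice", "human", frozenset({"developer"})),
    Identity("payroll-api", "service", frozenset({"payroll-read"})),
    Identity("support-bot", "ai_agent", frozenset({"tickets-read"})),
]

# Role-to-resource grants live in one place, so access questions
# become lookups instead of forensic work across silos.
GRANTS = {
    "developer": {"staging-db"},
    "payroll-read": {"payroll-db"},
    "tickets-read": {"ticket-archive"},
}

def can_access(identity: Identity, resource: str) -> bool:
    """Single policy check applied uniformly to every identity kind."""
    return any(resource in GRANTS.get(role, set()) for role in identity.roles)

def who_can_access(resource: str) -> list[str]:
    """Audit helper: answers 'who had access?' from the same inventory."""
    return [i.name for i in INVENTORY if can_access(i, resource)]

print(who_can_access("ticket-archive"))   # ['support-bot']
```

Because everything routes through one inventory, audit questions like the three above reduce to simple lookups.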
The best next step enterprises can take to shield themselves from AI-led social engineering attacks is to ensure that employee identities never exist as static digital secrets that can be copied, phished, or stolen. The goal should always be to materially reduce the attack surface that threat actors can target with social engineering strategies. That means securing identities cryptographically, basing them on physical-world attributes that cannot be stolen, such as biometric authentication, and enforcing access through ephemeral privileges that are granted only for the period of time the work takes to complete.
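To show what ‘ephemeral privileges’ might look like in practice, the sketch below (standard-library Python, with a simplified token format invented for illustration) issues a signed grant that simply stops verifying once its time-to-live elapses, so there is no standing credential left to steal:

```python
# Minimal sketch: ephemeral, time-boxed access grants instead of
# static passwords. Standard library only; the key handling and token
# format are simplified for illustration, not a production design.
import base64, hashlib, hmac, json, secrets, time

SIGNING_KEY = secrets.token_bytes(32)  # in practice: a managed, rotated key

def issue_grant(identity: str, resource: str, ttl_seconds: int = 3600) -> str:
    """Issue a signed grant that expires after ttl_seconds."""
    payload = json.dumps({
        "sub": identity,
        "resource": resource,
        "exp": int(time.time()) + ttl_seconds,  # the privilege evaporates here
    }).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_grant(token: str, resource: str) -> bool:
    """Accept the grant only if the signature holds and it hasn't expired."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return False
    claims = json.loads(payload)
    return claims["resource"] == resource and time.time() < claims["exp"]

token = issue_grant("alice", "prod-db", ttl_seconds=3600)
print(verify_grant(token, "prod-db"))   # True, until the hour is up
```

In real deployments this role is usually played by short-lived certificates issued by an identity provider, but the property that matters is the same: the credential expires on its own, with no revocation step for anyone to forget.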
You can think of cryptographic identity for employees as resting on three criteria: 1) the machine identity of the device the employee is using, 2) the employee’s biometric marker, and 3) a personal identification number (PIN). This isn’t some new concept – it’s the core security model the iPhone operates on, where the biometric marker is facial recognition, the personal identification number is your PIN code, and a dedicated secure hardware module inside the phone (Apple’s Secure Enclave, which plays the role a Trusted Platform Module, or TPM, plays on other devices) governs ‘machine identity.’
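Here is a minimal sketch of that three-point check. The device and biometric factors are stub flags standing in for what real platform APIs would verify (TPM or Secure Enclave attestation, an OS biometric prompt); only the PIN check is performed for real:

```python
# Minimal sketch of the three-point criteria above: device identity,
# biometric, and PIN must all pass before access is granted. The first
# two factors are stubs for real hardware/OS verification.
from dataclasses import dataclass
import hashlib, hmac

@dataclass
class LoginAttempt:
    device_attestation_ok: bool   # hardware-backed key proved device identity
    biometric_ok: bool            # platform authenticator verified the user
    pin: str

# Illustrative stored-PIN hash; real systems keep this in secure hardware.
PIN_SALT = b"illustrative-salt"
STORED_PIN_HASH = hashlib.pbkdf2_hmac("sha256", b"482916", PIN_SALT, 100_000)

def verify_pin(pin: str) -> bool:
    """Constant-time comparison against the salted, stretched PIN hash."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), PIN_SALT, 100_000)
    return hmac.compare_digest(candidate, STORED_PIN_HASH)

def authenticate(attempt: LoginAttempt) -> bool:
    """All three factors must hold; no single stolen secret is enough."""
    return (attempt.device_attestation_ok
            and attempt.biometric_ok
            and verify_pin(attempt.pin))

print(authenticate(LoginAttempt(True, True, "482916")))   # True
print(authenticate(LoginAttempt(False, True, "482916")))  # False: wrong device
```

The point of requiring all three is that no single stolen artifact (the PIN, the device, or a spoofed biometric) is sufficient on its own, which is precisely what defeats the ‘just tell me your password’ attack.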
If you’re still not quite sold on consolidating identities, there’s more to it than just the cybersecurity benefits. Yes, it shrinks the attack surface, but consolidating identities (including those of AI agents) also massively streamlines how a company provisions its resources. From a workforce point of view, that boosts productivity, and it’s exactly the sort of thing many teams need to reduce the friction so often felt between security and adopting new technology.
None of this is to say AI lacks any utility for threat prevention and remediation. Will AI be useful in analyzing threat activity and spotting anomalies in an organization’s system? Absolutely, but it’s not going to fix the fact that humans are ultimately fallible. We leave secrets around. We share passwords freely. We forget our laptops at the train station. As AI supercharges the volume of social engineering attacks, ‘fight AI with AI’ doesn’t quite cut it as a strategy.
The factor deciding the success of AI-led social engineering attacks will be the same as it ever was: not elaborate viruses or software vulnerabilities, but human error. Human behaviour exposing infrastructure to data leaks is what we need to learn to defend against. If we can do that, then social engineering attacks – AI-powered or otherwise – will be prevented from wreaking the havoc they have been causing of late.
Ev Kontsevoy, CEO, Teleport.