Glitching videos. Spelling mistakes. No blinking. Robotic audio. Unnatural tone.
Historical hallmarks of a deepfake cyberattack may seem apparent to the well-informed individual, but if recent news events have taught us anything, it’s that humans can no longer be relied upon to correctly spot AI-generated content such as deepfakes.
However, many online security frameworks still rely on human intervention as a crucial defense mechanism against attacks. For example, employees are expected to spot phishing emails and scams after completing corporate cybersecurity training, and remote identity verification often relies on a manual check of uploaded imagery or a person-to-person video call.
The reality today is that humans can no longer reliably detect generative AI content and should no longer remain a central defense mechanism. A new approach is urgently needed.
The threat landscape is changing in two ways
AI-powered fraud and cyberattacks have dominated headlines recently. One notable example was global engineering firm Arup, which fell victim to a £20m deepfake scam after a finance employee was duped into transferring funds to criminals following a series of hoax video calls featuring AI-generated imitations of senior officials.
On the incident, Arup’s Global CIO, Rob Greig, said: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”
Here, Greig underscores the two biggest changes AI is driving in the threat landscape today: attacks are rising in volume and sophistication. Generative AI tools that can create video, audio, and messages are now widely available and are accelerating the speed and scale at which attacks can be launched. What’s more, the technology has become so sophisticated that humans cannot be reasonably expected to detect AI-powered attacks.
Other organizations are starting to worry too; a recent iProov survey of technology decision-makers found that 70% believe AI-generated attacks will significantly impact their organizations, while nearly two-thirds (62%) worry that their organization isn’t taking the threat seriously enough.
There are several ways AI is transforming traditional attacks.
Phishing gets a power-up
Despite widespread awareness of social engineering techniques, the method remains highly effective in cyberattacks. Verizon’s 2023 Data Breach Investigations Report revealed that phishing was involved in 36% of breaches in 2022, making it the most common attack type.
It’s commonplace for organizations to train employees to spot phishing attacks by looking for typos, grammatical errors, or awkward formatting. But with AI able to quickly create, polish, and scale personalized phishing messages, that training has become redundant.
Tools like WormGPT, a malicious cousin of ChatGPT, are enabling bad actors to create convincing, personalized phishing messages quickly without any errors and in any language.
AI is also making spear-phishing (highly targeted social engineering attacks) even more impactful and scalable. A traditional social engineering attack becomes far more convincing when coupled with a deepfake phone call or voice note purporting to come from a relative or colleague, as the Arup incident showed.
Because creating convincing content no longer requires a high technical skillset, the pool of potential attackers has expanded dramatically. The barrier to entry is also far lower than before, with generative AI tools now easily accessible through Crime-as-a-Service marketplaces.
Onboarding becomes a top target
Remote onboarding, the point where a user first sets up access to a system or service and verifies their identity, is a high-risk point in the user journey for any organization, and another area AI-powered attacks are targeting. Granting a criminal access to an organization’s systems or accounts can cause significant, uncontrolled damage that quickly spirals. Consider how easily a criminal could borrow money fraudulently, steal identities, or weaponize company data once given an account or access.
KnowBe4, a US cybersecurity company, recently shared details of an attack it faced that illustrates this risk all too well. The company unwittingly hired a North Korean hacker who used AI and a stolen ID to deceive its hiring team and its identity verification process. Once on board, the imposter almost immediately tried to upload malware before being detected.
Verifying identities remotely is far more commonplace in today’s global, digital age. Whether it’s hiring a new employee, as in the KnowBe4 example, opening a bank account, or accessing government services, people are far more accustomed to verifying their identity remotely. However, traditional methods like video calls with human operators are clearly no longer able to defend against deepfake imposters. As KnowBe4’s CEO said: “If it can happen to us, it can happen to almost anyone. Don’t let it happen to you.”
So, how can organizations stop it from happening to them?
Fighting fire with fire
No organization can ignore the emerging AI threats – the KnowBe4 and Arup examples should ring alarm bells for any enterprise. They also underscore how vulnerable humans are as a defense mechanism. Employees can’t be expected to spot every cleverly disguised phishing email, nor can human operators flawlessly manage remote identity verification. Bad actors knowingly exploit these human vulnerabilities.
Our recent deepfake detection study found that only 0.1% of 2,000 participants could accurately distinguish real from fake content, and despite this poor performance, over 60% of people were confident in the accuracy of their detection skills. However confident someone may feel scrutinizing a deepfake photo or email message during a cyber training session, the chances of detecting one in real life, when a notification arrives during a busy workday or while out running errands, are considerably lower.
The good news is that AI is both sword and shield. And thankfully, tech leaders are recognizing the power of technology as the solution, with 75% turning to facial biometric systems as a primary defense against deepfakes and the majority acknowledging the crucial role of AI in defending against these attacks.
Biometric verification systems are transforming remote online identity verification, enabling organizations to verify that the user at the other end of the screen is not only the right person, but a real person. Liveness assurance, as this is called, prevents attackers from using stolen or shared copies of victims’ faces, or forged synthetic imagery.
Organizations must be aware that what truly differentiates biometric systems is the quality of this liveness assurance: not all liveness assurance systems are created equal. While many solutions claim to offer robust security, organizations need to dig deeper and ask critical questions.
Does the solution offer liveness assurance that continuously adapts to evolving threats like deepfakes through AI-powered learning? Does it create a unique challenge-response mechanism to make every authentication unique? Does the provider have a dedicated security operations center that provides proactive threat hunting and incident response to keep pace with emerging threats and ensure that defenses remain robust?
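To make the challenge-response idea concrete, the sketch below shows, in simplified Python, how a one-time challenge keeps every authentication attempt unique. It is a toy illustration under assumed mechanics (a randomized screen-color sequence reflected off the user’s face), not a description of iProov’s or any vendor’s actual system; all function names are hypothetical.

```python
import secrets

# Toy challenge-response liveness check (hypothetical, illustration only).
# A real system would analyze video of the user's face; here, lists of
# color names stand in for the illumination sequence a server would
# recover from that video.

PALETTE = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def issue_challenge(num_flashes: int = 6) -> list[str]:
    """Server side: generate an unpredictable, one-time screen-color
    sequence. Because it is random per session, a pre-recorded or
    synthetic video cannot contain the correct reflections."""
    return [secrets.choice(PALETTE) for _ in range(num_flashes)]

def record_response(challenge: list[str]) -> list[str]:
    """Client-side stand-in: the device screen flashes the colors while
    the camera films the user; a live face reflects the real sequence."""
    return list(challenge)

def verify(challenge: list[str], observed: list[str]) -> bool:
    """Server side: accept only if the observed reflections match this
    session's challenge, making every authentication attempt unique."""
    return observed == challenge

if __name__ == "__main__":
    challenge = issue_challenge()
    print("live user passes:", verify(challenge, record_response(challenge)))

    # A replayed response recorded against an earlier session's challenge
    # fails, because it reflects the old sequence, not this one.
    old_challenge = issue_challenge()
    while old_challenge == challenge:  # rule out the rare collision
        old_challenge = issue_challenge()
    print("replayed video passes:", verify(challenge, record_response(old_challenge)))
```

The key property is the session-unique challenge: because the expected response cannot be known in advance, stolen imagery or a pre-generated deepfake is stale on arrival.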
Implementing these more advanced solutions is critical to staying ahead of ever-evolving attacks, reducing the burden on individuals, and ultimately bolstering organizational security in the AI-driven threat landscape.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro