Fighting AI with AI, finance firms prevented $5 million in fraud - but at what cost?


When most people think of AI, the first thing that probably comes to mind isn't superintelligence or the promise of agents to boost productivity, but scams. 

There've always been fraudsters among us: that small percentage of the population who'll use any means available to swindle others out of their money. The proliferation of advanced, easily accessible generative AI tools in recent years has made such nefarious activity dramatically easier.

Also: Meet ChatGPT agent, a new AI assistant ready to carry out complex tasks for you - try it now

In one memorable incident from early last year, a finance employee at a Hong Kong-based firm wired $25 million to fraudsters after being instructed to do so on a video call with people they believed were company executives but who were in fact AI-generated deepfakes. And earlier this month, an unknown party used AI to imitate the voice of US Secretary of State Marco Rubio on calls to a handful of government officials, including a member of Congress.

And yet, counterintuitively, AI is also being deployed by financial services companies to prevent fraud.

In a recent survey conducted by Mastercard and Financial Times Longitude (a marketing agency and subsidiary of Financial Times Group), 42% of issuers and 26% of acquirers said that AI tools had helped them prevent more than $5 million in attempted fraud over the past two years.

In the financial sector, an issuer is a firm that provides debit or credit cards (think Chase or another major bank), while acquirers are those that accept payments (think Stripe and Square).

Also: Anthropic's Claude dives into financial analysis. Here's what's new

Many of these organizations have begun using AI tools to enhance their digital security in conjunction with more traditional methods, like two-factor authentication and end-to-end encryption, according to a report of the survey findings published last month.

Survey respondents reported using a variety of AI-powered techniques to bolster their cybersecurity and protect against fraud. The most commonly cited was anomaly detection -- automated systems that flag transactions or requests deviating from an account's normal patterns. Other use cases included scanning for vulnerabilities in cybersecurity systems, predictive threat modeling, "ethical hacking" (simulated attacks that probe for exploitable weaknesses), and employee upskilling.
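To make that concrete, here is a minimal sketch of what the simplest form of anomaly detection can look like: a z-score check that compares each new transaction against an account's history and flags sharp deviations for review. The function name, threshold, and sample data are illustrative assumptions for this article, not details from the survey, and real systems use far richer signals than transaction amounts alone.

```python
from statistics import mean, stdev

def flag_anomaly(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from past behavior.

    A basic z-score check: amounts more than `threshold` standard
    deviations from the historical mean are flagged for human review.
    (Hypothetical example; production systems combine many signals.)
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in history: flag anything that differs at all.
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > threshold

# Example: a $9,400 wire against a history of small card purchases
history = [42.10, 18.75, 63.00, 27.40, 55.25, 31.90]
print(flag_anomaly(history, 9400.00))  # True -> route to fraud review
```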

The vast majority of respondents (83%) also said that AI has "significantly reduced the time needed for fraud investigation and resolution," while also reducing customer churn. Even more (90%) agreed that unless their use of AI for fraud prevention increases in the coming years, their "financial losses will likely increase."

Also: Researchers from OpenAI, Anthropic, Meta, and Google issue joint AI safety warning - here's why

Several barriers, however, are preventing the financial services companies surveyed from adopting fraud-prevention AI tools at scale. Chief among these is the technical complexity of integrating new AI systems with the software and data already deployed within an organization. Close behind are concerns about the rapid pace at which fraud tactics themselves are evolving, which many fear will quickly outpace any AI-powered prevention effort.

