Fraud in the Age of AI: Faster, Smarter, and Impossible to Ignore
“Is fraud really worse today than five years ago?” That was one of the very first questions I asked during our session at Digital Finance Day, and it set the tone for a discussion that quickly revealed both the scale and the speed of change in the fraud landscape.
Sandy Lavorel (Vyntra) speaks at Digital Finance Day
To unpack these developments, four fraud experts shared their insights, and one thing became unmistakably clear during the session:
Fraud has entered a new era, one defined by industrialised crime networks, behavioural manipulation, and AI-driven attacks.
How has the threat landscape changed?
Right from the start, the experts agreed: fraud is rising everywhere. Reported cases are up, attacks are better coordinated, and social engineering remains the most effective entry point. What stood out to me was how strongly the panel emphasised the industrialisation of fraud. As one participant put it: “What used to be lone-wolf scammers is now industrialised.”
Entire cross-border syndicates now specialise in logistics, recruitment, crypto-laundering, and even the technical infrastructure behind scams. And AI accelerates this shift, enabling scammers to produce convincing messages, target victims with personalised phishing, create deepfakes, and scale operations at almost no cost.
So what are banks doing to defend customers?
The discussion highlighted that banks approach the challenge from multiple angles. One representative described it as a triad of prevention, detection, and response, supported by strong authentication and clear customer communication.
But the emphasis was on keeping things high-level: “Our systems continuously evolve, but we don’t publicly detail operational layers. What matters is that we invest heavily in detection models, customer education, and strong callback and verification procedures.”
Another expert added that collaboration across the industry is becoming essential, from joint fraud-scoring models to neutral, nationwide awareness campaigns.
And what role does AI play on the defence side?
Interestingly, AI is both the problem and the solution. On the defensive side, AI now helps detect emotional manipulation in payment messages, flags unusual behaviour patterns, identifies deepfakes, and automates investigation processes.
One expert phrased it in a way that stuck with me: “It’s never AI or humans. The future is hybrid.”
More intelligent systems, better-trained employees and more informed customers together form the emerging defence model.
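To make “flags unusual behaviour patterns” a little more concrete, here is a minimal, purely illustrative sketch in Python. It assumes a hypothetical per-customer history of payment amounts and applies a simple z-score check; the panellists did not describe their actual models, which combine far more signals than amount alone.

```python
from statistics import mean, stdev

# Purely illustrative: a hypothetical per-customer history of payment amounts (EUR).
# Real bank detection models combine many more signals (device, payee, timing, message text).
history = [45.0, 60.0, 52.5, 48.0, 70.0, 55.0, 65.0, 50.0]

def is_unusual(amount: float, past_amounts: list[float], threshold: float = 3.0) -> bool:
    """Flag a payment whose amount deviates strongly from the customer's past behaviour."""
    if len(past_amounts) < 5:
        return False  # not enough history to judge
    mu = mean(past_amounts)
    sigma = stdev(past_amounts)
    if sigma == 0:
        return amount != mu
    z = abs(amount - mu) / sigma
    return z > threshold

print(is_unusual(58.0, history))    # False: close to normal spending
print(is_unusual(2500.0, history))  # True: far outside the usual range
```

In practice, a flag like this would not block a payment on its own; it would feed into the broader triad of prevention, detection, and response described above, alongside human review.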
Who is actually responsible when things go wrong?
This was one of the most debated audience questions. The panel agreed that responsibility is shared: scammers exploit psychology, platforms enable reach, and customers often underestimate risks. Banks can and do build barriers, but these barriers are not impenetrable. As one expert noted: “We have to stop pretending anyone is immune. Emotion overrides logic. That’s what scammers target.”
Looking ahead
The session closed on a sobering yet pragmatic note: AI will make fraud more sophisticated, but it also offers unprecedented defensive capabilities. The race isn't about eliminating fraud entirely; it's about staying faster, smarter, and more coordinated than the criminals.
As one participant summarised:
“Our strongest defence is awareness on every layer: systems, employees, and customers.”