AI arms race: Banks and fraudsters battle for the upper hand

Deepfakes, spear phishing, FraudGPT: AI is accelerating the world of financial fraud at a dizzying pace. Can banks use AI to fight AI? And do they have the know-how to implement the solutions?

One of the stranger moments in the 2024 presidential primary process took place this week: Voters in New Hampshire received a “robocall” featuring President Joe Biden’s voice, discouraging them from voting in the primary and echoing Biden’s pet phrase, “What a bunch of malarkey.”

But Biden didn’t record the call; it appears to be a “deepfake” generated by artificial intelligence (AI). Authorities are investigating the call as a potentially illegal attempt to suppress votes.

It’s not clear how many voters received the call or were genuinely deceived by it, but the episode illustrates how sophisticated the world of AI fraud has become—and banks are an increasingly popular target. Last summer, The New York Times published a story featuring instances in which customers’ voices were synthesized in an attempt to get bank employees to transfer money.

The phenomenon is so new that it’s hard to determine how widespread it is, and synthesizing voices is, of course, only one way AI can be applied to financial sector fraud. Most experts believe AI-enabled schemes still account for a small fraction of the $8.8 billion lost annually to financial fraud. But there is no doubt that the field is growing.
