Credit unions must share data to fight new AI fraud risks

In March, the Department of the Treasury issued a troubling report warning financial institutions that they are at risk from emerging AI fraud threats. The culprit is a failure to collaborate: lenders are not sharing “fraud data with each other to the extent that would be needed to train anti-fraud AI models.”

This report should be a wake-up call. As any fraud-fighting veteran knows, combating fraud is a perpetual arms race, and when new technologies like generative AI emerge, the status quo is disrupted. Right now, fraudsters are gaining the upper hand. According to a recent survey by the technology firm Sift, two-thirds of consumers have noticed an increase in scams since November 2022, when generative AI tools first reached the mass market.

How is AI changing the fraud landscape? According to the Treasury report, new AI technologies are “lowering the barrier to entry for attackers, increasing the sophistication and automation of attacks, and decreasing time-to-exploit.” The report adds that AI “can help existing threat actors develop and pilot more sophisticated malware, giving them complex attack capabilities previously available only to the most well-resourced actors. It can also help less-skilled threat actors to develop simple but effective attacks.”

The same generative AI technology that helps people create songs, draw pictures, and improve their software code is now being used by fraudsters. For example, they can purchase an AI chatbot called FraudGPT on the dark web to create phishing emails and phony landing pages. AI can also produce human-sounding text or images to support impersonation, and generate realistic bank statements with plausible transactions.
