You are exposed to Generative AI risks – whether you use it or not

$25 million is wired out by a bank employee who believes he is receiving instructions from his CFO on a video call. In reality, he was duped by Generative AI (GenAI) deepfake technology, and none of the bank’s security protocols detected the fraud. Here is the full story on CNN: https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/.

GenAI is the shiny new thing; it is constantly evolving and holds big promise for delivering efficiency. Some of the most cited GenAI tools are:

  1. ChatGPT – Free access to basic AI content generation
  2. GitHub Copilot – An AI code-completion tool
  3. DALL-E 2 – Converts text prompts into images

IDC (International Data Corp.) analysts predict that financial services companies will spend $11 billion on AI tools to prevent fraud, streamline underwriting, and improve the overall customer experience. In case you are not fully versed in this new branch of technology, GenAI is AI’s next level of evolution. While traditional AI uses data and sophisticated algorithms to predict what is to come, GenAI uses data to create new text, images, audio, video and other data formats. It can create imaginative graphics like Figure 1 – I personally like the original self-portrait of Van Gogh:

But it can also create images like Figure 2:

And it is the latter image that should make us pause … If you cannot guess why, that is okay – I will explain in a bit.

The point is that with everything good also comes the bad. In the case of GenAI, the bad is that financial institutions (FIs) are now exposed to confidential data leaking onto the worldwide web, to deepfakes and to hallucinations. Worst of all, an FI’s exposure to risk is not based on participation – the GenAI risks do not depend on us actively or consciously using the applications or platforms. The sheer existence of GenAI presents risks; your employees having access to the internet at work exposes an FI to risk. The more we understand about the various risk types, the better we can hope to mitigate them. At ProcessArc, we think of GenAI risks on three levels, as shown in Figure 3:

 

Most credit unions and banks need to focus their attention on Outbound and Inbound Risks – those are imminent. Application Risk becomes relevant only if you are actively pursuing an AI strategy and implementing AI technology.

Outbound Risk is about data inadvertently leaving your institution, and here FIs have two main risk mitigation strategies:

  1. The easiest, yet highly effective, strategy is training your employees on what GenAI is and how to use it safely. Awareness and usage protocols are critical.
  2. The more systematic risk mitigation plan is to create usage security protocols that manage access to GenAI tools and audit their usage, and/or to develop an on-premises instance of GenAI (a minimal sketch of what such a usage control might look like follows this list).
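To make the second strategy concrete, below is a minimal sketch (in Python) of the kind of gateway-side check an FI might run before an employee’s prompt ever reaches a GenAI service: an allowlist of approved tools, a basic screen for confidential data patterns, and an audit entry for every decision. The domain names, patterns and example data are illustrative assumptions, not a ProcessArc standard or any vendor’s actual API.

    import re
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("genai_usage_audit")

    # Assumed policy: only an approved (e.g. on-premises) GenAI endpoint may receive prompts.
    APPROVED_GENAI_DOMAINS = {"genai.internal.example-fi.com"}

    # Illustrative patterns for data that should never leave the institution.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-like pattern
        re.compile(r"\b\d{12,19}\b"),           # card- or account-number-like digit runs
    ]

    def screen_outbound_prompt(user_id: str, target_domain: str, prompt: str) -> bool:
        """Return True if the prompt may be sent; write an audit entry either way."""
        now = datetime.now(timezone.utc).isoformat()
        if target_domain not in APPROVED_GENAI_DOMAINS:
            audit_log.warning("%s BLOCKED user=%s domain=%s reason=unapproved-tool", now, user_id, target_domain)
            return False
        if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
            audit_log.warning("%s BLOCKED user=%s domain=%s reason=possible-confidential-data", now, user_id, target_domain)
            return False
        audit_log.info("%s ALLOWED user=%s domain=%s prompt_chars=%d", now, user_id, target_domain, len(prompt))
        return True

    # An employee pasting an account number into an unapproved public tool is stopped and logged.
    screen_outbound_prompt("employee42", "chat.public-genai.example.com",
                           "Summarize activity on account 4111111111111111")

In practice this logic would live in a web proxy or data loss prevention layer rather than a standalone script, but the three elements – approval, screening and auditing – are the core of a usage security protocol.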

When it comes to Inbound Risk, our focus shifts to fraud detection and cybersecurity. Deepfakes play a big role here – this is the risk of being presented with an image or artifact that seems ‘real’ only for it to be fake. The caliber and quality of deepfakes are improving daily, and most observers cannot differentiate between an authentic and a fake image. This means it may be time to review your authentication and know-your-customer protocols: every image, every artifact, even the voices and video on conference calls can be faked.
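As one illustration of what such a review might produce, here is a short Python sketch of an out-of-band ‘callback’ rule: because voices and video can now be faked, a high-value wire request received over a remote channel is never executed on that channel alone, but only after verification through a contact method already on file. The threshold, channel names and data structure are assumptions made for illustration, not a regulatory requirement or the actual procedure of the bank in the CNN story.

    from dataclasses import dataclass

    CALLBACK_THRESHOLD_USD = 10_000                          # assumed policy threshold
    REMOTE_CHANNELS = {"video_call", "phone_call", "email"}

    @dataclass
    class WireRequest:
        requester: str
        amount_usd: float
        channel: str             # e.g. "video_call", "email", "in_branch"
        callback_verified: bool  # confirmed via a number already on file, not one supplied on the call

    def may_execute(request: WireRequest) -> bool:
        """Large requests arriving over remote channels require independent callback verification."""
        if request.channel in REMOTE_CHANNELS and request.amount_usd >= CALLBACK_THRESHOLD_USD:
            return request.callback_verified
        return True

    # The scenario from the CNN story: a convincing "CFO" on a video call, no independent callback.
    print(may_execute(WireRequest("CFO (video call)", 25_000_000, "video_call", callback_verified=False)))  # False

The point is not the code itself but the control it represents: no single channel, however convincing, should be enough on its own to move money.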

We cannot change how technology evolves, but by staying aware we can actively examine how these changes impact our business, members and risk profile.

If you want to learn more about our AI advisory services or GenAI Risk Mitigation employee training contact us at info@processarc.com or 414.232.3623.

Sheila Shaffie


Sheila is the co-founder of ProcessArc, a consulting and training company focused on client experience and transformation. Her company is a trusted partner of financial institutions globally, including CUNA Mutual ... Web: https://www.processarc.com