A touch of reality: Financial services cybersecurity

In the trenches with a veteran CTO/CIO

In this “Age of Covid,” financial services organizations need to find successful ways to harden security for a growing attack surface brought about, in part, by digitalization and the need for work-from-home accommodations. Given this growing challenge, security consultants and security product vendors are busy sharing their thoughts and their offerings with all of us. But as I read through the myriad recommendations and offerings for products and support, I’m often left wondering: how much of this is new, and how much of it is actually used, or usable?

So, at the risk of appearing cynical, I thought I would take one such article of seemingly reasonable recommendations and ask a veteran CTO/CIO for his thoughts on their relevance and applicability. Many of us need help separating the “theory” from the “practice” when asked to make decisions on plans and resources, and it can help to listen to those who have been “in the weeds” for a long time. For this, I turned to Jim Mazarakis, current COO of OnSystem Logic and a long-time CTO/CIO for several bank and investment management firms, including WSFS Bank, T. Rowe Price, J.P. Morgan, and more.

In the advice article I shared with Jim, the author wrote about “hardening security efforts” at institutions of all sizes. She listed a series of initial steps to take to “catch up” and improve one’s baseline security posture, but ended by recognizing the difficulty many FIs have in finding the resources to do even these tasks. That is why I went looking for Jim’s views on basic cybersecurity tasks such as these.

Jim’s initial comment: “all the suggestions made are valid but, honestly, although the vulnerabilities have been expanded with Covid, the vulnerabilities talked about here have never been well controlled, ever.”

Hardening Security Efforts (with Jim’s comments in italics)

Security awareness training – always and often. CUNA (Section 748, NCUA Rules and Regulations, NCUA Letter 02-CU-1), the GLBA Safeguards Rule (16 CFR 314.4), FDIC guidance, and PCI DSS Requirement 12.6 all require security awareness training, because we know that insiders are both the first line of defense and the weakest link against social engineering tactics like phishing and whaling. Like all cybersecurity, awareness is not just an annual box to tick, but an ongoing initiative. For most, security awareness training is consistently done but, unfortunately, this will continue to be the biggest risk because people get carried away in their emails and become careless. We had branch managers retrieving emails from their spam folders and responding to “customers” who ultimately scammed the bank. Human nature is what it is.

Increase vulnerability scanning frequency. Most vulnerability scanning is not done frequently enough, which limits security and IT teams’ understanding of their security posture and fails to help them prioritize remediation. We recommend going beyond the compliance-required quarterly cadence and scanning at least monthly, and on demand if your vulnerability management (VM) platform has that capability. Vulnerability scanning gives you a vulnerability score (typically in the millions) showing all the vulnerabilities you haven’t patched. However, most customers are months behind in applying all these patches. Patches need to be applied one at a time, and it typically takes 3-4 weeks to roll one out to all associates. Since there are dozens of these coming out every week, it takes a while and you’re always behind. The score tells you how badly off you are, but it doesn’t help you solve the problem.
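Jim’s “you’re always behind” point is, at bottom, simple arithmetic. A minimal sketch, using invented but plausible numbers (dozens of patches arriving weekly against a rollout pipeline that can only fully deploy so many per week):

```python
# Hypothetical numbers for illustration only.
new_patches_per_week = 30   # "dozens of these coming out every week"
rollout_capacity = 25       # patches a team can fully deploy per week

backlog = 0
for week in range(12):      # simulate one quarter
    backlog += new_patches_per_week
    backlog -= min(backlog, rollout_capacity)

print(backlog)  # 60 patches still waiting after 12 weeks
```

Any week in which arrivals exceed rollout capacity adds to the backlog permanently, which is why the scanner’s score keeps climbing no matter how hard the team works.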

Do your due diligence on third-party cloud vendors connecting to your infrastructure: maintain full visibility into entrance and exit points; investigate their APIs; ask for their SOC 2 report (or, in the case of PCI DSS, their Report on Compliance or Attestation of Compliance); and require third parties who will access your infrastructure to take security awareness training, communicating that third-party employees are held to the same expectations as internal employees. Absolutely a good suggestion, but these infrastructures are huge and constantly changing. Also, these cloud infrastructures are typically much better protected than anything their customers do to protect their own endpoints; hence, the endpoint is the most vulnerable vector (a key reason why we focused on it at OnSystem Logic).

Rely on network-based (agentless) scanning and supplement with agent-based scanning to ensure all network-connected assets are scanned and secured. Agentless scanning provides a much lighter footprint and less negative performance impact, and it drastically reduces false positives. However, security and IT teams have less oversight and control over remote endpoints, which places those endpoints at greater risk as they connect to corporate networks. Installing agents on laptops, mobile devices, or even applications hosted in the cloud can fill those gaps, providing a comprehensive real-time view of at-risk systems and ensuring they have the right patches, security controls, software, etc., to keep them from being compromised and spreading infection to corporate networks. Sure, this will help, and it’s needed. But remind people that scanning doesn’t protect anything. You still need to ensure you have the right protections in place. Scanning is a task; protection is an outcome.
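The coverage gap the two scanning methods close for each other can be made concrete with a simple reconciliation. A hypothetical sketch (the asset names are invented): diff the inventory an agentless network scan discovers against the set of endpoints checking in through an installed agent:

```python
# Hypothetical inventories; asset names are invented for illustration.
network_scan = {"fs01", "db02", "laptop-17", "laptop-22"}   # agentless discovery
agent_reports = {"laptop-17", "laptop-22", "laptop-90"}     # agent check-ins

# Remote endpoints the network scan never saw: covered only by agents.
agent_only = agent_reports - network_scan
# On-network assets with no agent installed: candidates for agent rollout.
unmanaged = network_scan - agent_reports
# The union is what "all network-connected assets" should actually mean.
full_inventory = network_scan | agent_reports

print(sorted(unmanaged))   # ['db02', 'fs01']
print(sorted(agent_only))  # ['laptop-90']
```

Neither list alone is the asset inventory; it is the gaps each side exposes in the other that make the combined approach worth the effort.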

Use the advanced features in your VM platform, including its threat intelligence and machine learning functionality, so you can focus first on weaponized vulnerabilities that exist inside your network. Yes, a good idea. Most next-gen antivirus products say that they do this; I’m not confident they do all they claim, however. We need much more in the way of visibility and tools to combat the vulnerabilities in our networks, including run-time tools to ensure that application software is running only trusted operations.
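As an illustration of what “weaponized first” triage means, the sketch below sorts scanner findings so that vulnerabilities with a known exploit outrank everything else, then falls back to severity and spread. The CVE identifiers, scores, and host counts are invented, and real VM platforms apply far richer threat intelligence than this:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float            # base severity score, 0.0-10.0
    known_exploited: bool  # e.g., exploitation observed in the wild
    hosts: int             # how many assets report the flaw

findings = [
    Finding("CVE-2021-0001", 9.8, False, 12),
    Finding("CVE-2021-0002", 7.5, True, 40),
    Finding("CVE-2021-0003", 5.3, False, 300),
]

# Weaponized vulnerabilities first, then by severity, then by spread.
queue = sorted(findings,
               key=lambda f: (f.known_exploited, f.cvss, f.hosts),
               reverse=True)

print([f.cve for f in queue])
# ['CVE-2021-0002', 'CVE-2021-0001', 'CVE-2021-0003']
```

Note that the actively exploited 7.5 outranks the unexploited 9.8, which is exactly the reordering a raw severity score never gives you.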

Consider a “zero trust” approach: provide only application-level access to cloud and premises-based solutions and applications, and lock them down to only the access the user needs. Yes. Absolutely a best practice. But most shops don’t do this well at all, if they even try. There is real value in a “zero trust” architecture that limits access to what users need, but it’s not enough. This doesn’t stop events like “SolarWinds.” We must also treat application software with the same zero trust attitude. Unfortunately, the software vendor space isn’t doing that. Vendors choose to rely on detection of known threats, constant monitoring (in hopes of catching threats before catastrophic damage), and mitigation (where there is lots of money to scoop up). As I wrote earlier, we need to seek out run-time tools and solutions that protect software from malicious code hiding within it.
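At its core, “only the access the user needs” is a deny-by-default entitlement check. A minimal sketch, with invented users and application names; real zero-trust deployments layer this behind identity verification, device posture checks, and per-request policy evaluation:

```python
# Deny-by-default: nothing is reachable unless explicitly granted.
# Users, applications, and permissions here are invented examples.
POLICY = {
    "jdoe":   {"teller-app": {"read"}, "crm": {"read", "write"}},
    "asmith": {"crm": {"read"}},
}

def is_allowed(user: str, app: str, action: str) -> bool:
    # Unknown users, unknown apps, and ungranted actions all fall
    # through to the same answer: denied.
    return action in POLICY.get(user, {}).get(app, set())

print(is_allowed("jdoe", "crm", "write"))    # True
print(is_allowed("asmith", "crm", "write"))  # False
print(is_allowed("ghost", "crm", "read"))    # False
```

The design choice that matters is the default: absence of a grant is a denial, never an open door, and that same stance is what Jim argues should extend to the application software itself.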

Given all the advice we receive from consultants and vendors, I hope these observations from Jim Mazarakis help to remind us of both the size and difficulty of the task and of our need to “keep it real.”

Greg Crandell

Greg Crandell provides strategy, market planning, business development, and management consulting to financial technology firms and their clients – Credit Unions and Banks. For more years than he wishes to admit, ... Web: queryconsultinggroup.com