Pendulums tend to swing, and the Fair Lending pendulum is no exception. Over the last few years, with the transition of HMDA rulemaking to the Consumer Financial Protection Bureau, and with the CFPB’s new final rule expanding HMDA data collection requirements, it’s clear that the pendulum has been swinging toward a tougher regulatory environment. With the recent change in administration, however, it appears that the pendulum is going to be heading back the other way. Precisely when and how far it will swing remains to be seen, but if you’ve been in the industry long enough, one thing seems certain: the Fair Lending pendulum doesn’t sit still for very long.
That said, certain requirements remain in place, and until they are revoked or changed, the prudent course of action is to stay in compliance rather than rely on “what might be.”
The Fair Housing Act (FHA) and its younger sibling, the Equal Credit Opportunity Act (ECOA), have strong roots. The laws date back to 1968 and 1974, respectively, and were enacted during a time when civil unrest was at the forefront in this country. Fast-forward to today, and we still find a nation amid enough civil unrest that most would agree FHA and ECOA are here to stay in one form or another.
At the leading edge of today’s FHA/ECOA is data analytics. The Interagency Fair Lending Examination Guidelines released in 2009 centered on using analytics as part of an overall evaluation of an institution’s performance. Performance was rated relative to risk factors and forms of discrimination such as redlining, steering and other overt discrimination, disparate treatment, and disparate impact. Many of these aspects have evolved as institutions have further automated some or all of their loan decisioning and pricing systems in an effort to both expedite the application process and promote fair and equal treatment.
To help evaluate whether the automated decisioning and pricing systems are fair, many financial institutions have also implemented Fair Lending monitoring models. These crosscheck models are tasked with evaluating an organization’s decisioning and pricing relative to prohibited factors such as race, gender, religion, and age. Their aim is to determine whether FHA/ECOA “fairness” is in place or whether forces are at work that adversely impact decisioning and pricing.
But do the models used by the Fair Lending systems work, and have they been set up correctly? Assuming the answer to these two questions is yes has gotten many a Fair Lending examination off to a rough start when regulators do a little digging and find out otherwise.
Hence, central to regulatory guidance on Fair Lending data analytics is the expectation that lenders conduct validations to verify any implemented model’s accuracy and effectiveness and thereby reduce model risk. When an institution uses both a decisioning/pricing system and a monitoring system, it needs to evaluate both models. Regulatory agencies advise that organizations using any model “should be attentive to the possible adverse consequences of decisions based on models that are incorrect or misused,” and stress the need to “address those consequences through active model risk management.”
For institutions with Fair Lending systems aimed at evaluating the merits of a decisioning/pricing model, it is essential to complete model validation periodically. Even if you’re not using such a system but have a decisioning/pricing model or system in place, it is prudent to complete an analysis of the system’s use of its models.
The principle of validating a model for fair lending is similar to validating AML systems. The fair lending system, like the AML system, draws from data supplied from another source, such as the core system. Those data are then run through a series of algorithms and comparative data from a number of sources to provide an understanding of the data and to identify areas that may merit further investigation. In the world of BSA/AML, areas that need further investigation are often referred to as alerts. In the fair lending world, these alerts could be credit files identified for comparative file review.
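To make the alert concept concrete, the comparative file selection described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual logic: the field names (`app_id`, `group`, `credit_score`, `decision`) and the similarity tolerance are invented for the example.

```python
# Hypothetical sketch of comparative-file ("alert") selection: flag a denied
# application when a similarly qualified application from a different group
# was approved. Field names and the tolerance are illustrative assumptions.

def find_comparative_pairs(applications, score_tolerance=20):
    """Return (denied_id, approved_id) pairs that merit comparative review."""
    pairs = []
    approved = [a for a in applications if a["decision"] == "approved"]
    for denied in (a for a in applications if a["decision"] == "denied"):
        for ok in approved:
            if (ok["group"] != denied["group"]
                    and abs(ok["credit_score"] - denied["credit_score"]) <= score_tolerance):
                pairs.append((denied["app_id"], ok["app_id"]))
    return pairs

apps = [
    {"app_id": 1, "group": "A", "credit_score": 700, "decision": "approved"},
    {"app_id": 2, "group": "B", "credit_score": 705, "decision": "denied"},
    {"app_id": 3, "group": "B", "credit_score": 600, "decision": "denied"},
]
print(find_comparative_pairs(apps))  # only app 2 is similar enough to pair with app 1
```

In practice a real system would match on many more underwriting variables, but the principle is the same: alerts are the files pulled for human comparative review.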
All pricing/decisioning models, as well as Fair Lending models, rest on select analytical methods, principles, and algorithms grounded in regression analysis. Those who are familiar with such elements know they are subject to potential manipulation that can alter outcomes depending on settings and interpretations. The manipulation may be fully justifiable and supported, but there can also be anomalies that don’t appear rational and may require further explanation or revision. As such, periodic independent validation of the model is critical: it supplies statistical support for decisioning and pricing and forms the primary basis for comparative file selection. This can be crucial evidence supporting an institution’s lending fairness.
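The regression idea behind such analyses can be sketched minimally: regress the priced rate on a legitimate credit factor plus a prohibited-basis indicator, and check whether the indicator carries any explanatory weight after the legitimate factor is controlled for. Everything here is an illustrative assumption, including the simulated data, the single control variable, and the variable names; a real analysis would use far richer controls and formal significance testing.

```python
import numpy as np

# Minimal sketch of a disparity regression. Data are simulated so that
# pricing depends only on credit score; a fair model should therefore
# show a near-zero coefficient on the group indicator.
rng = np.random.default_rng(0)
n = 500
credit_score = rng.normal(700, 50, n)
group = rng.integers(0, 2, n)  # 1 = protected-class indicator (illustrative)
rate = 8.0 - 0.005 * credit_score + rng.normal(0, 0.1, n)

# Ordinary least squares: rate ~ intercept + credit_score + group
X = np.column_stack([np.ones(n), credit_score, group])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

# Rough standard errors from the residual variance
resid = rate - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_group = beta[2] / se[2]
print(f"group coefficient: {beta[2]:.4f}, t-stat: {t_group:.2f}")
```

A materially nonzero, statistically significant group coefficient is exactly the kind of anomaly that would demand further explanation or revision; the validator’s job is to confirm both the model’s settings and the interpretation of such results.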
There are two steps to completing validation. The first step is to review the output from the system’s model and compare it back to the source information based on the known algorithms. We call this a back-to-front effort, meaning that the validation takes the output of the system (the back) and compares it to the sources (the front). However, such an approach by itself does not fully validate the model. It requires a second step, which is to independently take the source information and assess whether the model has properly handled that information. We refer to this as a front-to-back process.
Additionally, models should be evaluated for system effectiveness. It is important that the validation team process the data using an independent set of algorithms to confirm the outcome relative to decisioning and pricing. When a Fair Lending system also generates candidate applications for comparative review, that part of the process should likewise be validated independently by the validation team.
There is much at stake when using an automated decisioning/pricing system or a Fair Lending system. Financial institutions count on them to be doing what they have tasked them to do. As such, independent testing of the models used by these systems is an integral part of a sound Fair Lending program.