Artificial Intelligence (AI) and Machine Learning (ML) are transformative technologies with the capacity to revolutionize long-standing processes across industries. Business leaders who can properly assess the opportunities, challenges, and limitations of AI and ML will be able to deliver tremendous value to their customers and set themselves apart from the competition in the coming years.
Understanding Bias in AI
One key component of understanding the basics of AI and ML is how bias can be introduced into a training algorithm. Business leaders want to know whether, and how, bias can be prevented before trusting machines with specific tasks. The core of preventing bias in AI is essentially the same as in any other system: measuring and testing for its presence.
Whether AI- or human-based, any decision system can be subject to bias, so it is vital to maintain a solid program for measuring and testing bias both during development and after deployment. There are three core ways in which bias can infiltrate an AI system.
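As a concrete illustration of the measuring-and-testing idea, the sketch below uses plain Python and entirely hypothetical group labels and decision records to compute approval rates per group and a simple disparity ratio. The 0.8 threshold mentioned in the comment is the common "four-fifths" rule of thumb from fair-lending practice, offered here only as one possible test, not as a prescribed standard:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate for each group in a list of
    (group, approved) decision records."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values well below 1.0 flag a potential bias problem; the
    'four-fifths' rule of thumb uses 0.8 as a review threshold."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: (group label, was the applicant approved?)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = approval_rates(decisions)
print(rates)                   # {'A': 0.75, 'B': 0.25}
print(disparity_ratio(rates))  # ~0.33, well below 0.8 -> flag for review
```

Running a check like this routinely, during development and after deployment, is what "measuring and testing for bias" looks like in practice.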
1. Is there a complete data set?
Many kinds of bias can creep into algorithms that lack sufficient training data, the data the model learns from. Training data can be thought of as a spreadsheet with rows and columns: rows represent the examples the model learns from, while columns represent each piece of information needed about an example to make a prediction.
One of the core challenges of ML is the amount of data required. Users can opt to use their own data or third-party data, each presenting its own set of pros and cons. Internal data might lack a sufficient number of “rows,” and it is often difficult for this data to contain enough bad examples. In the case of fraud, for example, if a lender is already detecting most fraud cases, the model has few examples of successful fraud to train on.
Models can also draw on third-party sources, such as a credit bureau. In this case, the limitation is often a lack of “columns,” or detailed information about each example.
In both cases, users need to evaluate the limitations of the data sets to ensure that the algorithm trains on as many representative examples as possible.
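The rows-and-columns framing above can be turned into a quick data audit. The sketch below, a minimal example with a hypothetical lending data set (the `income`, `age`, and `fraud` fields are invented for illustration), counts rows and feature columns and reports the share of each label value, since a tiny minority class such as confirmed fraud is exactly the "not enough bad examples" warning sign described above:

```python
from collections import Counter

def audit_training_data(rows, label_key):
    """Summarize a training set (a list of dicts): number of rows,
    number of feature columns, and the share of each label value.
    A very small minority class is a warning sign that the model
    will have too few 'bad' examples to learn from."""
    n_rows = len(rows)
    n_cols = len(rows[0]) - 1  # feature columns, excluding the label
    label_counts = Counter(r[label_key] for r in rows)
    shares = {label: count / n_rows for label, count in label_counts.items()}
    return {"rows": n_rows, "columns": n_cols, "label_shares": shares}

# Hypothetical internal lending data with very few fraud examples.
data = [
    {"income": 52000, "age": 34, "fraud": 0},
    {"income": 61000, "age": 45, "fraud": 0},
    {"income": 38000, "age": 29, "fraud": 0},
    {"income": 47000, "age": 51, "fraud": 1},
]
print(audit_training_data(data, "fraud"))
# {'rows': 4, 'columns': 2, 'label_shares': {0: 0.75, 1: 0.25}}
```

An audit like this makes the limitations of each data source explicit before training begins, whether the shortfall is in rows, columns, or rare-class examples.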
2. Is the question itself biased?
In many instances, an algorithm will compute a prediction based on history, that is, on previous decisions. If users ask the model to select an outcome based on poor decisions, the model will mirror those bad decisions. If, for example, a financial institution asks an algorithm whether it should authorize consumer credit, and it uses data based on whom it has lent to in the past, the algorithm will mirror those previous results.
A past example of bias introduced into AI models was asking a model to evaluate resumes and predict whom to interview based on past hiring decisions. Because the historical data contained far more interviews for men, the model became biased against women. Instead, the question should have been, “How did this person perform once they started at the company?” AI users need to carefully consider what they are specifically asking their algorithms to predict in order to prevent this problem.
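The effect of choosing the wrong target can be shown with a deliberately naive toy "model" (all data below is invented for illustration): trained on past interview decisions it simply reproduces them, while trained on on-the-job performance it gives a different answer. Real resume screeners are far more complex; this sketch only isolates the target-selection issue:

```python
from collections import defaultdict

def majority_label(records, feature, target):
    """Naive 'model': for each value of `feature`, predict the
    majority value of `target` seen in the training records."""
    votes = defaultdict(list)
    for r in records:
        votes[r[feature]].append(r[target])
    return {v: max(set(ys), key=ys.count) for v, ys in votes.items()}

# Hypothetical hiring history: interview decisions skewed by gender,
# while observed job performance is comparable across groups.
history = [
    {"gender": "M", "interviewed": 1, "performed_well": 1},
    {"gender": "M", "interviewed": 1, "performed_well": 1},
    {"gender": "M", "interviewed": 1, "performed_well": 0},
    {"gender": "F", "interviewed": 0, "performed_well": 1},
    {"gender": "F", "interviewed": 1, "performed_well": 1},
    {"gender": "F", "interviewed": 0, "performed_well": 1},
]

# Trained on past decisions, the model mirrors the historical skew.
print(majority_label(history, "gender", "interviewed"))
# {'M': 1, 'F': 0} -> the old bias is reproduced

# Trained on performance outcomes, the skew disappears.
print(majority_label(history, "gender", "performed_well"))
# {'M': 1, 'F': 1}
```

Same data, same model; only the question being asked changed, which is why target selection deserves as much scrutiny as the data itself.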
3. Will this model be accurate in the future?
One of the core challenges with AI and ML is that the models are constantly learning and changing over time. It’s unlikely the historical pool of data used to train the model will reflect future data sets, so users of ML need to constantly question if the data they have is generalizable in new segments, markets, and products. Thus, it’s vital to build the appropriate training data necessary to accurately account for those scenarios.
AI Lending in Consumer Banking
AI is a generational shift in technological capabilities that will unlock new opportunities in every industry, especially lending. AI and ML have the potential to impact several key components of banking, from marketing and underwriting to onboarding and predicting delinquency. Business leaders who take the time to understand ML components, how they work, and how to execute them well for their use cases will ultimately be the winners in lending over the next five to ten years.