Balancing fairness and innovation: AI in mortgage lending

By Martin Prescher, Ph.D. 

 

While many industries have been quick to integrate AI into products and operations, the mortgage industry has approached it with caution. And that’s a good thing.

AI comprises many technologies, including machine learning, generative neural nets, rule-based automated reasoning and natural language processing. Each of these can provide tremendous value. But without the right input and safeguards, they can cause unintended harm and expose your business to regulatory risk.

In the mortgage sector, understanding and managing these risks is crucial.

Deterministic vs predictive AI

Traditionally, we expect results from computers to be 100% accurate and predictable. But the results of AI are often only approximate, probabilistic and even unpredictable – the programmers who build a neural network, for instance, can’t say how it will respond to any specific input.

Most automated underwriting systems use deterministic algorithms to assess borrower creditworthiness. These software systems are rules-based, and the sequence of steps leading to each decision can be traced from inputs to outcome every time. No surprises there.
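
To make the contrast concrete, here is a minimal, hypothetical sketch of a rules-based check in Python. Every rule and threshold below is invented for illustration and describes no real underwriting system:

    # A minimal, hypothetical rules-based underwriting check.
    # Every rule and threshold below is invented for illustration only.

    def deterministic_decision(credit_score, dti, ltv):
        """Return (approved, reasons). Identical inputs always produce
        identical output, and every rule that fired can be reported back."""
        reasons = []
        if credit_score < 620:
            reasons.append("credit score below 620")
        if dti > 0.43:
            reasons.append("debt-to-income ratio above 43%")
        if ltv > 0.97:
            reasons.append("loan-to-value ratio above 97%")
        return len(reasons) == 0, reasons

    approved, reasons = deterministic_decision(credit_score=640, dti=0.48, ltv=0.80)
    print(approved, reasons)  # False ['debt-to-income ratio above 43%']

Because the decision is just a sequence of explicit rules, the same trace that produced it can be handed to an auditor or to the applicant.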

Predictive AI, on the other hand, is trained on input data and absorbs the patterns in it – “learning” to find the same patterns in new data. Results are often impressive, but the learning is never reduced to rules, and the only way to find out how the software will respond to new data is to try it. The term “predictive” can mislead because deterministic rule-based systems can also make predictions; the predictions just aren’t made directly from data.
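
By contrast, a predictive model is fit to historical examples, and its behavior on new inputs can only be discovered by querying it. A minimal sketch, assuming scikit-learn is installed and using made-up features and data:

    # A minimal predictive sketch: fit a model to historical outcomes.
    # Features and data are made up; assumes scikit-learn is installed.
    from sklearn.linear_model import LogisticRegression

    # Historical applicants as [credit_score, debt_to_income]; 1 = defaulted.
    X = [[700, 0.30], [640, 0.45], [720, 0.25], [600, 0.50], [680, 0.35]]
    y = [0, 1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)

    # No rule trace exists to inspect; the only way to learn how the model
    # treats a new applicant is to query it.
    print(model.predict_proba([[660, 0.40]])[0][1])  # estimated default probability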

The choice between using deterministic and predictive AI in product development depends on the nature of the problem and the available data.

Risks of bias in credit decisions with predictive AI

A major concern with predictive AI in credit decisioning is its potential to perpetuate biases. That’s because predictive AI uses existing data sets to make decisions, and any bias present in the data, or in society itself, will go right into the “learning” just like any other pattern. 

Think about this: if we were to train AI to predict a borrower’s ability to pay using current credit scoring models, we would reproduce the models already in place and the biases inherent to them. We cannot study the default rates of people who have been denied financing; they don’t have loans to default on. Thus, we cannot learn whether there are subsets of consumers who have thin or lower credit but do not present undue risk. This is a form of the well-known survivorship bias.

In this case, incomplete data would limit our ability to improve how we assess a borrower’s ability to pay. We would only make favorable predictions about the types of borrowers who actually got loans under the existing system.
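
A quick, made-up simulation shows the problem: when the training data contains only applicants the legacy rule approved, the model never sees how rejected applicants would have performed, including the creditworthy borrowers hidden among them.

    # Made-up simulation of survivorship bias: defaults are observable only
    # for applicants the legacy rule approved.
    import random

    random.seed(0)
    population = [{"score": random.randint(550, 800)} for _ in range(10_000)]
    for person in population:
        # True (unobservable) default risk falls as the score rises.
        person["would_default"] = random.random() < (800 - person["score"]) / 600

    rejected = [p for p in population if p["score"] < 660]  # legacy cutoff

    # A model trained only on the approved group has no examples showing
    # which of these rejected applicants would actually have repaid.
    safe_rejects = sum(not p["would_default"] for p in rejected)
    print(f"{safe_rejects} of {len(rejected)} rejected applicants never default,")
    print("but no training example exists to reveal it.")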

Ethical considerations

The safeguards of deterministic AI are crucial to ethical applications of AI in determining a borrower’s ability to pay. As noted, developing unintended bias is an inherent risk of predictive AI.

Another inherent risk of predictive AI is that it can make unpredictable mistakes. Perhaps a borrower is perfectly creditworthy but doesn’t happen to fit a pattern the predictive AI was trained on. That borrower will be denied a loan, and no reason can be given.

This is an area where regulators have stepped in to protect consumers from unfair lending practices. Unlike Siri mis-transcribing a few words of your text, denying someone a home loan based on a faulty decision by a machine is a life-altering moment, to say the least.

Building a safer mousetrap

For the highly scrutinized mortgage industry, getting AI right will take a development team with a deep understanding of both AI and the flaws in the financial industry. And those experts will need to build a governance structure around deterministic AI engines so lending decisions don’t carry unintended bias.

Done correctly, this would give lenders an underlying engine built to ensure a level playing field, so all customers are judged fairly. After all, lenders are people, and people have biases and make mistakes. With the right human guidance, models can be built to issue decisions consistently and correctly, with full transparency and auditability.

Moving forward with AI

We think deterministic AI holds a lot of promise in moving lending decisions beyond the FICO score, so more creditworthy consumers, including those traditionally underserved by mortgage lenders, can be qualified without unduly increasing a lender’s risk. At the end of the day, lenders must recognize that this approach improves their underwriting process by making the end decision more fair.

Consider FormFree’s Residual Income Knowledge Index (RIKI) as an example. This technology was developed by data scientists with decades of expertise to prevent biased decisioning. RIKI is an impartial machine that uses deterministic AI to look at alternative data, such as cash flow data, to assess whether a person can afford the mortgage they’ve applied for. RIKI was developed to be used alongside credit scoring to identify borrowers who are excluded from traditional underwriting systems but have the ability to sustainably support a mortgage loan.
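
RIKI’s actual methodology is FormFree’s own. Purely to illustrate the general shape of a deterministic, cash-flow-based affordability check, here is a hypothetical sketch; every name and threshold below is invented and is not RIKI’s:

    # Hypothetical residual-income-style affordability check. This is NOT
    # RIKI's methodology; every figure below is invented for illustration.

    def can_afford(monthly_income, recurring_obligations, proposed_payment,
                   residual_floor=1500.0):
        """Deterministic rule: income left after obligations and the proposed
        mortgage payment must clear a residual-income floor."""
        residual = monthly_income - recurring_obligations - proposed_payment
        return residual >= residual_floor

    # Income is taken from bank-account cash flows rather than a credit score.
    print(can_afford(monthly_income=5200.0,
                     recurring_obligations=1400.0,
                     proposed_payment=1900.0))  # True: $1,900 residual clears the floor

The point of a rule like this is that it is auditable end to end: the same cash-flow inputs always yield the same answer, and the reason is a single arithmetic comparison.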

We are still in the early stages of integrating RIKI at beta sites and documenting use cases. Guild Mortgage is one pilot well underway. Guild used RIKI for its Complete Rate lending program, launched in 2022, to understand an unscored applicant’s ability to support a mortgage using rent, utility and other recurring payments. A second planned Guild Mortgage program will use RIKI to evaluate thin-file borrowers with FICO scores of 620 or below to determine their ability to pay, a huge step forward in evaluating consumers missed by current approaches to underwriting.

To put this in perspective, we think a model that measures the whole person is the better one. Such a model should deliver comprehensive insights and recommendations to lenders. Of course, the “decision” will always be a human one.

 

Martin Prescher, chief technology officer at FormFree, is a leader in AI development, having built several high-profile startups, scale-ups and corporate innovation units for major global players. He also guides the expansion of FormFree’s national sales strategy in the auto lending sector. Prescher teaches a course on AI and advanced applications at the University of Southern California.

Originally posted on MBA Newslink
