Regulators Should Encourage Adoption of Fair Lending Algorithms

In 1869, the English judge Baron Bramwell rejected the idea that "because the world gets wiser as it gets older, therefore it was foolish before". Financial regulators should adopt this same reasoning when examining financial institutions' efforts to make their lending practices fairer using advanced technologies such as artificial intelligence and machine learning.

If regulators don’t, they risk stalling progress by encouraging financial institutions to stick to the status quo rather than actively seek ways to make lending more inclusive.

The simple yet powerful concept articulated by Bramwell underpins a central pillar of public policy: evidence that someone has fixed or improved something cannot be used against them to prove prior wrongdoing. In law, this is called the "subsequent remedial measures" doctrine. It encourages people to continually improve products, experiences, and results without fear that their efforts will be used against them. While lawyers typically apply the doctrine to things like sidewalk repairs, there is no reason it cannot apply to efforts to make lending algorithms fairer.

The Equal Credit Opportunity Act and Regulation B require lenders to ensure that their credit algorithms and policies do not unfairly deny credit to protected groups. For example, a credit underwriting algorithm would be considered unfair if it recommended denying loans to protected groups at higher rates than other groups when the differences in approval rates do not reflect differences in credit risk. And even if they did, the algorithm could still be considered unfair if a different algorithm could achieve a comparable business result with less disparity. That is, if there were a less discriminatory alternative algorithm, or LDA.
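To make the disparity concept concrete, here is a minimal, purely illustrative sketch in Python of the kind of approval-rate comparison involved: two hypothetical models score the same applicants, and the one that approves protected and reference groups at more similar rates, while performing comparably, would be the candidate LDA. The data, group labels, and the adverse_impact_ratio helper are invented for illustration; they are not the legal test under ECOA or Regulation B.

```python
# Illustrative sketch only: a simplified disparity comparison of the kind
# described above. The data is synthetic and the metric is one common
# measure (the "adverse impact ratio"), not the regulatory standard itself.
import numpy as np

def approval_rate(decisions):
    """Share of applicants approved (1 = approve, 0 = deny)."""
    return np.mean(decisions)

def adverse_impact_ratio(decisions, group):
    """Approval rate of the protected group divided by that of the reference group."""
    protected = approval_rate(decisions[group == 1])
    reference = approval_rate(decisions[group == 0])
    return protected / reference

# Two hypothetical models scored on the same applicants:
# model B is assumed to be nearly as accurate but approves groups at more similar rates.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)                           # 1 = protected class (synthetic)
decisions_a = rng.binomial(1, np.where(group == 1, 0.45, 0.70))   # baseline model A
decisions_b = rng.binomial(1, np.where(group == 1, 0.62, 0.70))   # candidate LDA, model B

print("Model A adverse impact ratio:", round(adverse_impact_ratio(decisions_a, group), 2))
print("Model B adverse impact ratio:", round(adverse_impact_ratio(decisions_b, group), 2))
```

If model B's predictive performance is comparable to model A's while its approval rates across groups are closer together, model B is the kind of less discriminatory alternative the article describes.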

Advances in modeling techniques, particularly those made possible by artificial intelligence and machine learning, have made it possible to debias algorithms and search for LDAs in unprecedented ways. Using AI/ML, an algorithm that would recommend denying Black, Hispanic, and female loan applicants at much higher rates than white male applicants can be adjusted to approve these groups at much more similar rates without becoming significantly less accurate at predicting their likelihood of loan default. Herein lies the problem: if a lender uses an algorithm and later finds an LDA, it may fear that acknowledging the LDA's existence will expose it to lawsuits from plaintiffs or enforcement actions from regulators.
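For readers curious what such a search can look like in practice, the following is a minimal sketch under stated assumptions: synthetic data, a logistic regression, and a "reweighing"-style debiasing step (in the spirit of Kamiran and Calders) whose strength is swept to trace the accuracy-versus-disparity trade-off. The variable names, thresholds, and accuracy tolerance are hypothetical; real LDA searches are considerably more sophisticated.

```python
# A minimal LDA-search sketch under stated assumptions: synthetic applicants,
# a logistic regression scorer, and interpolated "reweighing" weights.
# Models that keep AUC within a tolerance of the baseline while shrinking the
# approval-rate gap are flagged as candidate LDAs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)                     # 1 = protected class (synthetic)
income = rng.normal(50 - 8 * group, 12, n)        # proxy feature correlated with group
debt = rng.normal(20, 6, n)
default = rng.binomial(1, 1 / (1 + np.exp(0.12 * income - 0.10 * debt - 2.0)))
X = np.column_stack([income, debt])
y = 1 - default                                   # 1 = repays; the model predicts creditworthiness

def reweighing_weights(group, y):
    """Weight each (group, label) cell by expected / observed frequency."""
    w = np.ones(len(y))
    for g in (0, 1):
        for label in (0, 1):
            mask = (group == g) & (y == label)
            expected = np.mean(group == g) * np.mean(y == label)
            w[mask] = expected / np.mean(mask)
    return w

def approval_gap(approve, group):
    """Difference in approval rates between reference and protected groups."""
    return approve[group == 0].mean() - approve[group == 1].mean()

full_w = reweighing_weights(group, y)
candidates = []
for lam in np.linspace(0.0, 1.0, 6):              # 0 = baseline model, 1 = fully reweighed
    w = (1 - lam) * np.ones(n) + lam * full_w
    model = LogisticRegression().fit(X, y, sample_weight=w)
    scores = model.predict_proba(X)[:, 1]
    approve = (scores >= 0.5).astype(int)
    candidates.append((lam, roc_auc_score(y, scores), approval_gap(approve, group)))

baseline_auc, baseline_gap = candidates[0][1], candidates[0][2]
for lam, auc, gap in candidates:
    tag = "candidate LDA" if auc >= baseline_auc - 0.01 and gap < baseline_gap else ""
    print(f"lambda={lam:.1f}  AUC={auc:.3f}  approval gap={gap:.3f}  {tag}")
```

The point of the sketch is the shape of the trade-off, not the specific technique: the search surfaces alternative models that are about as accurate as the incumbent but noticeably less disparate, which is exactly the evidence lenders worry could be turned against them.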

This is not a theoretical problem. I have personally seen bankers and fair lending lawyers wrestle with it. Lenders and lawyers who want to improve algorithmic fairness are held back by concerns that using advanced LDA search methods will be taken as proof that what they were doing before was not enough to comply with ECOA. Likewise, lenders fear that upgrading to a new, fairer credit model amounts to an admission that the previous model broke the law. As a result, lenders may have an incentive to use fair lending tests and LDA searches only to validate the status quo.

It is precisely this scenario that Bramwell’s reasoning was intended to prevent. Economic actors should not be encouraged to avoid progress for fear of implicating the past. On the contrary, as modern tools and technologies, including AI/ML, allow us to more accurately assess the fairness and accuracy of credit decisions, we should encourage the adoption of these tools. Of course, we should do this without condoning past discrimination. If a prior model was illegally biased, regulators should deal with it appropriately. But they shouldn’t use a lender’s proactive adoption of a less discriminatory model to condemn the old one.

Fortunately, the solution here is simple. Financial regulators should make clear that they will not use the fact that a lender has identified an LDA, or has replaced an existing model with one, against the lender in any supervisory or enforcement action related to fair lending. This recognition by regulators of a 19th-century common law doctrine encouraging remediation and innovation would go a long way toward encouraging lenders to continually seek greater fairness in their lending activities. Such a position would not excuse past wrongdoing; it would encourage improvement and advance the interests of consumers.
