The Fairness of Credit Scoring Models
Christophe Hurlin, Christophe Pérignon, Sébastien Saurin
Management Science - 2024-11-14
In credit markets, screening algorithms aim to discriminate between good-type and bad-type borrowers. However, in doing so, they can also discriminate between individuals sharing a protected attribute (e.g., gender, age, racial origin) and the rest of the population. This discrimination can be unintentional and can originate from the training data set or from the model itself. We show how to formally test the algorithmic fairness of scoring models and how to identify the variables responsible for any lack of fairness. We then use these variables to optimize the fairness-performance tradeoff. Our framework provides guidance on how algorithmic fairness can be monitored by lenders, controlled by their regulators, and improved for the benefit of protected groups, all while maintaining a high level of forecasting accuracy.

This paper was accepted by Will Cong, finance.

Funding: This work was supported by the Autorité de Contrôle Prudentiel et de Résolution (ACPR) Chair in Regulation and Systemic Risk, the Fintech Chair at Dauphine-PSL University, and the French National Research Agency (ANR) [MLEforRisk ANR-21-CE26-0007, Ecodec ANR-11-LABX-0047, and F-STAR ANR-17-CE26-0007-01].

Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2022.03888.
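To make the idea of formally testing algorithmic fairness concrete, below is a minimal sketch of one standard check: a two-proportion z-test of statistical parity in loan approvals across a protected group and the rest of the population. This is not the authors' specific test procedure, and the variable names (approved, protected) are hypothetical; it only illustrates how a null hypothesis of equal approval rates can be tested on a scoring model's decisions.

```python
# Hedged sketch: a two-proportion z-test for statistical parity in credit
# decisions. Illustrative only; not the test developed in the paper.
import numpy as np
from scipy import stats

def statistical_parity_test(approved: np.ndarray, protected: np.ndarray):
    """Test H0: P(approved | protected) = P(approved | not protected).

    approved  : 0/1 array of credit decisions produced by the scoring model.
    protected : 0/1 array, 1 if the individual holds the protected attribute.
    Returns the approval-rate gap, the z-statistic, and the two-sided p-value.
    """
    p1 = approved[protected == 1].mean()  # approval rate, protected group
    p0 = approved[protected == 0].mean()  # approval rate, rest of population
    n1 = (protected == 1).sum()
    n0 = (protected == 0).sum()
    # Pooled approval probability under H0, used for the standard error.
    p_pool = approved.mean()
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n0))
    z = (p1 - p0) / se
    p_value = 2 * stats.norm.sf(abs(z))
    return p1 - p0, z, p_value

# Usage with simulated decisions: the model approves the protected
# group slightly less often, so the test should flag a significant gap.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=5_000)
approved = rng.binomial(1, np.where(protected == 1, 0.55, 0.60))
gap, z, p = statistical_parity_test(approved, protected)
print(f"approval-rate gap = {gap:.3f}, z = {z:.2f}, p-value = {p:.4f}")
```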