Reacfin Presentation: Cost of Explainability in AI - an Example with Credit Scoring Models

This paper examines the cost of explainability in machine learning models for credit scoring. The analysis is conducted under the constraint of meeting the regulatory requirements of the European Central Bank (ECB), using a real-life dataset of over 50,000 one-year credit exposures. We compare the statistical and financial performance of black-box models, such as XGBoost and neural networks, with that of inherently explainable models such as logistic regression and GAMs. Notably, statistical performance does not necessarily correlate with financial performance.
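As an illustration only, the sketch below shows how such a comparison of statistical performance could be set up in Python. It assumes a pandas DataFrame `df` with numeric features and a binary `default` column; the dataset, feature engineering, and the financial (ROI-based) evaluation used in the paper are not reproduced here, and the model settings are placeholders rather than the authors' configuration.

```python
# Minimal sketch: compare a black-box model (XGBoost) with an inherently
# explainable one (logistic regression) on a hypothetical credit dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

def compare_models(df: pd.DataFrame, target: str = "default") -> dict:
    X, y = df.drop(columns=[target]), df[target]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )

    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "xgboost": XGBClassifier(
            n_estimators=300, max_depth=4, learning_rate=0.05,
            eval_metric="logloss",
        ),
    }

    scores = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pd_hat = model.predict_proba(X_te)[:, 1]   # estimated probability of default
        scores[name] = roc_auc_score(y_te, pd_hat) # statistical performance (AUC)
    return scores
```

A GAM (e.g. via pygam) would slot into the same loop; the financial comparison would additionally require translating predicted default probabilities into lending decisions and returns.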



Our results reveal a difference of 15 to 20 basis points in annual return on investment between the best-performing black-box model and the best-performing inherently explainable model, which we interpret as the cost of explainability. We also find that this cost increases with risk appetite.



To enhance the interpretability of the explainable models, we apply isotonic smoothing to the features’ shape functions based on expert judgment. Our findings suggest that incorporating expert judgment in the form of isotonic smoothing improves explainability without compromising performance. These results have significant implications for the use of explainable models in credit risk assessment and for regulatory compliance.
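As a hedged sketch of the general idea, isotonic smoothing can be implemented by projecting a fitted shape function onto the closest monotone function. The code below assumes `grid` is a sorted array of feature values and `shape_raw` the corresponding fitted partial effect; the monotone direction encodes expert judgment (for instance, that the predicted default probability should not decrease as indebtedness rises). This is not the authors' exact procedure, only an illustration of the technique.

```python
# Minimal sketch: smooth one shape function with isotonic regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def smooth_shape_function(grid: np.ndarray,
                          shape_raw: np.ndarray,
                          increasing: bool = True) -> np.ndarray:
    """Project a raw shape function onto the nearest monotone function."""
    iso = IsotonicRegression(increasing=increasing, out_of_bounds="clip")
    return iso.fit_transform(grid, shape_raw)

# Hypothetical example: a noisy, locally non-monotone partial effect
# made monotone increasing.
grid = np.linspace(0.0, 1.0, 50)
shape_raw = np.log1p(3.0 * grid) + 0.05 * np.sin(25.0 * grid)
shape_smoothed = smooth_shape_function(grid, shape_raw, increasing=True)
```

The smoothed shape function replaces the raw one in the explainable model, making the fitted effect easier to communicate while staying close to the data.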



NB: This presentation was given as part of the 1st World Conference on eXplainable Artificial Intelligence.