Explainable AI Models for Credit Card Default Prediction: Balancing Accuracy and Interpretability
Keywords:
Explainable AI (XAI), Credit Card Default Prediction, Machine Learning, Interpretability, Transparency, SHAP, LIME, Feature Importance, Surrogate Models

Abstract
This paper gives an in-depth account of the use of Explainable AI (XAI) in credit card default prediction, juxtaposing theoretical frameworks with the empirical evidence of a case study. As financial institutions increasingly rely on AI for credit scoring, interpretable and transparent models have become a critical requirement, both to instill confidence among stakeholders and to satisfy regulations that demand clarity in automated decision-making. The key conclusion is that the accuracy-interpretability trade-off is not an insurmountable obstacle when XAI methodologies are applied in practice. The empirical case study adopted a surrogate modeling methodology, in which an interpretable Decision Tree was trained to mimic a high-performing Gradient Boosting classifier, and demonstrated that such a hybrid approach can attain strong predictive accuracy while generating interpretable outputs with near-perfect fidelity. This result offers a concise and practical roadmap for financial institutions seeking to harness the latest AI models responsibly while remaining compliant with strict regulatory rules.
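To make the surrogate-modeling methodology concrete, the sketch below illustrates the general pattern: a black-box Gradient Boosting classifier is trained on the task, and a shallow Decision Tree is then fitted to the black box's predictions rather than the true labels, so that fidelity (agreement between the two models) can be measured alongside accuracy. This is a minimal illustration assuming a scikit-learn workflow and synthetic placeholder data; the dataset, hyperparameters, and variable names are not taken from the paper.

```python
# Minimal sketch of global surrogate modeling (assumed scikit-learn workflow;
# synthetic data stands in for the credit card default dataset).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder data; the paper's actual features and labels would go here.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 1) Train the high-performing "black-box" model on the true labels.
black_box = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

# 2) Train an interpretable surrogate on the black box's predictions
#    (not the true labels), so the tree learns to mimic its behavior.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=42)
surrogate.fit(X_train, black_box.predict(X_train))

# 3) Fidelity: how often the surrogate agrees with the black box on held-out
#    data; accuracy: how well the black box predicts the true labels.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
accuracy = accuracy_score(y_test, black_box.predict(X_test))
print(f"Black-box accuracy: {accuracy:.3f} | Surrogate fidelity: {fidelity:.3f}")
```

A high fidelity score indicates that explanations read off the Decision Tree (its splits and paths) can stand in for the black box's decision logic, which is the property the case study reports as near-perfect.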