EXPLAINABLE AND INTERPRETABLE MACHINE LEARNING MODELS FOR ANALYSIS OF OPEN BANKING DATA
Abstract
Background. The development of artificial intelligence and machine learning models has significantly influenced financial analytics and credit decision-making. These models provide high predictive accuracy but often operate as "black boxes," which complicates the interpretation of their internal mechanisms. In open banking, where decisions directly affect users' access to financial resources, such opacity is a substantial drawback. This creates a need for explainable and interpretable approaches that make it possible to establish causal relationships between input features and output predictions.
Materials and Methods. The research methods are based on a multi-level approach to ML model interpretation. Feature Importance is applied for a statistical assessment of feature contributions; LIME is used to provide local interpretability; and SHAP (SHapley Additive exPlanations) is employed to capture nonlinear dependencies. Structural interpretability is ensured by DNFS (Deep Neuro-Fuzzy System) through the formation of fuzzy rules, while BRB-ER (Belief Rule Base with Evidential Reasoning) adds logically consistent explanations of decisions based on a rule base.
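As a minimal illustration of the first level of this pipeline, the sketch below computes global Feature Importance for a Random Forest baseline in two ways (impurity-based and permutation-based). The dataset and feature set are synthetic placeholders, not the open banking data used in the study; the SHAP and LIME steps would be applied analogously via the `shap` and `lime` packages.

```python
# Sketch of the global Feature Importance step on a Random Forest baseline.
# Synthetic data stands in for the open banking features used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder dataset: 500 synthetic "clients" with 6 numeric features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Two complementary global importance estimates:
# impurity-based (fast, built into the model) vs.
# permutation-based (model-agnostic, measures drop in score on shuffling).
impurity_imp = model.feature_importances_
perm_imp = permutation_importance(model, X, y, n_repeats=10,
                                  random_state=0).importances_mean

for i, (a, b) in enumerate(zip(impurity_imp, perm_imp)):
    print(f"feature {i}: impurity={a:.3f}, permutation={b:.3f}")
```

Comparing the two estimates is a quick sanity check before moving to the local (LIME) and additive (SHAP) levels of interpretation.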
Results and Discussion. After hyperparameter optimization of credit-risk models trained on open banking data, the DNFS model achieves accuracy 4 percentage points higher than the Random Forest model. A global analysis of feature-importance scores obtained with Feature Importance, SHAP, and DNFS shows high pairwise correlation between them (above 88%), indicating model stability. At the local level, instances that reduce model accuracy are identified, and SHAP visualizations reveal regions of linear and nonlinear feature interactions and their influence on decision-making.
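The cross-method consistency check described above reduces to correlating the importance vectors produced by different XAI methods. A minimal sketch, using illustrative placeholder vectors rather than values from the study:

```python
# Consistency check between two feature-importance rankings via Pearson
# correlation. The vectors below are illustrative placeholders, not the
# actual importance scores reported in the paper.
import numpy as np

fi = np.array([0.30, 0.25, 0.20, 0.15, 0.10])        # e.g. Feature Importance
shap_imp = np.array([0.28, 0.27, 0.18, 0.17, 0.10])  # e.g. mean |SHAP| values

r = np.corrcoef(fi, shap_imp)[0, 1]
print(f"correlation = {r:.3f}")
# In the study, correlations above 0.88 across methods are read as
# evidence that the explanations (and the model) are stable.
```

Rank-based measures (e.g. Spearman's rho) can be substituted when only the ordering of features, not the magnitudes, should be compared.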
Conclusion. In contrast to the traditional use of individual XAI methods to explain machine learning model outputs, this work combines global and local feature importance metrics (Feature Importance, SHAP, LIME), fuzzy rule–based metrics from DNFS, and aggregated coefficients from BRB-ER. The proposed approach makes it possible to localize the causes of accuracy degradation, identify nonlinear feature dependencies, and assess the consistency of explanations through correlation analysis across methods.
Keywords: explainable artificial intelligence, machine learning, BRB, DNFS, fuzzy logic.
DOI: http://dx.doi.org/10.30970/eli.32.6
Electronics and information technologies / Електроніка та інформаційні технології