Explainable Artificial Intelligence: Analysis of Methodologies and Applications

DOI:

https://doi.org/10.24215/16666038.25.e07

Keywords:

Artificial intelligence, explainability, machine learning

Abstract

The lack of transparency and explainability in machine learning models, often referred to as "black boxes," presents a significant challenge that undermines trust and decision-making in critical applications such as medicine, finance, and security.
This study examines the necessity of improving explainability by evaluating recent advances in explainability techniques, comparing them with earlier approaches, and assessing their impact on both theory and practice. Through a comprehensive literature review, current methodologies were identified, categorized, and evaluated for their effectiveness and practical applications. The findings highlight the importance of well-established techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), alongside novel approaches such as SAMCNet and entropy-based methods, for their ability to provide clearer and more understandable explanations. However, significant challenges remain, including the need for model-agnostic XAI (Explainable Artificial Intelligence) techniques that generalize across different contexts. These findings underscore the ongoing importance of research in this field to enhance transparency and trust in AI.
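The abstract singles out SHAP, which attributes a model's prediction to its input features using Shapley values from cooperative game theory. As an illustrative sketch only (not code from the article), the following computes exact Shapley values for an arbitrary black-box model by enumerating all feature coalitions and replacing absent features with a baseline; this brute-force form is tractable only for a handful of features, and SHAP's contribution is approximating it efficiently.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a black-box model f at input x.

    Features outside a coalition are replaced by the baseline value,
    and each feature's marginal contribution is averaged over all
    coalitions with the standard Shapley weights |S|!(n-|S|-1)!/n!.
    """
    n = len(x)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Model evaluated with and without feature i present.
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        values.append(phi)
    return values

# For a linear model with a zero baseline, each feature's Shapley
# value is simply its weight times its input value.
print(shapley_values(lambda v: 2 * v[0] + 3 * v[1], [1.0, 1.0], [0.0, 0.0]))
# → [2.0, 3.0]
```

By the efficiency property, the returned values always sum to f(x) minus f(baseline), which is one reason Shapley-based attributions are regarded as well founded.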


References

R. Baeza-Yates, "Introduction to responsible AI," in Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM '24), 2024, pp. 1114–1117. doi: 10.1145/3616855.3636455.

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016.

Y. LeCun, "Generalization and network design strategies," University of Toronto, Department of Computer Science, Tech. Rep. CRG-TR-89-4, 1989.

C. Molnar, Interpretable Machine Learning, 2nd ed. Lulu.com, 2022.

F. Doshi-Velez and B. Kim, "Towards a rigorous science of interpretable machine learning," arXiv:1702.08608, 2017.

Z. C. Lipton, "The mythos of model interpretability," arXiv:1606.03490, 2016.

O. Biran and C. Cotton, "Explanation and justification in machine learning: A survey," in Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI) Workshop on Explainable Artificial Intelligence, 2017.

R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, "A survey of methods for explaining black box models," ACM Computing Surveys, vol. 51, no. 5, pp. 93:1–93:42, 2018. doi: 10.1145/3236009.

S. Ali, T. Abuhmed, S. El-Sappagh, K. Muhammad, J. M. Alonso-Moral, R. Confalonieri, R. Guidotti, J. Del Ser, N. Díaz-Rodríguez, and F. Herrera, "Explainable artificial intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence," Information Fusion, vol. 99, p. 101805, 2023. doi: 10.1016/j.inffus.2023.101805.

M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why should I trust you?': Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144. doi: 10.1145/2939672.2939778.

S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions," in Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017, pp. 4765–4774.

R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 618–626. doi: 10.1109/ICCV.2017.74.

K. Schwab, The Fourth Industrial Revolution. New York, NY, USA: Crown Business, 2017.

M. C. Pezzini, "Inteligencia artificial explicable: Análisis de metodologías y aplicaciones," 2024. Available: http://sedici.unlp.edu.ar/handle/10915/174328

G. Vilone and L. Longo, "Notions of explainability and evaluation approaches for explainable artificial intelligence," Information Fusion, vol. 76, pp. 89–106, 2021. doi: 10.1016/j.inffus.2021.05.009.

I. Tiddi and S. Schlobach, "Knowledge graphs as tools for explainable machine learning: A survey," Artificial Intelligence, vol. 302, p. 103627, 2022. doi: 10.1016/j.artint.2021.103627.

B. A. Kitchenham and S. Charters, "Guidelines for performing systematic literature reviews in software engineering," Keele University and Durham University, Tech. Rep. EBSE-2007-01, 2007.

E. Ç. Mutlu, N. Yousefi, and O. O. Garibay, "Contrastive counterfactual fairness in algorithmic decision-making," in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES '22), 2022, pp. 499–507. doi: 10.1145/3514094.3534143.

X. Sáez-de Cámara, J. L. Flores, C. Arellano, A. Urbieta, and U. Zurutuza, "Federated explainability for network anomaly characterization," in Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID '23), 2023, pp. 346–365. doi: 10.1145/3607199.3607234.

H. Guo, F. Jia, J. Chen, A. Squicciarini, and A. Yadav, "RoCourseNet: Robust training of a prediction aware recourse model," in Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM '23), 2023, pp. 619–628. doi: 10.1145/3583780.3615040.

M. Farhadloo, C. Molnar, G. Luo, Y. Li, S. Shekhar, R. L. Maus, S. Markovic, A. Leontovich, and R. Moore, "SAMCNet: Towards a spatially explainable AI approach for classifying MxIF oncology data," in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), 2022, pp. 2860–2870. doi: 10.1145/3534678.3539168.

D. Das, B. Kim, and S. Chernova, "Subgoal-based explanations for unreliable intelligent decision support systems," in Proceedings of the 28th International Conference on Intelligent User Interfaces (IUI '23), 2023, pp. 240–250. doi: 10.1145/3581641.3584055.

M. Begum, M. H. Shuvo, M. K. Nasir, A. Hossain, M. J. Hossain, I. Ashraf, J. Uddin, and M. Samad, "LCNN: Lightweight CNN architecture for software defect feature identification using explainable AI," IEEE Access, vol. 12, pp. 55744–55756, 2024. doi: 10.1109/ACCESS.2024.3388489.

D. Flores-Araiza, F. Lopez-Tiro, E. Villalvazo-Avila, J. El-Beze, J. Hubert, G. Ochoa-Ruiz, and C. Daul, "Deep prototypical-parts ease morphological kidney stone identification and are competitively robust to photometric perturbations," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2023, pp. 295–304. doi: 10.1109/CVPRW59228.2023.00035.

F. Aghaeipoor, M. Sabokrou, and A. Fernández, "Fuzzy rule-based explainer systems for deep neural networks: From local explainability to global understanding," IEEE Transactions on Fuzzy Systems, pp. 1–12, 2023. doi: 10.1109/TFUZZ.2023.3243935.

P. Barbiero, J. D. T. M. Silva, J. Zarlenga, Y. Qi et al., "Entropy-based logic explanations of neural networks," in Proceedings of the AAAI Conference on Artificial Intelligence, 2022, pp. 5935–5943. doi: 10.1609/aaai.v36i5.20345.

G. Ciravegna, P. Barbiero, F. Giannini, M. Gori, P. Liò, M. Maggini, and S. Melacci, "Logic explained networks," Artificial Intelligence, vol. 314, p. 103822, 2023. doi: 10.1016/j.artint.2022.103822.

E. Giunchiglia, M. C. Stoian, and T. Lukasiewicz, "Deep learning with logical constraints," 2022.

I. E. Nielsen et al., "Robust explainability: A tutorial on gradient-based attribution methods for deep neural networks," IEEE Signal Processing Magazine, vol. 39, no. 4, pp. 73–84, 2022. doi: 10.1109/MSP.2022.3142719.

S. Singla et al., "Explaining the black-box smoothly—a counterfactual approach," Medical Image Analysis, vol. 84, p. 102721, 2023. doi: 10.1016/j.media.2022.102721.

M. Yuksekgonul, M. Wang, and J. Zou, "Post-hoc concept bottleneck models," in Proceedings of the International Conference on Learning Representations (ICLR), 2023. arXiv:2205.15480.

X. Zhang et al., "sMRI-PatchNet: A novel efficient explainable patch-based deep learning network for Alzheimer's disease diagnosis with structural MRI," IEEE Access, vol. 11, pp. 108603–108616, 2023. doi: 10.1109/ACCESS.2023.3321220.

A. Jha, V. Rakesh, J. Chandrashekar, A. Samavedhi, and C. K. Reddy, "Supervised contrastive learning for interpretable long-form document matching," ACM Transactions on Knowledge Discovery from Data, vol. 17, no. 2, Art. 27, 2023. doi: 10.1145/3542822.

H. Zhang et al., "A question-centric multi-experts contrastive learning framework for improving the accuracy and interpretability of deep sequential knowledge tracing models," ACM Transactions on Intelligent Systems and Technology, 2024. doi: 10.1145/3674840.

L. Sipos et al., "Identifying explanation needs of end-users: Applying and extending the XAI question bank," in Proceedings of Mensch und Computer 2023 (MuC '23), Rapperswil, Switzerland, 2023, pp. 1–6. doi: 10.1145/3603555.360851.

U. Ponnusamy, D. D. B. S., and N. Sampathila, "Approaching explainable artificial intelligence methods in the diagnosis of iron deficiency anemia using blood parameters," in Proceedings of the 2023 International Conference on Recent Advances in Information Technology for Sustainable Development (ICRAIS), 2023, pp. 201–206. doi: 10.1109/ICRAIS59684.2023.10367126.

K. Kitamura, M. Irvan, and R. S. Yamaguchi, "XAI for medicine by ChatGPT code interpreter," in Proceedings of the 5th International Conference on Big Data Service and Intelligent Computation (BDSIC 2023), Singapore, 2023, pp. 28–34. doi: 10.1145/3633624.3633629.

M. R. Karim et al., "Interpreting black-box machine learning models for high-dimensional datasets," in Proceedings of the 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), 2023, pp. 1–10. doi: 10.1109/DSAA60987.2023.10302562.

I. A. V. Ikechukwu and S. Murali, "xAI: An explainable AI model for the diagnosis of COPD from CXR images," in Proceedings of the 2023 IEEE 2nd International Conference on Data, Decision and Systems (ICDDS), Mangaluru, India, 2023, pp. 1–6. doi: 10.1109/ICDDS59137.2023.10434619.

A. Salih et al., "Explainable artificial intelligence and cardiac imaging: Toward more interpretable models," Circulation: Cardiovascular Imaging, vol. 16, 2023. doi: 10.1161/CIRCIMAGING.122.014519.

A. M. Conard, A. DenAdel, and L. Crawford, "A spectrum of explainable and interpretable machine learning approaches for genomic studies," Wiley Interdisciplinary Reviews: Computational Statistics, vol. 15, p. e1617, 2023. doi: 10.1002/wics.1617.

R.-K. Sheu and M. S. Pardeshi, "A survey on medical explainable AI (XAI): Recent progress, explainability approach, human interaction and scoring system," Sensors, vol. 22, no. 22, p. 8068, 2022. doi: 10.3390/s22208068.

R. El Shawi and M. H. Al-Mallah, "Interpretable local concept-based explanation with human feedback to predict all-cause mortality," Journal of Artificial Intelligence Research, vol. 75, 2022. doi: 10.1613/jair.1.14019.

J. El Zini and M. Awad, "On the explainability of natural language processing deep models," ACM Computing Surveys, vol. 55, no. 5, Art. 103, pp. 1–31, 2023. doi: 10.1145/3529755.

S. A. Dubey and A. A. Pandit, "A comprehensive review and application of interpretable deep learning model for ADR prediction," International Journal of Advanced Computer Science and Applications (IJACSA), vol. 13, no. 9, 2022. doi: 10.14569/IJACSA.2022.0130924.


Published

2025-10-22

Issue

Section

Original Articles

How to Cite

[1]
“Explainable Artificial Intelligence: Analysis of Methodologies and Applications”, JCS&T, vol. 25, no. 2, p. e07, Oct. 2025, doi: 10.24215/16666038.25.e07.
