Assessing and Comparing Interpretability Techniques for Artificial Neural Networks Breast Cancer Classification

Title: Assessing and Comparing Interpretability Techniques for Artificial Neural Networks Breast Cancer Classification
Publication Type: Journal Article
Year of Publication: 2021
Authors: Hakkoum, H., Idri, A., Abnane, I.
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization
Volume: 9
Pagination: 587-599
Keywords: Article, Artificial intelligence, artificial neural network, Breast Cancer, Breast cancer classifications, cancer diagnosis, Computer aided diagnosis, cross validation, Data mining, Data-mining techniques, Diseases, Domain experts, early diagnosis, entropy, Explainability, Feature importance, Interpretability, Learn+, learning, learning algorithm, Lime, Machine learning, Multilayer neural networks, nerve cell, nonhuman, Partial dependence plot, perceptron, prediction, prognosis, Radial basis function networks, Treatment monitoring
Abstract

Breast cancer (BC) is the most common type of cancer among women. Thankfully, early detection and improvements in treatment have helped decrease the number of deaths. Data mining techniques have long assisted BC tasks, whether screening, diagnosis, prognosis, treatment, monitoring, or management. Nowadays, the use of data mining is entering a new era: the main objective is no longer to replace humans but to enhance their capabilities, which is why artificial intelligence is now also referred to as intelligence augmentation. In this context, interpretability helps domain experts learn new patterns and machine learning experts debug their models. This paper investigates three black-box interpretation techniques, Feature Importance, Partial Dependence Plot (PDP), and LIME, applied to two types of feed-forward artificial neural networks, a Multilayer Perceptron (MLP) and a Radial Basis Function Network (RBFN), trained on the Wisconsin Original dataset for breast cancer diagnosis. Results showed that the local, instance-level explanations produced by LIME came in line with the global interpretations of the other two techniques. Global and local interpretability techniques can thus be combined to assess the trustworthiness of a black-box model. © 2021 Informa UK Limited, trading as Taylor & Francis Group.
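
To make the three techniques concrete, the following is a minimal, hypothetical Python sketch using scikit-learn and the lime package; it is not the authors' code. Note one assumption: scikit-learn ships the Wisconsin Diagnostic dataset (load_breast_cancer), used here as a stand-in for the Wisconsin Original dataset studied in the paper, and all model settings (layer size, split, seed) are illustrative.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance, PartialDependenceDisplay
    from lime.lime_tabular import LimeTabularExplainer

    # Stand-in data: Wisconsin Diagnostic, not the Original dataset of the paper.
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.2, random_state=0)

    # Feed-forward ANN (a multilayer perceptron, one of the two model types studied).
    mlp = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000, random_state=0))
    mlp.fit(X_train, y_train)

    # Global view 1: permutation-based feature importance on held-out data.
    fi = permutation_importance(mlp, X_test, y_test, n_repeats=10, random_state=0)
    for i in fi.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[i]}: {fi.importances_mean[i]:.3f}")

    # Global view 2: partial dependence of the predicted class on one feature.
    PartialDependenceDisplay.from_estimator(mlp, X_test, features=[0])

    # Local view: LIME explains a single prediction via its top contributing features.
    explainer = LimeTabularExplainer(
        X_train, feature_names=list(data.feature_names),
        class_names=list(data.target_names), mode="classification")
    exp = explainer.explain_instance(X_test[0], mlp.predict_proba, num_features=5)
    print(exp.as_list())

Comparing the LIME feature weights for individual instances against the permutation importances and PDP curves mirrors the paper's approach of checking whether local explanations agree with the global picture.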

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85103249025&doi=10.1080%2f21681163.2021.1901784&partnerID=40&md5=78e1e57a62692bab2b39984182af7904
DOI: 10.1080/21681163.2021.1901784