Shedding light on the black box of a neural network used to detect prostate cancer in whole slide images by occlusion-based explainability.
Artificial intelligence
Digital histopathology
Explainable AI
Machine learning
Occlusion sensitivity analysis
Prostate cancer
Journal
New biotechnology
ISSN: 1876-4347
Abbreviated title: N Biotechnol
Country: Netherlands
NLM ID: 101465345
Publication information
Publication date: 25 Dec 2023
History:
Received: 16 Mar 2023
Revised: 29 Aug 2023
Accepted: 30 Sep 2023
Medline: 5 Dec 2023
PubMed: 5 Oct 2023
Entrez: 4 Oct 2023
Status:
ppublish
Abstract
Diagnostic histopathology faces increasing demands due to aging populations and expanding healthcare programs. Semi-automated diagnostic systems employing deep learning methods are one approach to alleviating this pressure. The learning models used in histopathology are inherently complex and opaque from the user's perspective, and different methods have therefore been developed to interpret their behavior. However, relatively little attention has been devoted to the connection between interpretation methods and the knowledge of experienced pathologists. The main contribution of this paper is a method for comparing the morphological patterns used by expert pathologists to detect cancer with the patterns identified as important for the inference of learning models. Given the patch-based nature of processing large-scale histopathological imaging, we show statistically that the VGG16 model could utilize all the structures observable by a pathologist at the given patch size and scan resolution. The results show that the neural network's approach to recognizing prostatic cancer is similar to that of a pathologist at medium optical resolution. The saliency maps identified several prevailing histomorphological features characterizing carcinoma, e.g., single-layered epithelium, small lumina, and hyperchromatic nuclei with halo. A convincing finding was the recognition of their mimickers in non-neoplastic tissue. The method can also identify differences, i.e., standard patterns not used by the learning models and new patterns not yet used by pathologists. Saliency maps provide added value for automated digital pathology in analyzing and fine-tuning deep learning systems and improving trust in computer-based decisions.
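For readers unfamiliar with occlusion sensitivity analysis, the sketch below illustrates the general idea behind such saliency maps: a small occluder is slid across a tissue patch and the drop in the classifier's predicted cancer probability is recorded at each position. This is a minimal, self-contained Python/Keras example using an untrained VGG16-based stand-in classifier; the window size, stride, occluder value, and model head are illustrative assumptions, not the settings used in the paper.

# Minimal occlusion-sensitivity sketch (illustrative assumptions throughout:
# window size, stride, grey occluder value, and the untrained VGG16 stand-in
# classifier are not the settings used in the paper).
import numpy as np
import tensorflow as tf

def occlusion_sensitivity(model, patch, window=32, stride=16, occluder=0.5):
    """Slide a grey square over the patch and record the drop in the predicted
    cancer probability; larger drops mark regions more important for inference."""
    h, w, _ = patch.shape
    base_prob = float(model.predict(patch[np.newaxis], verbose=0)[0, 0])
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    heatmap = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            occluded = patch.copy()
            y, x = i * stride, j * stride
            occluded[y:y + window, x:x + window, :] = occluder
            prob = float(model.predict(occluded[np.newaxis], verbose=0)[0, 0])
            heatmap[i, j] = base_prob - prob  # saliency = loss of confidence
    return heatmap

# Stand-in patch classifier: VGG16 backbone with a binary sigmoid head.
backbone = tf.keras.applications.VGG16(include_top=False, weights=None,
                                       pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(1, activation="sigmoid")])

patch = np.random.rand(224, 224, 3).astype("float32")  # placeholder tissue patch
saliency = occlusion_sensitivity(model, patch)
print(saliency.shape)  # (13, 13) coarse saliency map for a 224x224 patch

The resulting heatmap can be upsampled to the patch resolution and overlaid on the tissue image to visualize which structures drive the prediction.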
Identifiers
pubmed: 37793603
pii: S1871-6784(23)00051-1
doi: 10.1016/j.nbt.2023.09.008
Publication type
Journal Article
Language
eng
Citation subset
IM
Pagination
52-67
Copyright information
Copyright © 2023 The Authors. Published by Elsevier B.V. All rights reserved.
Declaration of competing interest
Declaration of Competing Interest: The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: co-author Petr HOLUB is serving as a guest editor for the special issue of the journal.