CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation.
Journal
IEEE Transactions on Medical Imaging
ISSN: 1558-254X
Abbreviated title: IEEE Trans Med Imaging
Country: United States
NLM ID: 8310780
Publication information
Publication date: Feb 2021
History:
pubmed: 2020-11-03
medline: 2021-06-29
entrez: 2020-11-02
Status: ppublish
Abstract
Accurate medical image segmentation is essential for diagnosis and treatment planning of diseases. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they are still challenged by complicated conditions where the segmentation target has large variations of position, shape and scale, and existing CNNs have poor explainability, which limits their application to clinical decisions. In this work, we make extensive use of multiple attentions in a CNN architecture and propose a comprehensive attention-based CNN (CA-Net) for more accurate and explainable medical image segmentation that is aware of the most important spatial positions, channels and scales at the same time. In particular, we first propose a joint spatial attention module to make the network focus more on the foreground region. Then, a novel channel attention module is proposed to adaptively recalibrate channel-wise feature responses and highlight the most relevant feature channels. Also, we propose a scale attention module that implicitly emphasizes the most salient feature maps among multiple scales so that the CNN is adaptive to the size of an object. Extensive experiments on skin lesion segmentation from ISIC 2018 and multi-class segmentation of fetal MRI showed that, compared with U-Net, our proposed CA-Net significantly improved the average segmentation Dice score from 87.77% to 92.08% for the skin lesion, 84.79% to 87.08% for the placenta and 93.20% to 95.88% for the fetal brain, respectively. It also reduced the model size by around 15 times with close or even better accuracy compared with the state-of-the-art DeepLabv3+. In addition, it has much higher explainability than existing networks, as its attention weight maps can be visualized. Our code is available at https://github.com/HiLab-git/CA-Net.
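The channel attention described in the abstract adaptively recalibrates channel-wise feature responses. As a rough illustration of that idea, the following is a minimal NumPy sketch of the squeeze-and-excitation pattern such a module builds on; the function names, weight shapes, and reduction ratio here are illustrative assumptions, not CA-Net's actual implementation (which is in the linked repository).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Sketch of SE-style channel recalibration.

    feat: (C, H, W) feature map
    w1:   (C // r, C) bottleneck weights (reduction ratio r)
    w2:   (C, C // r) expansion weights
    """
    # Squeeze: global average pooling gives one descriptor per channel -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid -> per-channel weights in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Recalibrate: scale each channel by its attention weight
    return feat * s[:, None, None]

# Demo with random weights (C=8, reduction r=4 -> bottleneck size 2)
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))
w1 = rng.normal(size=(2, 8))
w2 = rng.normal(size=(8, 2))
out = channel_attention(feat, w1, w2)
```

In a trained network the weights `w1` and `w2` are learned, so channels that carry the most relevant features for the segmentation target receive weights near 1 and less relevant channels are suppressed.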
Identifiers
pubmed: 33136540
doi: 10.1109/TMI.2020.3035253
pmc: PMC7611411
mid: EMS130998
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM
Pagination: 699-711
Grants
Agency: Wellcome Trust
Country: United Kingdom
Agency: Wellcome Trust
ID: 203148
Country: United Kingdom
Agency: Wellcome Trust
ID: WT101957
Country: United Kingdom
Agency: Wellcome Trust
ID: 203148/Z/16/Z
Country: United Kingdom