Brain Tumor Segmentation via Multi-Modalities Interactive Feature Learning.

Keywords: attention mechanism; brain tumor segmentation; deep neural network; feature fusion; multi-modality learning

Journal

Frontiers in medicine
ISSN: 2296-858X
Abbreviated title: Front Med (Lausanne)
Country: Switzerland
NLM ID: 101648047

Publication information

Publication date:
2021
History:
received: 2021-01-15
accepted: 2021-03-04
entrez: 2021-05-31
pubmed: 2021-06-01
medline: 2021-06-01
Status: epublish

Abstract

Automatic segmentation of brain tumors from multi-modality magnetic resonance (MR) image data has the potential to enable preoperative planning and intraoperative volume measurement. Recent advances in deep convolutional neural networks have opened up the opportunity to achieve end-to-end segmentation of brain tumor areas. However, the medical image data available for brain tumor segmentation are relatively scarce, and brain tumors vary widely in appearance, so it is difficult to find a learnable pattern that directly describes tumor regions. In this paper, we propose a novel cross-modality interactive feature learning framework to segment brain tumors from multi-modality data. The core idea is that multi-modality MR data contain rich patterns of normal brain regions, which can be easily captured and potentially used to detect abnormal regions, i.e., brain tumor regions. The proposed multi-modality interactive feature learning framework consists of two modules: a cross-modality feature extracting module and an attention-guided feature fusing module, which aim to explore the rich patterns across modalities and to guide the interaction and fusion of the rich features from different modalities. Comprehensive experiments conducted on the BraTS 2018 benchmark show that the proposed cross-modality feature learning framework effectively improves brain tumor segmentation performance compared with baseline methods and state-of-the-art methods.
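To make the two-module design described in the abstract concrete, the following is a minimal Python/PyTorch sketch of the general idea: one feature extractor per MR modality, followed by an attention-guided fusion step and a voxel-wise classification head. All class names, layer choices (3D convolutions, instance normalization, channel attention), channel sizes, and the number of modalities are illustrative assumptions made for this note; they are not taken from the authors' implementation.

# Minimal sketch of the two-module idea, assuming a PyTorch implementation.
# Names and layer choices below are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Per-modality feature extractor (assumed: a small 3D convolutional stack)."""

    def __init__(self, in_channels=1, features=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.InstanceNorm3d(features),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.InstanceNorm3d(features),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.encode(x)


class AttentionGuidedFusion(nn.Module):
    """Fuse per-modality features with a learned channel-attention gate."""

    def __init__(self, num_modalities=4, features=32):
        super().__init__()
        fused = num_modalities * features
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),          # global context per channel
            nn.Conv3d(fused, fused, kernel_size=1),
            nn.Sigmoid(),                     # per-channel weights in [0, 1]
        )
        self.project = nn.Conv3d(fused, features, kernel_size=1)

    def forward(self, per_modality_features):
        stacked = torch.cat(per_modality_features, dim=1)  # concatenate along channels
        weighted = stacked * self.attention(stacked)       # re-weight modality features
        return self.project(weighted)


class CrossModalitySegmenter(nn.Module):
    """End-to-end sketch: encode each MR modality, fuse, predict tumor classes."""

    def __init__(self, num_modalities=4, features=32, num_classes=4):
        super().__init__()
        self.encoders = nn.ModuleList(
            ModalityEncoder(1, features) for _ in range(num_modalities)
        )
        self.fusion = AttentionGuidedFusion(num_modalities, features)
        self.head = nn.Conv3d(features, num_classes, kernel_size=1)

    def forward(self, x):  # x: (batch, num_modalities, depth, height, width)
        per_modality = [
            enc(x[:, i : i + 1]) for i, enc in enumerate(self.encoders)
        ]
        return self.head(self.fusion(per_modality))


# Usage example: four MR modalities (e.g., T1, T1ce, T2, FLAIR) on a small patch.
if __name__ == "__main__":
    model = CrossModalitySegmenter()
    logits = model(torch.randn(1, 4, 32, 32, 32))
    print(logits.shape)  # torch.Size([1, 4, 32, 32, 32])

The design choice illustrated here is that each modality keeps its own encoder so modality-specific patterns are learned separately, while the attention gate decides how strongly each modality's features contribute to the fused representation before segmentation.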

Identifiers

pubmed: 34055832
doi: 10.3389/fmed.2021.653925
pmc: PMC8158657

Publication types

Journal Article

Languages

eng

Pagination

653925

Copyright information

Copyright © 2021 Wang, Yang, Peng, Ai, An, Yang, You and Ma.

Conflict of interest statement

BW and JA are employed by the company Beijing Jingzhen Medical Technology Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

IEEE Trans Image Process. 2020 Feb 10
pubmed: 32054579
Med Image Anal. 2017 Feb;36:61-78
pubmed: 27865153
Sci Data. 2017 Sep 05;4:170117
pubmed: 28872634
IEEE Trans Med Imaging. 2015 Oct;34(10):1993-2024
pubmed: 25494501
IEEE Trans Pattern Anal Mach Intell. 2021 Apr;43(4):1423-1437
pubmed: 31670664
IEEE Trans Image Process. 2019 May;28(5):2200-2211
pubmed: 30507506
Front Neurosci. 2020 Apr 08;14:282
pubmed: 32322186
Appl Soft Comput. 2021 Jan;98:106897
pubmed: 33199977

Authors

Bo Wang (B)

The State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China.
Beijing Jingzhen Medical Technology Ltd., Beijing, China.

Jingyi Yang (J)

School of Artificial Intelligence, Xidian University, Xi'an, China.

Hong Peng (H)

Department of Radiology, The 1st Medical Center, Chinese PLA General Hospital, Beijing, China.

Jingyang Ai (J)

Beijing Jingzhen Medical Technology Ltd., Beijing, China.

Lihua An (L)

Radiology Department, Affiliated Hospital of Jining Medical University, Jining, China.

Bo Yang (B)

China Institute of Marine Technology & Economy, Beijing, China.

Zheng You (Z)

The State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China.

Lin Ma (L)

Department of Radiology, The 1st Medical Center, Chinese PLA General Hospital, Beijing, China.

MeSH classifications