Automatic brain tissue segmentation in fetal MRI using convolutional neural networks.
Keywords:
Brain segmentation
Convolutional neural network
Deep learning
Fetal MRI
Intensity inhomogeneity
Journal
Magnetic resonance imaging
ISSN: 1873-5894
Abbreviated title: Magn Reson Imaging
Country: Netherlands
NLM ID: 8214883
Publication information
Publication date: December 2019
History:
received: 1 December 2018
revised: 4 May 2019
accepted: 15 May 2019
pubmed: 11 June 2019
medline: 6 May 2020
entrez: 11 June 2019
Status: ppublish
Abstract
MR images of fetuses allow clinicians to detect brain abnormalities at an early stage of development. The cornerstone of volumetric and morphologic analysis in fetal MRI is segmentation of the fetal brain into different tissue classes. Manual segmentation is cumbersome and time-consuming; hence, automatic segmentation could substantially simplify the procedure. However, automatic brain tissue segmentation in these scans is challenging owing to artifacts, including intensity inhomogeneity, caused in particular by spontaneous fetal movements during the scan. Unlike methods that estimate the bias field to remove intensity inhomogeneity as a preprocessing step before segmentation, we propose to perform segmentation using a convolutional neural network (CNN) that exploits images with synthetically introduced intensity inhomogeneity as data augmentation. The method first uses a CNN to extract the intracranial volume. Thereafter, another CNN with the same architecture is employed to segment the extracted volume into seven brain tissue classes: cerebellum, basal ganglia and thalami, ventricular cerebrospinal fluid, white matter, brain stem, cortical gray matter, and extracerebral cerebrospinal fluid. To make the method applicable to slices showing intensity inhomogeneity artifacts, the training data was augmented by applying a combination of linear gradients with random offsets and orientations to image slices without artifacts. To evaluate the performance of the method, the Dice coefficient (DC) and mean surface distance (MSD) per tissue class were computed between automatic and manual expert annotations. When the training data was enriched with simulated intensity inhomogeneity artifacts, the average DC over all tissue classes and images increased from 0.77 to 0.88, and the MSD decreased from 0.78 mm to 0.37 mm. These results demonstrate that the proposed approach can potentially replace or complement preprocessing steps, such as bias field correction, and thereby improve segmentation performance.
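The augmentation described in the abstract (multiplying artifact-free slices by a combination of linear intensity gradients with random offsets and orientations) and the Dice overlap used for evaluation can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names, parameter ranges, and the number of gradients are assumptions chosen for clarity.

```python
# Minimal sketch (not the authors' code) of the described augmentation: an
# artifact-free 2D slice is multiplied by a smooth bias field built from a
# few linear gradients with random offsets and orientations. Function names,
# parameter ranges, and n_gradients are illustrative assumptions.

import numpy as np

def simulate_intensity_inhomogeneity(slice_2d, n_gradients=2, max_strength=0.5, rng=None):
    """Return a copy of slice_2d with a synthetic multiplicative bias field."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = slice_2d.shape

    # Normalized pixel coordinates in [-0.5, 0.5] along each axis.
    ys, xs = np.meshgrid(np.linspace(-0.5, 0.5, h),
                         np.linspace(-0.5, 0.5, w), indexing="ij")

    # Start from a flat field and add linear ramps with random parameters.
    bias = np.ones((h, w), dtype=float)
    for _ in range(n_gradients):
        theta = rng.uniform(0.0, 2.0 * np.pi)      # random in-plane orientation
        offset = rng.uniform(-0.5, 0.5)            # random offset
        strength = rng.uniform(0.0, max_strength)  # gradient magnitude
        bias += strength * (xs * np.cos(theta) + ys * np.sin(theta) + offset)

    return slice_2d * bias


def dice_coefficient(pred_mask, ref_mask):
    """Dice coefficient between binary masks of one tissue class."""
    pred_mask, ref_mask = pred_mask.astype(bool), ref_mask.astype(bool)
    overlap = np.logical_and(pred_mask, ref_mask).sum()
    total = pred_mask.sum() + ref_mask.sum()
    return 2.0 * overlap / total if total > 0 else 1.0
```

In a training pipeline of the kind described above, such a function would be applied on the fly to artifact-free training slices so that the segmentation CNN also sees intensities corrupted by inhomogeneity.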
Identifiers
pubmed: 31181246
pii: S0730-725X(18)30610-6
doi: 10.1016/j.mri.2019.05.020
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM
Pagination
77-89
Copyright information
Copyright © 2019 Elsevier Inc. All rights reserved.