Interactive contouring through contextual deep learning.

Keywords: CT; radiotherapy; contouring; deep learning; interactive segmentation

Journal

Medical Physics
ISSN: 2473-4209
Abbreviated title: Med Phys
Country: United States
NLM ID: 0425746

Publication information

Publication date:
Jun 2021
History:
received: 2020-09-04
revised: 2021-01-31
accepted: 2021-03-10
pubmed: 2021-03-21
medline: 2021-07-10
entrez: 2021-03-20
Status: ppublish

Abstract

Purpose: To investigate a deep learning approach that enables three-dimensional (3D) segmentation of an arbitrary structure of interest, given a user-provided two-dimensional (2D) contour for context. Such an approach could decrease delineation times and improve contouring consistency, particularly for anatomical structures for which no automatic segmentation tools exist.

Methods: A series of deep learning segmentation models using a recurrent residual U-Net with attention gates was trained on a successively expanding training set. Contextual information was provided to the models by using a previously contoured slice as an input, in addition to the slice to be contoured. In total, six models were developed, and 19 anatomical structures were used for training and testing. Each model was evaluated on all 19 structures, including those excluded from its training set, to assess its ability to segment unseen structures of interest. Performance was evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and relative added path length (APL).

Results: Segmentation performance for both seen and unseen structures improved when the training set was expanded with structures previously excluded from it. A model trained exclusively on heart structures achieved a DSC of 0.33, an HD of 44 mm, and a relative APL of 0.85 when segmenting the spleen, whereas a model trained on a diverse set of structures, still excluding the spleen, achieved a DSC of 0.80, an HD of 13 mm, and a relative APL of 0.35. For unseen structures, iterative prediction performed better than direct prediction.

Conclusions: Training a contextual deep learning model on a diverse set of structures increases segmentation performance for the structures in the training set and, importantly, enables the model to generalize and make predictions even for unseen structures not represented in the training set. This shows that user-provided context can be incorporated into deep learning contouring to facilitate semi-automatic segmentation of CT images for any given structure. Such an approach can enable faster de novo contouring in clinical practice.
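The core mechanism described in the abstract, conditioning each slice prediction on a previously contoured slice and propagating predictions iteratively through the volume, can be sketched as follows. This is an illustrative outline under assumed interfaces, not the paper's implementation: the callable `model`, the two-channel input layout, and the empty-mask stopping rule are all assumptions made for the sketch.

```python
import numpy as np

def propagate_contours(model, volume, seed_slice_idx, seed_mask):
    """Iteratively propagate a user-drawn 2D contour through a 3D CT volume.

    `model` is any callable mapping a 2-channel input (CT slice, previous
    mask) to a predicted binary mask -- a stand-in for the paper's
    attention-gated recurrent residual U-Net. Names here are illustrative.
    """
    n_slices = volume.shape[0]
    masks = np.zeros_like(volume, dtype=np.uint8)
    masks[seed_slice_idx] = seed_mask

    # Sweep outward from the user-contoured seed slice in both directions,
    # feeding each newly predicted mask back in as context for the next slice.
    for direction in (+1, -1):
        prev_mask = seed_mask
        idx = seed_slice_idx + direction
        while 0 <= idx < n_slices:
            x = np.stack([volume[idx], prev_mask], axis=0)  # 2-channel input
            pred = model(x)
            if pred.sum() == 0:  # assume the structure has ended; stop sweeping
                break
            masks[idx] = pred
            prev_mask = pred
            idx += direction
    return masks
```

With a trained network plugged in as `model`, this would correspond to the "iterative prediction" mode the abstract contrasts with direct prediction of all slices from the single seed contour.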
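The evaluation metrics named in the abstract (DSC and HD) can be illustrated with minimal NumPy implementations. This is a simplified sketch (brute-force Hausdorff distance over foreground voxels with an assumed isotropic `spacing`), not necessarily how the study computed them, and relative APL is omitted.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between the foreground voxels of two
    binary masks, in the units of `spacing` (brute force; small masks only)."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    # Pairwise Euclidean distances between every foreground voxel pair.
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice, a library routine such as `scipy.spatial.distance.directed_hausdorff` and physical voxel spacing from the CT header would be used instead of this O(n²) loop-free but memory-hungry formulation.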

Identifiers

pubmed: 33742454
doi: 10.1002/mp.14852

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

2951-2959

Grants

Agency: Cancer Research UK
ID: 28736
Country: United Kingdom
Agency: CRUK
ID: A15935
Agency: CRUK RadNet Centre
ID: A28736
Agency: Marie Skłodowska-Curie
ID: 766276

Comments and corrections

Type: ErratumIn

Copyright information

© 2021 Mirada Medical Ltd. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.

References

Ramkumar A, Dolz J, Kirisli HA, et al. User interaction in semi-automatic segmentation of organs at risk: a case study in radiotherapy. J Digit Imaging 2016;29:264-277.
Lustberg T, van Soest J, Gooding M, et al. Clinical evaluation of atlas and deep learning based automatic contouring for lung cancer. Radiother Oncol 2018;126:312-317.
Jarrett D, Stride E, Vallis K, Gooding MJ. Applications and limitations of machine learning in radiation oncology. Br J Radiol 2019;92:20190001.
Gooding M, Smith A, Peressutti D, et al. PV-0531: Multi-centre evaluation of atlas-based and deep learning contouring using a modified Turing Test. Radiother Oncol 2018;127:S282-S283.
Perone CS, Cohen-Adad J. Promises and limitations of deep learning for medical image segmentation. J Med Artif Intell 2019;2:1.
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2019;29:102-127.
Olabarriaga SD, Smeulders AWM. Interaction in the segmentation of medical images: A survey. Med Image Anal 2001;5:127-142.
Wang G, Li W, Zuluaga MA, et al. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans Med Imaging 2018;37:1562-1573.
Wang G, Zuluaga MA, Li W, et al. DeepIGeoS: A deep interactive geodesic framework for medical image segmentation. IEEE Trans Pattern Anal Mach Intell 2019;41:1559-1572.
Sakinis T, Milletari F, Roth H, et al. Interactive segmentation of medical images through fully convolutional neural networks. 2019, ArXiv. abs/1903.0.
Léger J, Brion E, Javaid U, Lee J, De Vleeschouwer C, Macq B. Contour Propagation in CT scans with Convolutional Neural Networks, in International Conference on Advanced Concepts for Intelligent Vision Systems ACIVS 2018: Advanced Concepts for Intelligent Vision Systems; 2018:380-391.
Zheng Q, Delingette H, Duchateau N, Ayache N. 3D consistent robust segmentation of cardiac images by deep learning with spatial propagation. IEEE Trans Med Imaging 2018;37:2137-2148.
Novikov A, Major D, Wimmer M, Lenis D, Bühler K. Deep sequential segmentation of organs in volumetric medical scans. IEEE Trans Med Imaging 2018;38:1207-1215.
Ciçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation, in MICCAI, 2016:424-432.
Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. Med Image Comput Comput Assist Interv 2015;9351:234-241.
Alom MZ, Hasan M, Yakopcic C, Taha TM, Asari VK. Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation. 2018, ArXiv. abs/1802.0.
Alom MZ, Yakopcic C, Taha TM, Asari V. Nuclei Segmentation with Recurrent Residual Convolutional Neural Networks based U-Net (R2U-Net), in IEEE National Aerospace and Electronics Conference; 2018:228-233.
Oktay O, Schlemper J, Le Folgoc L, et al. Attention U-Net: Learning Where to Look for the Pancreas, in Medical Imaging with Deep Learning. Amsterdam; 2018.
Schlemper J, Oktay O, Schaap M, Heinrich M, Kainz B, Glocker B, Rueckert D. Attention gated networks: Learning to leverage salient regions in medical images. Med Image Anal 2019;53:197-207.
Janocha K, Czarnecki WM. On loss functions for deep neural networks in classification. Schedae Informaticae 2016;25:49-59.
Kingma DP, Ba JL. Adam: A method for stochastic optimization. ICLR; 2015.
Dice LR. Measures of the amount of ecologic association between species. Ecology 1945;26:297-302.
Huttenlocher DP, Klanderman GA, Rucklidge WJ. Comparing images using the Hausdorff distance. IEEE Trans Pattern Anal Mach Intell 1993;15:850-863.
Vaassen F, Hazelaar C, Vaniqui A, et al. Evaluation of measures for assessing time-saving of automatic organ-at-risk segmentation in radiotherapy. Phys Imaging Radiat Oncol 2020;13:1-6.
Feng X, Qing K, Tustison NJ, Meyer CH, Chen Q. Deep convolutional neural network for segmentation of thoracic organs at risk using cropped 3D images. Med Phys 2019;46:2169-2180.
Hossain S, Najeeb S, Shahriyar A, Abdullah ZR, Ariful Haque M. A Pipeline for Lung Tumor Detection and Segmentation from CT Scans Using Dilated Convolutional Neural Networks, in IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Brighton; 2019:1348-1352.
Isensee F, Petersen J, Klein A, et al. nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation, in Medical Segmentation Decathlon, Challenge 2018, 2018.
Yang J, Sharp G, Veeraraghavan H, et al. Data from Lung CT Segmentation Challenge; 2017.
Aerts HJWL, Wee L, Velazquez RE, et al. Data From NSCLC-Radiomics Lung1, Technical report. The Cancer Imaging Archive. 2019.
Wee L, Dekker A. Data from Head-Neck-Radiomics-HN1. Technical report: The Cancer Imaging Archive; 2019.
Simpson AL. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. Technical report. 2019.

Authors

Michael J Trimpl (MJ)

Mirada Medical Ltd, Oxford, UK.
Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK.
Oxford Institute for Radiation Oncology, University of Oxford, Oxford, UK.

Djamal Boukerroui (D)

Mirada Medical Ltd, Oxford, UK.

Eleanor P J Stride (EPJ)

Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK.

Katherine A Vallis (KA)

Oxford Institute for Radiation Oncology, University of Oxford, Oxford, UK.

Mark J Gooding (MJ)

Mirada Medical Ltd, Oxford, UK.
