From CNNs to GANs for cross-modality medical image estimation.

Keywords

Convolutional neural network; Deep learning; Generative adversarial network; Image estimation; Intensity projection

Journal

Computers in biology and medicine
ISSN: 1879-0534
Abbreviated title: Comput Biol Med
Country: United States
NLM ID: 1250250

Publication information

Publication date:
July 2022
History:
received: 2022-02-07
revised: 2022-04-03
accepted: 2022-04-22
pubmed: 2022-05-04
medline: 2022-06-25
entrez: 2022-05-03
Status: ppublish

Abstract

Cross-modality image estimation involves generating images of one medical imaging modality from images of another. Convolutional neural networks (CNNs) have been shown to be useful for image-to-image intensity projection, in addition to identifying, characterising and extracting image patterns. Generative adversarial networks (GANs) use CNNs as generators, and an additional discriminator network classifies estimated images as real or generated. CNNs and GANs within the image estimation framework may be considered more generally as deep learning approaches, since medical images tend to be large, leading to the need for large neural networks. Most research in the CNN/GAN image estimation literature has involved MRI data, with the other modality primarily being PET or CT. This review provides an overview of the use of CNNs and GANs for cross-modality medical image estimation. We outline recently proposed neural networks, detail the constructs employed for CNN and GAN image-to-image synthesis, and outline the motivations behind cross-modality image estimation. GANs appear to provide better utility than CNNs in cross-modality image estimation, a finding drawn from our analysis of metrics comparing estimated and actual images. Our final remarks highlight key challenges facing the field, including how intensity projection can be constrained by registration (unpaired versus paired data), the use of image patches, additional networks, and spatially sensitive loss functions.
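The abstract describes the standard construct for paired cross-modality synthesis: a CNN generator produces the estimated image, and a discriminator network classifies images as real or generated. Below is a minimal sketch of that setup, assuming PyTorch and a pix2pix-style adversarial-plus-L1 objective; the layer sizes, learning rates, and L1 weight are illustrative assumptions, not values taken from the review.

```python
# Minimal paired cross-modality GAN sketch (illustrative, not the review's method).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder CNN mapping a source-modality image (e.g. MRI)
    to an estimated target-modality image (e.g. CT)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 32 -> 64
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 64 -> 128
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """CNN scoring image patches as real (true target modality) or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level logits
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One training step on a registered (paired) batch; random tensors stand in
# for a source (e.g. MRI) batch and its matching target (e.g. CT) batch.
source = torch.randn(4, 1, 128, 128)
target = torch.randn(4, 1, 128, 128)

# Discriminator step: real target images vs. detached generator estimates.
fake = G(source)
d_real, d_fake = D(target), D(fake.detach())
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
         adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator while staying close to the target.
d_fake = D(fake)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * l1_loss(fake, target)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

In practice, estimated and actual target images would also be compared with image-similarity metrics such as MAE, PSNR, or SSIM, which is the kind of comparison the review uses to contrast CNN and GAN performance.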

Identifiers

pubmed: 35504221
pii: S0010-4825(22)00348-1
doi: 10.1016/j.compbiomed.2022.105556

Publication types

Journal Article; Review; Research Support, Non-U.S. Gov't

Languages

eng

Citation subsets

IM

Pagination

105556

Copyright information

Copyright © 2022 Elsevier Ltd. All rights reserved.

Authors

Azin Shokraei Fard (A)

Centre for Advanced Imaging, University of Queensland, Brisbane, Australia.

David C Reutens (DC)

Centre for Advanced Imaging, University of Queensland, Brisbane, Australia; ARC Centre for Innovation in Biomedical Imaging Technology, Brisbane, Australia.

Viktor Vegh (V)

Centre for Advanced Imaging, University of Queensland, Brisbane, Australia; ARC Centre for Innovation in Biomedical Imaging Technology, Brisbane, Australia. Electronic address: v.vegh@uq.edu.au.

