Cross-modal tumor segmentation using generative blending augmentation and self-training.


Journal

IEEE transactions on bio-medical engineering
ISSN: 1558-2531
Abbreviated title: IEEE Trans Biomed Eng
Country: United States
NLM ID: 0012737

Publication information

Publication date:
01 Apr 2024
History:
medline: 1 Apr 2024
pubmed: 1 Apr 2024
entrez: 1 Apr 2024
Status: aheadofprint

Abstract

Data scarcity and domain shifts lead to biased training sets that do not accurately represent deployment conditions. A related practical problem is cross-modal image segmentation, where the objective is to segment unlabelled images using previously labelled datasets from other imaging modalities. We propose a cross-modal segmentation method based on conventional image synthesis boosted by a new data augmentation technique called Generative Blending Augmentation (GBA). GBA leverages a SinGAN model to learn representative generative features from a single training image and realistically diversify tumor appearances. In this way, it compensates for image synthesis errors, thereby improving the generalization power of a downstream segmentation model. The proposed augmentation is further combined with an iterative self-training procedure that leverages pseudo labels at each pass. The proposed solution ranked first for vestibular schwannoma (VS) segmentation during the validation and test phases of the MICCAI CrossMoDA 2022 challenge, achieving the best mean Dice similarity coefficient and average symmetric surface distance. Local contrast alteration of tumor appearances and iterative self-training with pseudo labels are likely to yield performance improvements in a variety of segmentation contexts.
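The iterative self-training procedure mentioned in the abstract can be sketched as a simple loop: train on the labelled (synthesized and augmented) data, predict pseudo labels on the unlabelled target-modality data, keep only confident predictions, and retrain on the enlarged set. The snippet below is a minimal, self-contained illustration of that loop, not the authors' implementation; the scikit-learn classifier, the synthetic arrays, the 0.9 confidence threshold, and the three passes are stand-in assumptions for the paper's segmentation network and medical imaging data.

# Minimal sketch of iterative self-training with pseudo labels.
# A RandomForestClassifier on synthetic data stands in for the downstream
# segmentation model; all names and thresholds here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# "Source" data: labelled (e.g. synthesized/augmented) samples.
X_labeled = rng.normal(size=(200, 16))
y_labeled = (X_labeled[:, 0] > 0).astype(int)

# "Target" data: unlabelled samples from the other modality.
X_unlabeled = rng.normal(loc=0.3, size=(300, 16))

X_train, y_train = X_labeled, y_labeled
for step in range(3):  # number of self-training passes (assumed)
    model = RandomForestClassifier(n_estimators=100, random_state=step)
    model.fit(X_train, y_train)

    # Predict pseudo labels on the unlabelled target set.
    proba = model.predict_proba(X_unlabeled)
    confidence = proba.max(axis=1)
    pseudo_labels = proba.argmax(axis=1)

    # Keep only confident pseudo labels (0.9 threshold is an assumption),
    # then retrain on the labelled data plus the pseudo-labelled subset.
    keep = confidence > 0.9
    X_train = np.concatenate([X_labeled, X_unlabeled[keep]])
    y_train = np.concatenate([y_labeled, pseudo_labels[keep]])
    print(f"pass {step}: {keep.sum()} confident pseudo labels added")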

Identifiers

pubmed: 38557627
doi: 10.1109/TBME.2024.3384014

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Authors

MeSH classifications