Multi-institutional PET/CT image segmentation using federated deep transformer learning.
Deep transformers
Federated learning
PET/CT
Privacy
Segmentation
Journal
Computer Methods and Programs in Biomedicine
ISSN: 1872-7565
Abbreviated title: Comput Methods Programs Biomed
Country: Ireland
NLM ID: 8506513
Publication information
Publication date:
Oct 2023
History:
Received: 09 Jan 2023
Accepted: 02 Jul 2023
MEDLINE: 29 Aug 2023
PubMed: 29 Jul 2023
Entrez: 28 Jul 2023
Status: ppublish
Abstract
Abstract sections
BACKGROUND AND OBJECTIVE
Generalizable and trustworthy deep learning models for PET/CT image segmentation necessitate large, diverse, multi-institutional datasets. However, legal, ethical, and patient-privacy issues hinder the sharing of datasets between different centers. To overcome these challenges, we developed a federated learning (FL) framework for multi-institutional PET/CT image segmentation.
METHODS
A dataset consisting of 328 head and neck (HN) cancer patients who underwent clinical PET/CT examinations, gathered from six different centers, was enrolled. A pure transformer network was implemented as the core segmentation algorithm, using dual-channel PET/CT images. We evaluated different frameworks (single center-based, centralized baseline, and seven different FL algorithms) using 68 PET/CT images (20% of each center's data). In particular, the implemented FL algorithms include clipping with the quantile estimator (ClQu), zeroing with the quantile estimator (ZeQu), federated averaging (FedAvg), lossy compression (LoCo), robust aggregation (RoAg), secure aggregation (SeAg), and Gaussian differentially private FedAvg with adaptive quantile clipping (GDP-AQuCl).
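To illustrate the aggregation step underlying several of these algorithms, the following is a minimal sketch of FedAvg: each center trains locally, and a central server averages the model parameters weighted by local dataset size. The function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg aggregation: average per-client model parameters,
    weighted by each client's number of local training samples.

    client_params: list (one entry per center) of lists of np.ndarrays.
    client_sizes:  number of local training samples per center.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_params[0])
    return [
        sum(p[layer] * (n / total) for p, n in zip(client_params, client_sizes))
        for layer in range(n_layers)
    ]

# Two hypothetical centers with one-parameter "models" and 1 vs. 3 samples:
merged = fedavg([[np.array([0.0])], [np.array([4.0])]], [1, 3])
print(merged[0])  # weighted mean: 0.0*(1/4) + 4.0*(3/4) = 3.0
```

The privacy-oriented variants evaluated in the paper (e.g., clipping, zeroing, or adding Gaussian noise to the client updates) modify the client contributions before this weighted average is taken.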
RESULTS
The Dice coefficient was 0.80±0.11 for both the centralized and the SeAg FL algorithms. All FL approaches matched the performance of the centralized learning model, with no statistically significant differences. Among the FL algorithms, SeAg and GDP-AQuCl performed better than the other techniques, although the differences were not statistically significant. All algorithms, except the single center-based approach, resulted in relative errors of less than 5% for SUV.
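For reference, the Dice coefficient reported above measures the overlap between a predicted segmentation mask and the ground-truth mask (1.0 = perfect overlap, 0.0 = no overlap). A minimal sketch, with illustrative names rather than the paper's evaluation code:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks overlapping in 2 of 3 positive voxels each:
pred = [[1, 1, 0], [0, 1, 0]]
target = [[1, 0, 0], [0, 1, 1]]
print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```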
CONCLUSIONS
The developed FL-based algorithms, which matched the performance of the centralized method, exhibited promising performance for HN tumor segmentation from PET/CT images.
Identifiers
pubmed: 37506602
pii: S0169-2607(23)00371-1
doi: 10.1016/j.cmpb.2023.107706
Publication types
Journal Article
Multicenter Study
Languages
eng
Citation subsets
IM
Pagination
107706
Copyright information
Copyright © 2023 The Author(s). Published by Elsevier B.V. All rights reserved.
Conflict of interest statement
Declaration of Competing Interest: None.