Improving unsupervised stain-to-stain translation using self-supervision and meta-learning.

Deep learning; Digital pathology; Domain translation; Kidney; Segmentation; Stain-to-stain translation

Journal

Journal of Pathology Informatics
ISSN: 2229-5089
Abbreviated title: J Pathol Inform
Country: United States
ID NLM: 101528849

Publication information

Publication date:
2022
History:
entrez: 21 10 2022
pubmed: 22 10 2022
medline: 22 10 2022
Status: epublish

Abstract

In digital pathology, many image analysis tasks are hampered by the need for large, time-consuming manual annotations to cope with various sources of variability in the image domain. Unsupervised domain adaptation based on image-to-image translation is gaining importance in this field because it addresses such variability without the manual overhead. Here, we tackle the variability of histological stains through unsupervised stain-to-stain translation, enabling stain-independent application of a deep learning segmentation model. We use CycleGANs for stain-to-stain translation in kidney histopathology and propose two novel approaches to improve translation effectiveness. First, we integrate a prior segmentation network into the CycleGAN for self-supervised, application-oriented optimization of the translation through semantic guidance; second, we add extra channels to the translation output to implicitly separate the artificial meta-information that is otherwise encoded to resolve underdetermined reconstructions. The latter was partially superior to the unmodified CycleGAN, but the former performed best across all stains, providing instance-level Dice scores between 78% and 92% for most kidney structures, such as glomeruli, tubules, and veins. However, CycleGANs showed only limited performance in translating other structures, e.g., arteries. For all structures and stains, performance was somewhat lower than segmentation in the original stain. Our study suggests that, with current unsupervised technologies, producing "generally" applicable simulated stains seems unlikely.
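The abstract's first approach combines the usual CycleGAN cycle-consistency objective with a self-supervised semantic-guidance term: a fixed, pretrained segmentation network should produce the same segmentation on the translated image as on the original. The following is a minimal numpy sketch of that combined objective only, not the authors' implementation: the channel-mixing matrices standing in for the two generators (G, F) and the thresholding function standing in for the segmentation network are hypothetical stand-ins chosen to make the loss arithmetic concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the trained networks: simple channel-mixing
# matrices play the role of the two CycleGAN generators G (stain A -> B)
# and F (stain B -> A); F is chosen as the near-exact inverse of G so the
# cycle term is small, as it would be for a well-trained pair.
G = rng.normal(0, 0.1, (3, 3)) + np.eye(3)
F = np.linalg.inv(G)

def translate(img, M):
    # Per-pixel channel mixing: (H, W, 3) @ (3, 3) -> (H, W, 3)
    return img @ M

def segment(img):
    # Toy stand-in for the prior segmentation network: foreground where
    # the mean channel intensity exceeds a threshold.
    return (img.mean(axis=-1) > 0.5).astype(float)

def cycle_loss(img):
    # L1 cycle-consistency: || F(G(x)) - x ||_1, averaged over pixels.
    return np.abs(translate(translate(img, G), F) - img).mean()

def semantic_loss(img):
    # Self-supervised guidance: segmentation of the translated image
    # should agree with that of the original (1 - Dice overlap).
    s_orig = segment(img)
    s_trans = segment(translate(img, G))
    inter = (s_orig * s_trans).sum()
    dice = (2.0 * inter + 1e-6) / (s_orig.sum() + s_trans.sum() + 1e-6)
    return 1.0 - dice

x = rng.uniform(0, 1, (64, 64, 3))   # a random "stained" patch
lam = 1.0                            # weight of the guidance term
total = cycle_loss(x) + lam * semantic_loss(x)
```

In training, `total` would be minimized with respect to the generator parameters (alongside the adversarial losses, omitted here), so that translations both reconstruct the input through the cycle and preserve the structures the downstream segmentation model relies on.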


Identifiers

pubmed: 36268068
doi: 10.1016/j.jpi.2022.100107
pii: S2153-3539(22)00701-5
pmc: PMC9577059

Publication types

Journal Article

Languages

eng

Pagination

100107

Copyright information

© 2022 The Authors.


Authors

Nassim Bouteldja (N)

Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany.
Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany.

Barbara M Klinkhammer (BM)

Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany.

Tarek Schlaich (T)

Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany.

Peter Boor (P)

Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany.
Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany.

Dorit Merhof (D)

Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany.
Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany.

MeSH classifications