Unsupervised domain adaptation for clinician pose estimation and instance segmentation in the operating room.
Human Pose Estimation
Low-resolution Images
Operating Room
Person Instance Segmentation
Self-training
Semi-supervised Learning
Unsupervised Domain Adaptation
Journal
Medical image analysis
ISSN: 1361-8423
Abbreviated title: Med Image Anal
Country: Netherlands
NLM ID: 9713490
Publication information
Publication date: 08 2022
History:
received: 20 August 2021
revised: 23 March 2022
accepted: 24 June 2022
pubmed: 10 July 2022
medline: 27 July 2022
entrez: 9 July 2022
Status: ppublish
Abstract
The fine-grained localization of clinicians in the operating room (OR) is a key component in designing the next generation of OR support systems. Computer vision models for person pixel-based segmentation and body keypoint detection are needed to better understand the clinical activities and the spatial layout of the OR. This is challenging, not only because OR images are very different from traditional vision datasets, but also because data and annotations are hard to collect and generate in the OR due to privacy concerns. To address these concerns, we first study how joint person pose estimation and instance segmentation can be performed on low-resolution images with downsampling factors from 1x to 12x. Second, to address the domain shift and the lack of annotations, we propose a novel unsupervised domain adaptation method, called AdaptOR, to adapt a model from an in-the-wild labeled source domain to a statistically different unlabeled target domain. We propose to exploit explicit geometric constraints on different augmentations of the unlabeled target domain images to generate accurate pseudo labels, and use these pseudo labels to train the model on high- and low-resolution OR images in a self-training framework. Furthermore, we propose disentangled feature normalization to handle the statistically different source and target domain data. Extensive experimental results with detailed ablation studies on the two OR datasets MVOR+ and TUM-OR-test show the effectiveness of our approach against strongly constructed baselines, especially on the low-resolution privacy-preserving OR images. Finally, we show the generality of our method as a semi-supervised learning (SSL) method on the large-scale COCO dataset, where we achieve results comparable to a model trained with 100% labeled supervision using as little as 1% of labeled supervision. Code is available at https://github.com/CAMMA-public/HPE-AdaptOR.
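The record does not describe the method beyond this abstract, but the two ingredients it names (geometric-consistency pseudo labels for self-training, and disentangled feature normalization) can be sketched roughly as below. This is a minimal illustration under assumed interfaces: `model` is a hypothetical callable returning per-person keypoints, `flip_pairs` lists left/right joint indices, and both views are assumed to detect the same people in the same order. None of these names reflect the authors' API; the actual implementation is in the linked HPE-AdaptOR repository.

```python
# Hedged sketch only; names and interfaces are assumptions, not the paper's code.
import torch


def horizontal_flip_keypoints(kpts, img_width, flip_pairs):
    """Flip keypoint x-coordinates and swap left/right joint indices."""
    flipped = kpts.clone()
    flipped[..., 0] = img_width - 1 - flipped[..., 0]
    for left, right in flip_pairs:
        flipped[:, [left, right]] = flipped[:, [right, left]]
    return flipped


@torch.no_grad()
def generate_pseudo_labels(model, image, flip_pairs, dist_thresh=5.0):
    """Keep only joints that agree between the original image and a
    horizontally flipped augmentation (the geometric-constraint idea)."""
    model.eval()
    width = image.shape[-1]
    kpts = model(image)                              # assumed shape (N, K, 2)
    kpts_flip = model(torch.flip(image, dims=[-1]))  # predictions on flipped view
    # Map the flipped-view predictions back into the original frame.
    kpts_back = horizontal_flip_keypoints(kpts_flip, width, flip_pairs)
    # Per-joint disagreement between the two estimates, shape (N, K).
    dist = (kpts - kpts_back).norm(dim=-1)
    keep = dist < dist_thresh                        # geometrically consistent joints
    pseudo = 0.5 * (kpts + kpts_back)                # averaged pseudo label
    return pseudo, keep


class DomainSpecificBatchNorm2d(torch.nn.Module):
    """One BatchNorm per domain, so source and target statistics stay
    separate: a rough stand-in for disentangled feature normalization."""

    def __init__(self, num_features, num_domains=2):
        super().__init__()
        self.bns = torch.nn.ModuleList(
            torch.nn.BatchNorm2d(num_features) for _ in range(num_domains))

    def forward(self, x, domain):
        return self.bns[domain](x)
```

In a self-training loop, the pseudo labels and their `keep` mask would supervise the same detector on unlabeled (possibly downsampled) OR images, while the domain-specific normalization keeps source and target batch statistics from mixing.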
Identifiers
pubmed: 35809529
pii: S1361-8415(22)00172-4
doi: 10.1016/j.media.2022.102525
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM
Pagination
102525
Copyright information
Copyright © 2022 Elsevier B.V. All rights reserved.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.