Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation.
Keywords
artificial neural network
computer vision
deep learning
long-term autonomy
mobile robot
self-supervised machine learning
visual teach and repeat navigation
Journal
Sensors (Basel, Switzerland)
ISSN: 1424-8220
Abbreviated title: Sensors (Basel)
Country: Switzerland
NLM ID: 101204366
Publication information
Publication date: 07 Apr 2022
History:
received: 01 Mar 2022
revised: 28 Mar 2022
accepted: 31 Mar 2022
entrez: 23 Apr 2022
pubmed: 24 Apr 2022
medline: 27 Apr 2022
Status: epublish
Abstract
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day-night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce data annotation efforts when designing new robotic systems or introducing robots into new environments. Moreover, it provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, code and trained models online.
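The self-supervised loop described in the abstract can be pictured with a minimal Python sketch. This is not the authors' released code: the names SiameseShiftNet, feature_shift and training_step, the ORB/brute-force matcher choice, the network architecture and the 50-pixel gating window are all illustrative assumptions. The sketch assumes grayscale image pairs and reduces registration to a single horizontal shift, as in the paper's VT&R setting.

# Hypothetical sketch of the self-supervised loop: the Siamese network
# proposes a coarse horizontal shift, the point-feature matcher refines it,
# and the refined shift is fed back as a training target for the network.
import cv2
import numpy as np
import torch
import torch.nn as nn

class SiameseShiftNet(nn.Module):
    """Tiny Siamese CNN that regresses a horizontal shift in pixels."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 1)  # concatenated embeddings -> shift

    def forward(self, teach, repeat):
        z = torch.cat([self.encoder(teach), self.encoder(repeat)], dim=1)
        return self.head(z).squeeze(1)

def feature_shift(teach_img, repeat_img, coarse_shift=0.0):
    """Median horizontal displacement of matched ORB keypoints,
    keeping only matches consistent with the network's coarse estimate."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(teach_img, None)
    k2, d2 = orb.detectAndCompute(repeat_img, None)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    dx = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches
          if abs(k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] - coarse_shift) < 50]
    return float(np.median(dx)) if dx else None

net = SiameseShiftNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.SmoothL1Loss()

def training_step(teach_img, repeat_img):
    """One self-supervised update: the matcher's output is the label."""
    t = torch.from_numpy(teach_img).float()[None, None] / 255.0
    r = torch.from_numpy(repeat_img).float()[None, None] / 255.0
    coarse = net(t, r)                       # coarse estimate for the matcher
    target = feature_shift(teach_img, repeat_img, coarse.item())
    if target is None:                       # matcher failed: skip this pair
        return None
    loss = loss_fn(coarse, torch.tensor([target]))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

The key design idea this sketch tries to capture is that no hand annotation is needed: the feature matcher's refined shift acts as a pseudo-label for the network, while the network's coarse estimate narrows the matcher's search window, so the two registration schemes bootstrap each other during traversal.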
Identifiers
pubmed: 35458823
pii: s22082836
doi: 10.3390/s22082836
pmc: PMC9032253
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Grants
Agency: Czech Science Foundation
ID: 20-27034J