Image-Based Lunar Hazard Detection in Low Illumination Simulated Conditions via Vision Transformers.
Keywords
deep neural network
hazard detection
image segmentation
lunar hazard detection
lunar south pole
supervised learning
vision transformer
vision-based hazard detection
Journal
Sensors (Basel, Switzerland)
ISSN: 1424-8220
Abbreviated title: Sensors (Basel)
Country: Switzerland
NLM ID: 101204366
Publication information
Date of publication: 13 Sep 2023
History:
received: 26 May 2023
revised: 05 Sep 2023
accepted: 08 Sep 2023
medline: 28 Sep 2023
pubmed: 28 Sep 2023
entrez: 28 Sep 2023
Status: epublish
Abstract
Hazard detection is fundamental for a safe lunar landing. State-of-the-art autonomous lunar hazard detection relies on 2D image-based and 3D LiDAR-based systems. The lunar south pole is challenging for vision-based methods: the low Sun inclination, combined with terrain rich in topographic relief, casts large areas into shadow and hides terrain features. The proposed method addresses this problem with a vision transformer (ViT) model, a deep learning architecture built on the transformer blocks used in natural language processing. Our goal is to train the ViT model to extract terrain-feature information from low-light RGB images. The results show good performance, especially at high altitudes, outperforming UNet, one of the most popular convolutional neural networks, in every scenario.
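A ViT consumes an image as a sequence of flattened patch tokens rather than as a 2D grid of pixels. As a minimal sketch of that front end (not the paper's implementation: the 128-pixel frame size, 16-pixel patch size, and 64-dimensional toy embedding are illustrative assumptions), the following splits a simulated low-light RGB frame into non-overlapping patches and applies a linear patch embedding:

```python
import numpy as np

def patchify(image, patch_size=16):
    """Split an H x W x C image into flattened non-overlapping patches,
    producing the token sequence a vision transformer consumes."""
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0
    ph, pw = H // patch_size, W // patch_size
    patches = image.reshape(ph, patch_size, pw, patch_size, C)
    # Reorder so each patch's pixels are contiguous, then flatten per patch.
    return patches.transpose(0, 2, 1, 3, 4).reshape(ph * pw, patch_size * patch_size * C)

rng = np.random.default_rng(0)
# Hypothetical 128x128 low-light RGB frame: intensities skewed toward darkness.
frame = rng.uniform(0.0, 0.2, size=(128, 128, 3))
tokens = patchify(frame)                               # (64, 768): 8x8 patches of 16*16*3 values
embed = tokens @ rng.normal(size=(16 * 16 * 3, 64))    # toy learned linear patch embedding
print(tokens.shape, embed.shape)                       # (64, 768) (64, 64)
```

In a full ViT segmentation pipeline, positional encodings would be added to these embeddings before transformer encoder blocks, and a decoder head would map token features back to a per-pixel hazard map.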
Identifiers
pubmed: 37765902
pii: s23187844
doi: 10.3390/s23187844
pmc: PMC10535458
Publication types
Journal Article
Languages
eng
Citation subsets
IM