Directional intensified feature description using tertiary filtering for augmented reality tracking.


Journal

Scientific reports
ISSN: 2045-2322
Abbreviated title: Sci Rep
Country: England
ID NLM: 101563288

Publication information

Publication date:
20 Nov 2023
History:
received: 7 Aug 2023
accepted: 3 Nov 2023
medline: 21 Nov 2023
pubmed: 21 Nov 2023
entrez: 21 Nov 2023
Status: epublish

Abstract

Augmented Reality (AR) is applied in almost every field, including, but not limited to, engineering, medicine, gaming and the Internet of Things. Image tracking is common to all of these applications: AR uses it to localize and register the position of the user/AR device so that virtual imagery can be superimposed on the real world. In general terms, tracking the image enhances the user's experience. However, establishing the interface between the virtual realm and the physical world in image tracking has many shortcomings. Many tracking systems are available, but they lack robustness and efficiency, and making the tracking algorithm robust is the challenging part of the implementation. This study aims to enhance the user's experience in AR by describing an image using Directional Intensified Features with Tertiary Filtering. Describing features in this way improves the robustness desired in image tracking; a feature descriptor is robust in the sense that it does not degrade when the image undergoes various transformations. This article describes features based on Directional Intensification using Tertiary Filtering (DITF). The robustness of the algorithm is improved by the inherent design of the Tri-ocular, Bi-ocular and Dia-ocular filters, which can intensify the features in all required directions. The algorithm's robustness is verified with respect to various image transformations, and the Oxford dataset is used for performance analysis and validation. The DITF model achieves repeatability scores of 100%, 100% and 99% for illumination variation, blur changes and viewpoint variation, respectively. A comparative analysis has been performed in terms of precision and recall: DITF outperforms the state-of-the-art descriptors BEBLID, BOOST, HOG, LBP, BRISK and AKAZE. An implementation of the DITF source code is available in the following GitHub repository: github.com/Johnchristopherclement/Directional-Intensified-Feature-Descriptor.
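The abstract does not reproduce the exact Tri-ocular, Bi-ocular and Dia-ocular filter definitions, so the following is a minimal illustrative sketch, not the authors' implementation. It assumes that DITF-style directional intensification can be approximated by convolving a grayscale image with a small bank of directional kernels and keeping the strongest per-pixel response. The kernel values, the function name directional_intensify, the example file path, and the use of ORB as a stand-in descriptor are all assumptions for illustration; the authors' actual filters are in the GitHub repository cited above.

```python
import cv2
import numpy as np

# Hypothetical directional kernels (placeholders, not the paper's filters):
# a horizontal/vertical pair standing in for "Bi-ocular", diagonal pairs for
# "Dia-ocular", and a centre-weighted oriented tap for "Tri-ocular".
KERNELS = [
    np.array([[-1, 0, 1]], dtype=np.float32),                        # horizontal
    np.array([[-1], [0], [1]], dtype=np.float32),                    # vertical
    np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=np.float32),  # diagonal /
    np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]], dtype=np.float32),  # diagonal \
    np.array([[-1, 2, -1]], dtype=np.float32),                       # centre-weighted
]

def directional_intensify(gray):
    """Per-pixel maximum absolute response over all directional filters
    (an assumed stand-in for the paper's intensification step)."""
    img = gray.astype(np.float32)
    responses = [np.abs(cv2.filter2D(img, -1, k)) for k in KERNELS]
    return np.max(responses, axis=0)

if __name__ == "__main__":
    # Path is illustrative, e.g. one image from an Oxford-dataset sequence.
    gray = cv2.imread("graf/img1.ppm", cv2.IMREAD_GRAYSCALE)
    intensified = cv2.convertScaleAbs(directional_intensify(gray))
    # Describe keypoints on the intensified map; ORB here is only a
    # placeholder for the DITF descriptor itself.
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(intensified, None)
    print(f"described {len(keypoints)} keypoints")
```

A sketch like this could be evaluated the same way the paper evaluates DITF: match descriptors across the homography-related image pairs of an Oxford sequence and measure repeatability, precision and recall under illumination, blur and viewpoint changes.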

Identifiers

pubmed: 37985678
doi: 10.1038/s41598-023-46643-6
pii: 10.1038/s41598-023-46643-6
pmc: PMC10662146

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

20311

Copyright information

© 2023. The Author(s).


Authors

Indhumathi S (I)

Vellore Institute of Technology, Vellore, India.

J Christopher Clement (JC)

Vellore Institute of Technology, Vellore, India. christopher.clement@vit.ac.in.

MeSH classifications