A Deep Recurrent Learning-Based Region-Focused Feature Detection for Enhanced Target Detection in Multi-Object Media.

deep reinforcement learning (DRL); feature extraction; hotspot; image analysis; multi-object; region of interest; target detection

Journal

Sensors (Basel, Switzerland)
ISSN: 1424-8220
Abbreviated title: Sensors (Basel)
Country: Switzerland
NLM ID: 101204366

Publication information

Publication date:
31 Aug 2023
History:
received: 13 Aug 2023
revised: 25 Aug 2023
accepted: 29 Aug 2023
medline: 9 Sep 2023
pubmed: 9 Sep 2023
entrez: 9 Sep 2023
Status: epublish

Abstract

Target detection in high-contrast, multi-object images and videos is challenging. This difficulty results from different areas and objects/people having varying pixel distributions, contrast, and intensity properties. This work introduces a new region-focused feature detection (RFD) method to tackle this problem and improve target detection accuracy. The RFD method divides the input image into several smaller regions so that as much of the image as possible is processed. Contrast and intensity attributes are computed for each of these regions. Deep recurrent learning is then used to iteratively extract these features using a similarity measure over training inputs corresponding to various regions. The target can be located by combining features from overlapping regions. The recognized target is compared to the inputs used during training, with the help of contrast and intensity attributes, to increase accuracy. The feature distribution across regions is also used for repeated training of the learning paradigm. This method efficiently lowers false rates during region selection and pattern matching with numerous extraction instances. The suggested method therefore provides greater accuracy by singling out distinct regions and filtering out features that generate misleading rates. Accuracy, similarity index, false rate, extraction ratio, processing time, and other metrics are used to assess the effectiveness of the proposed approach. The proposed RFD improves the similarity index by 10.69%, the extraction ratio by 9.04%, and precision by 13.27%. The false rate and processing time are reduced by 7.78% and 9.19%, respectively.
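The abstract describes a first step in which the image is split into regions and per-region contrast and intensity attributes are computed. As a rough, hypothetical illustration of that splitting step only (not the authors' implementation; the grid size, the use of standard deviation as a contrast proxy, and the function name are assumptions, and the deep recurrent learning stage is omitted), such attributes could be computed with NumPy as follows:

import numpy as np

def region_attributes(image: np.ndarray, grid: int = 4):
    """Split a grayscale image into a grid x grid set of regions and
    return per-region mean intensity and a simple contrast measure.

    Hypothetical sketch of the region-splitting step sketched in the
    abstract; the published RFD method feeds such attributes into a
    deep recurrent learning stage, which is not reproduced here.
    """
    h, w = image.shape
    rows = np.array_split(np.arange(h), grid)
    cols = np.array_split(np.arange(w), grid)
    attrs = []
    for r in rows:
        for c in cols:
            patch = image[np.ix_(r, c)]
            attrs.append({
                "mean_intensity": float(patch.mean()),
                "contrast": float(patch.std()),  # std as a contrast proxy
            })
    return attrs

if __name__ == "__main__":
    # Example usage on a random 256x256 "image"
    img = np.random.rand(256, 256)
    for i, a in enumerate(region_attributes(img)[:3]):
        print(i, a)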

Identifiers

pubmed: 37688012
pii: s23177556
doi: 10.3390/s23177556
pmc: PMC10490795

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Grants

Organization: Cracow University of Technology
ID: E-1/2023


Authors

Jinming Wang (J)

College of Information Science & Technology, Zhejiang Shuren University, Hangzhou 310015, China.

Ahmed Alshahir (A)

Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia.

Ghulam Abbas (G)

School of Electrical Engineering, Southeast University, Nanjing 210096, China.

Khaled Kaaniche (K)

Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia.

Mohammed Albekairi (M)

Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia.

Shahr Alshahr (S)

Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia.

Waleed Aljarallah (W)

Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia.

Anis Sahbani (A)

Institute for Intelligent Systems and Robotics (ISIR), CNRS, Sorbonne University, 75006 Paris, France.

Grzegorz Nowakowski (G)

Faculty of Electrical and Computer Engineering, Cracow University of Technology, Warszawska 24 Str., 31-155 Cracow, Poland.

Marek Sieja (M)

Faculty of Electrical and Computer Engineering, Cracow University of Technology, Warszawska 24 Str., 31-155 Cracow, Poland.

MeSH classifications