Deep user identification model with multiple biometric data.

Keywords: Multimodal learning, Multitask learning, Person identification

Journal

BMC bioinformatics
ISSN: 1471-2105
Abbreviated title: BMC Bioinformatics
Country: England
ID NLM: 100965194

Publication information

Publication date:
16 Jul 2020
History:
received: 11 June 2020
accepted: 18 June 2020
entrez: 18 July 2020
pubmed: 18 July 2020
medline: 22 August 2020
Status: epublish

Abstract

Recognition is an essential function of human beings: humans easily recognize a person from various inputs such as voice, face, or gesture. In this study, we focus on a deep learning (DL) model with multiple modalities, which offers benefits including noise reduction. We used ResNet-50 to extract features from the 2D data in each dataset. This study proposes a novel multimodal and multitask model that both identifies a person and classifies their gender in a single step. At the feature level, the extracted features are concatenated as the input for the identification module. Additionally, the model design allows the number of modalities used in a single model to be changed. To demonstrate the model, we generated 58 virtual subjects from public ECG, face, and fingerprint datasets. In tests with noisy input, the multimodal model was more robust and performed better than single-modality models. This paper presents an end-to-end approach to multimodal and multitask learning. The proposed model is robust against spoofing attacks, which can be significant for biometric authentication devices. Based on these results, we suggest a new perspective on the human identification task that performs better than previous approaches.
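The fusion scheme described in the abstract (per-modality feature extraction, feature-level concatenation, then shared input to multiple task heads) can be sketched in a few lines. This is a minimal, illustrative pure-Python sketch, not the authors' implementation: the paper uses ResNet-50 backbones, whereas `extract_features` and `multitask_heads` here are hypothetical placeholders that only demonstrate the data flow.

```python
# Sketch of feature-level multimodal fusion with multitask outputs.
# Assumptions (not from the paper): 4 pseudo-features per modality,
# placeholder stand-ins for the backbone and the task heads.

def extract_features(modality_input, dim=4):
    """Stand-in for a per-modality backbone (ResNet-50 in the paper):
    maps one modality's input to a fixed-length feature vector."""
    return [float(hash((modality_input, i)) % 1000) / 1000 for i in range(dim)]

def fuse(feature_vectors):
    """Feature-level fusion: concatenate the per-modality vectors,
    as described in the abstract."""
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

def multitask_heads(fused, n_ids=58, n_genders=2):
    """Stand-in for the two task heads: the same fused vector feeds both
    the identity output (58 subjects in the paper) and the gender output."""
    s = sum(fused)
    return s % n_ids, int(s * 1000) % n_genders  # placeholder predictions

features = [extract_features(m) for m in ("ecg", "face", "fingerprint")]
fused = fuse(features)
print(len(fused))  # 3 modalities x 4 features = 12
```

Because any number of modality vectors can be concatenated before the heads, the same structure accommodates a variable number of modalities, which is the flexibility the abstract highlights.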


Identifiers

pubmed: 32677882
doi: 10.1186/s12859-020-03613-3
pii: 10.1186/s12859-020-03613-3
pmc: PMC7367324

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

315

Comments and corrections

Type: ErratumIn


Authors

Hyoung-Kyu Song (HK)

Korea Advanced Institute of Science and Technology, Daejeon, South Korea.

Ebrahim AlAlkeem (E)

Electrical Engineering and Computer Science Department, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates.
Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates.

Jaewoong Yun (J)

Research Institute, NOTA Incorporated, Gangnam-gu, Seoul, South Korea.

Tae-Ho Kim (TH)

Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, South Korea.

Hyerin Yoo (H)

Research Institute, NOTA Incorporated, Gangnam-gu, Seoul, South Korea.

Dasom Heo (D)

Research Institute, NOTA Incorporated, Gangnam-gu, Seoul, South Korea.

Myungsu Chae (M)

Research Institute, NOTA Incorporated, Gangnam-gu, Seoul, South Korea.

Chan Yeob Yeun (C)

Electrical Engineering and Computer Science Department, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates. chan.yeun@ku.ac.ae.
Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates. chan.yeun@ku.ac.ae.

