Deep user identification model with multiple biometric data.
Multimodal learning
Multitask learning
Person identification
Journal
BMC bioinformatics
ISSN: 1471-2105
Abbreviated title: BMC Bioinformatics
Country: England
NLM ID: 100965194
Publication information
Publication date: 16 Jul 2020
History:
received: 11 Jun 2020
accepted: 18 Jun 2020
entrez: 18 Jul 2020
pubmed: 18 Jul 2020
medline: 22 Aug 2020
Status: epublish
Abstract
Abstract sections
BACKGROUND
Recognition is an essential human capability: people easily identify a person from various inputs such as voice, face, or gesture. In this study, we focus on a deep learning (DL) model with multiple modalities, which offers several benefits including noise reduction. We used ResNet-50 to extract features from the 2D data in the dataset.
RESULTS
This study proposes a novel multimodal and multitask model that identifies a person's ID and classifies their gender in a single step. At the feature level, the extracted features are concatenated and used as the input to the identification module. Additionally, our model design allows the number of modalities used in a single model to be varied. To demonstrate the model, we generated 58 virtual subjects from public ECG, face, and fingerprint datasets. In tests with noisy input, the multimodal model is more robust and performs better than any single modality.
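The feature-level fusion and two-head multitask design described above can be sketched as follows. This is a minimal illustration only: the feature dimension, weight shapes, and random vectors are assumptions for the sketch, not the authors' implementation (the paper extracts features with ResNet-50 and trains the heads end to end).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; only N_IDS = 58 (the virtual subjects) comes from the paper.
FEAT_DIM = 128   # assumed per-modality feature size
N_IDS = 58       # 58 virtual subjects
N_GENDERS = 2

# Stand-ins for per-modality feature vectors (ECG, face, fingerprint).
ecg_feat = rng.standard_normal(FEAT_DIM)
face_feat = rng.standard_normal(FEAT_DIM)
finger_feat = rng.standard_normal(FEAT_DIM)

# Feature-level fusion: concatenate the modality features
# before they enter the identification module.
fused = np.concatenate([ecg_feat, face_feat, finger_feat])  # shape (384,)

# Two task heads share the fused features (multitask learning).
W_id = rng.standard_normal((N_IDS, fused.size)) * 0.01
W_gender = rng.standard_normal((N_GENDERS, fused.size)) * 0.01

id_logits = W_id @ fused          # person-ID scores, shape (58,)
gender_logits = W_gender @ fused  # gender scores, shape (2,)

pred_id = int(np.argmax(id_logits))
pred_gender = int(np.argmax(gender_logits))
```

Dropping one modality here only shortens the concatenated vector and the head weights, which reflects the paper's point that the number of modalities in a single model can be changed.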
CONCLUSIONS
This paper presents an end-to-end approach to multimodal and multitask learning. The proposed model is robust against spoofing attacks, which can be significant for bio-authentication devices. Based on these results, we suggest a new perspective on the human identification task that performs better than previous approaches.
Identifiers
pubmed: 32677882
doi: 10.1186/s12859-020-03613-3
pii: 10.1186/s12859-020-03613-3
pmc: PMC7367324
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
315
Comments and corrections
Type: ErratumIn