UAT: Universal Attention Transformer for Video Captioning.

Keywords: end-to-end learning; transformer; video captioning

Journal

Sensors (Basel, Switzerland)
ISSN: 1424-8220
Abbreviated title: Sensors (Basel)
Country: Switzerland
NLM ID: 101204366

Publication information

Publication date:
25 June 2022
History:
Received: 29 April 2022
Revised: 16 June 2022
Accepted: 23 June 2022
Entrez: 9 July 2022
PubMed: 10 July 2022
MEDLINE: 14 July 2022
Status: epublish

Abstract

Video captioning with encoder-decoder structures is a successful approach to sentence generation. In addition, extracting multiple kinds of visual features with several feature extraction networks during encoding is a standard way to improve model performance. Such feature extraction networks are kept weight-frozen and are based on convolutional neural networks (CNNs). However, these traditional feature extraction methods have some problems. First, when the feature extraction model is frozen, it cannot be trained further by backpropagating the loss obtained from video captioning training; in particular, this prevents the feature extraction model from learning more about spatial information. Second, model complexity increases further when multiple CNNs are used. Additionally, the authors of the Vision Transformer (ViT) pointed out an inductive bias of CNNs, namely the local receptive field. Therefore, to overcome these problems, we propose a full transformer structure trained end to end for video captioning. As the feature extraction model, we use a ViT and propose feature extraction gates (FEGs) to enrich the input of the captioning model through that extraction model. Additionally, we design a universal encoder attraction (UEA) that uses all encoder layer outputs and performs self-attention on them. The UEA addresses the lack of information about the video's temporal relationships, because our method uses only appearance features. We evaluate our model against several recent models on two benchmark datasets, MSR-VTT and MSVD, and show competitive performance. Although the proposed model performs captioning using only a single feature, in some cases it outperforms models that use several features.
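
The abstract describes the architecture only at a high level, and this record contains no code. The snippet below is a minimal, illustrative sketch (not the authors' implementation) of one idea in that description: an attention module that collects the outputs of all encoder layers and performs self-attention over them, so later stages can draw on every layer rather than only the last. The module name, dimensions, and PyTorch usage are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class UniversalEncoderAttention(nn.Module):
    """Self-attention over the concatenated outputs of all encoder layers
    (an illustrative sketch, not the paper's exact UEA)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, layer_outputs):
        # layer_outputs: list of L tensors, one per encoder layer,
        # each shaped (batch, tokens, d_model).
        x = torch.cat(layer_outputs, dim=1)   # (batch, L * tokens, d_model)
        attended, _ = self.attn(x, x, x)      # self-attention across layers and tokens
        return self.norm(x + attended)        # residual connection + layer norm


# Toy usage: three "encoder layer outputs" for a batch of 2 videos,
# 20 visual tokens each, model width 512.
outputs = [torch.randn(2, 20, 512) for _ in range(3)]
fused = UniversalEncoderAttention()(outputs)
print(fused.shape)  # torch.Size([2, 60, 512])
```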

Identifiers

pubmed: 35808316
pii: s22134817
doi: 10.3390/s22134817
pmc: PMC9269373

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Grants

Agency: National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT: Ministry of Science and ICT)
ID: 2018R1A5A7059549
Agency: National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT: Ministry of Science and ICT)
ID: 2020R1A2C1014037
Agency: Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT)
ID: 2020-0-01373

Authors

Heeju Im (H)

Department of Artificial Intelligence, Hanyang University, Seoul 04763, Korea.

Yong-Suk Choi (YS)

Department of Computer Science and Engineering, Hanyang University, Seoul 04763, Korea.
