ChipQA: No-Reference Video Quality Prediction via Space-Time Chips.
Journal
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
ISSN: 1941-0042
Abbreviated title: IEEE Trans Image Process
Country: United States
NLM ID: 9886191
Publication information
Publication date: 2021
History:
pubmed: 2021-09-18
medline: 2021-09-18
entrez: 2021-09-17
Status: ppublish
Abstract
We propose a new model for no-reference video quality assessment (VQA). Our approach rests on a new idea: highly localized space-time (ST) slices called Space-Time Chips (ST Chips). ST Chips are localized cuts of video data along directions that implicitly capture motion. We first process the video data with perceptually motivated bandpass and normalization models, then select oriented ST Chips based on how closely they fit parametric models of natural video statistics. We show that the parameters describing these statistics can reliably predict the quality of videos, without the need for a reference video. The proposed method implicitly models ST video naturalness and deviations from it. We train and test our model on several large VQA databases, and show that it achieves state-of-the-art performance at reduced cost, without requiring motion computation.
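The abstract's pipeline of perceptual normalization followed by parametric natural-statistics fitting can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration, not the paper's released code: it computes mean-subtracted, contrast-normalized (MSCN) coefficients of a frame, a standard perceptually motivated normalization, and estimates the shape parameter of a generalized Gaussian distribution (GGD) by moment matching, one common parametric model of bandpass natural video statistics. The function names, the Gaussian window width, and the choice of GGD are assumptions made for illustration.

```python
# Hypothetical sketch of NSS-style feature extraction (NOT the paper's
# implementation). Assumes a grayscale float frame, a Gaussian local
# window (sigma is an illustrative choice), and a generalized Gaussian
# as the parametric model of the normalized statistics.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma


def mscn(frame, sigma=7.0 / 6.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients:
    divisive normalization by a local Gaussian-weighted mean and std."""
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame * frame, sigma) - mu * mu
    return (frame - mu) / (np.sqrt(np.abs(var)) + 1.0)


def ggd_shape(x):
    """Moment-matching GGD shape estimate: pick the shape whose
    theoretical E[|X|]^2 / E[X^2] ratio best matches the data."""
    x = np.asarray(x, dtype=np.float64).ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    shapes = np.arange(0.2, 10.0, 0.001)
    ratio = gamma(2.0 / shapes) ** 2 / (gamma(1.0 / shapes) * gamma(3.0 / shapes))
    return shapes[np.argmin((ratio - rho) ** 2)]


# Usage: pristine content tends to yield MSCN coefficients whose fitted
# shape is near-Gaussian (about 2.0); distortions push it away from that.
frame = np.random.rand(64, 64)  # stand-in for a real luminance frame
print("fitted GGD shape:", ggd_shape(mscn(frame)))
```

In a full ST-Chip pipeline, such fits would be computed over oriented space-time cuts rather than single frames, with the fitted parameters serving as quality-aware features for a regressor.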
Identifiers
pubmed: 34534087
doi: 10.1109/TIP.2021.3112055
Publication types
Journal Article
Languages
eng
Citation subsets
IM