Completely Blind Quality Assessment of User Generated Video Content

Kancharla, Parimala and Channappayya, Sumohana (2022) Completely Blind Quality Assessment of User Generated Video Content. IEEE Transactions on Image Processing, 31. pp. 263-274. ISSN 1057-7149

IEEE_Transactions_on_Image_Processing.pdf - Published Version (Restricted to Registered users only)

Abstract

In this work, we address the challenging problem of completely blind video quality assessment (BVQA) of user generated content (UGC). The challenge is twofold: the quality prediction model is oblivious to human opinion scores, and there are no well-defined distortion models for UGC content. Our solution is inspired by a recent computational neuroscience model which hypothesizes that the human visual system (HVS) transforms a natural video input to follow a straighter temporal trajectory in the perceptual domain. A bandpass filter based computational model of the lateral geniculate nucleus (LGN) and V1 regions of the HVS was used to validate the perceptual straightening hypothesis. We hypothesize that distortions in natural videos lead to a loss of straightness (or increased curvature) in their transformed representations in the HVS. We provide extensive empirical evidence to validate our hypothesis. We quantify the loss of straightness as a measure of temporal quality, and show that this measure delivers acceptable quality prediction performance on its own. Further, the temporal quality measure is combined with a state-of-the-art blind spatial (image) quality metric to design a blind video quality predictor that we call the STraightness Evaluation Metric (STEM). STEM is shown to deliver state-of-the-art performance over the class of BVQA algorithms on five UGC VQA datasets: KoNViD-1K, LIVE-Qualcomm, LIVE-VQC, CVD and YouTube-UGC. Importantly, our solution is completely blind, i.e., training-free, generalizes very well, is explainable, has few tunable parameters, and is simple and easy to implement. © IEEE.

IITH Creators: Channappayya, Sumohana (ORCiD: https://orcid.org/0000-0002-5880-4023)
Item Type: Article
Additional Information: This work was supported by Qualcomm Technologies through the Qualcomm Innovation Fellowship India 2019 and 2020.
Uncontrolled Keywords: human visual system (HVS); lateral geniculate nucleus (LGN); video quality assessment (VQA)
Subjects: Electrical Engineering
Divisions: Department of Electrical Engineering
Depositing User: LibTrainee 2021
Date Deposited: 28 Jul 2022 09:51
Last Modified: 28 Jul 2022 09:51
URI: http://raiithold.iith.ac.in/id/eprint/9987
Publisher URL: http://doi.org/10.1109/TIP.2021.3130541
OA policy: https://v2.sherpa.ac.uk/id/publication/3474
