Dendi, Sathya Veera Reddy and Krishnappa, Gokul and Channappayya, Sumohana (2019) Full-Reference Video Quality Assessment Using Deep 3D Convolutional Neural Networks. In: 25th National Conference on Communications (NCC), 20-23 February 2019, Bangalore, India.
Full text not available from this repository.
Abstract
We present a novel framework called Deep Video QUality Evaluator (DeepVQUE) for full-reference video quality assessment (FRVQA) using deep 3D convolutional neural networks (3D ConvNets). DeepVQUE complements traditional handcrafted-feature-based methods in that it uses deep 3D ConvNet models for feature extraction. 3D ConvNets are capable of extracting the spatio-temporal features of a video that are vital for video quality assessment (VQA). Most existing FRVQA approaches operate on the spatial and temporal domains independently, followed by pooling, and often ignore the crucial spatio-temporal relationship of intensities in natural videos. In this work, we pay special attention to the contribution of spatio-temporal dependencies in natural videos to quality assessment. Specifically, the proposed approach estimates the spatio-temporal quality of a video with respect to its pristine version by applying commonly used distance measures, such as the l1 or l2 norm, to the volume-wise pristine and distorted 3D ConvNet features. Spatial quality is estimated using off-the-shelf full-reference image quality assessment (FRIQA) methods. Overall video quality is estimated using support vector regression (SVR) applied to the spatio-temporal and spatial quality estimates. Additionally, we illustrate the ability of the proposed approach to localize distortions in space and time.
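The pipeline the abstract describes (3D ConvNet features, a feature-space distance, and SVR on top) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the pretrained r3d_18 backbone from torchvision, the choice of pooled penultimate-layer features, and the RBF-kernel SVR are all assumptions, since the abstract does not fix the architecture, layers, volume sizes, or regression settings.

```python
# Hedged sketch of a DeepVQUE-style pipeline. Assumptions (not from the
# paper): torchvision's pretrained r3d_18 as the 3D ConvNet, features taken
# after global average pooling, and an RBF-kernel SVR with defaults.
import numpy as np
import torch
import torchvision
from sklearn.svm import SVR

backbone = torchvision.models.video.r3d_18(pretrained=True)
# Drop the final classification layer to expose pooled conv features.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def conv3d_features(volume: torch.Tensor) -> torch.Tensor:
    """Spatio-temporal features for one video volume of shape (3, T, H, W)."""
    with torch.no_grad():
        return feature_extractor(volume.unsqueeze(0)).flatten()

def spatiotemporal_quality(ref: torch.Tensor, dist: torch.Tensor, p: int = 2) -> float:
    """l1 (p=1) or l2 (p=2) distance between pristine and distorted features."""
    return torch.norm(conv3d_features(ref) - conv3d_features(dist), p=p).item()

# Spatial quality would come from an off-the-shelf FRIQA method (e.g.,
# per-frame SSIM, temporally pooled); treated here as a given score.
def fit_quality_regressor(st_scores, spatial_scores, mos):
    """SVR mapping [spatio-temporal, spatial] estimates to subjective scores."""
    X = np.stack([st_scores, spatial_scores], axis=1)
    return SVR(kernel="rbf").fit(X, mos)
```

Consistent with the abstract's "volume-wise" wording, a full implementation would split each video into spatio-temporal volumes, compute the feature distance per volume, and pool those distances per video before the SVR stage.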