Study of Subjective Quality and Objective Blind Quality Prediction of Stereoscopic Videos

Appina, Balasubramanyam and Dendi, Sathya Veera Reddy and Manasa, K and Channappayya, Sumohana et al. (2019) Study of Subjective Quality and Objective Blind Quality Prediction of Stereoscopic Videos. IEEE Transactions on Image Processing. p. 1. ISSN 1057-7149


Abstract

We present a new subjective and objective study on full high-definition (HD) stereoscopic (3D or S3D) video quality. In the subjective study, we constructed an S3D video dataset with 12 pristine and 288 test videos, where the test videos are generated by applying H.264 and H.265 compression, blur, and frame freeze artifacts. We also propose a no reference (NR) objective video quality assessment (QA) algorithm that relies on measurements of the statistical dependencies between the motion and disparity subband coefficients of S3D videos. Inspired by the Generalized Gaussian Distribution (GGD) approach in [liu2011statistical], we model the joint statistical dependencies between the motion and disparity components as following a Bivariate Generalized Gaussian Distribution (BGGD). We estimate the BGGD model parameters (α, β) and the coherence measure (Ψ) from the eigenvalues of the sample covariance matrix (M) of the BGGD. In turn, we model the BGGD parameters of pristine S3D videos using a Multivariate Gaussian (MVG) distribution. The likelihood that a test video's model parameters come from the pristine MVG model is computed and shown to play a key role in the overall quality estimation. We also estimate the global motion content of each video by averaging the SSIM scores between pairs of successive video frames. To estimate the test S3D video's spatial quality, we apply the popular 2D NR unsupervised NIQE image QA model on a frame-by-frame basis to both views. The overall quality of a test S3D video is finally computed by pooling its likelihood estimates, global motion strength, and spatial quality scores. The proposed algorithm, which is 'completely blind' (requiring no reference videos or training on subjective scores), is called the Motion and Disparity based 3D video quality evaluator (MoDi3D). We show that MoDi3D delivers competitive performance over a wide variety of datasets, including the IRCCYN dataset, the WaterlooIVC Phase I dataset, the LFOVIA dataset, and our proposed LFOVIAS3DPh2 S3D video dataset.
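The following Python sketch illustrates, under stated assumptions, the pipeline the abstract describes: fitting an MVG model to pristine BGGD features (α, β, Ψ), scoring a test video's features by their likelihood under that model, measuring global motion as the mean SSIM between successive frames, and pooling the terms. It is not the authors' MoDi3D implementation; the function names, the use of scikit-image's SSIM, the weighted-sum pooling rule, and the assumption that the spatial term is a per-frame NIQE score averaged over both views are all illustrative choices, and the BGGD parameter and NIQE estimation steps are not implemented here.

# Illustrative sketch only -- not the authors' MoDi3D implementation.
import numpy as np
from scipy.stats import multivariate_normal
from skimage.metrics import structural_similarity as ssim


def fit_pristine_mvg(pristine_features):
    """Fit a Multivariate Gaussian (MVG) to BGGD features (alpha, beta, Psi)
    collected from pristine S3D videos.  pristine_features: (N, d) array."""
    mu = pristine_features.mean(axis=0)
    cov = np.cov(pristine_features, rowvar=False)
    return mu, cov


def pristine_log_likelihood(test_features, mu, cov):
    """Mean log-likelihood of a test video's per-frame BGGD features under
    the pristine MVG model; higher values mean 'closer to pristine'."""
    mvg = multivariate_normal(mean=mu, cov=cov, allow_singular=True)
    return float(np.mean(mvg.logpdf(test_features)))


def global_motion_strength(gray_frames):
    """Average SSIM between successive grayscale frames (float arrays in
    [0, 1]); values near 1 indicate little motion, lower values more motion."""
    scores = [ssim(f0, f1, data_range=1.0)
              for f0, f1 in zip(gray_frames[:-1], gray_frames[1:])]
    return float(np.mean(scores))


def pooled_quality(log_likelihood, motion_strength, spatial_quality,
                   weights=(1.0, 1.0, 1.0)):
    """Hypothetical pooling of the three terms into a single quality score.
    The paper pools likelihood, motion and spatial (NIQE) terms, but this
    weighted sum is only a placeholder for the authors' actual rule."""
    w1, w2, w3 = weights
    return w1 * log_likelihood + w2 * motion_strength + w3 * spatial_quality

In this sketch, the test video's features and the NIQE-based spatial score would be computed beforehand; the pooling weights are arbitrary and would need to be replaced by the combination rule given in the paper.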

IITH Creators: Channappayya, Sumohana (ORCiD: UNSPECIFIED)
Item Type: Article
Subjects: Electrical Engineering
Divisions: Department of Electrical Engineering
Depositing User: Team Library
Date Deposited: 23 May 2019 11:37
Last Modified: 23 May 2019 11:37
URI: http://raiithold.iith.ac.in/id/eprint/5304
Publisher URL: http://doi.org/10.1109/TIP.2019.2914950
Related URLs:
