Uncertainty quantification in Convolutional Deep Gaussian Process

Jain, Dinesh and Srijith, P K (2019) Uncertainty quantification in Convolutional Deep Gaussian Process. Master's thesis, Indian Institute of Technology Hyderabad.

Mtech_Thesis_TD1515_2019.pdf (2MB)
Restricted to Repository staff only until December 2019.

Abstract

Gaussian processes [1] and their deep variants, such as deep Gaussian processes [2] and convolutional deep Gaussian processes [3], are inherently equipped with the flexibility to encapsulate infinite-order feature functions in their kernels while incorporating Occam's razor [4] to prevent overfitting. By stacking multiple GPs, it becomes possible to model a non-stationary random process even with a stationary kernel at every layer. Using doubly stochastic variational inference [5], we study the model's performance on active learning based on certain acquisition functions. The convolutional deep GP promises to be a good generalization of its non-Bayesian counterpart, the convolutional neural network. Active learning [6] relies on learning from a minimal amount of data and exploiting the learned structure to acquire data points that are difficult to classify. The uncertainty estimates help propagate belief across the network and yield better confidence estimates, which is not possible in a non-Bayesian topology; moreover, non-probabilistic neural networks require large amounts of data to make predictions, and do so without any uncertainty estimates. We also gauge the calibration of the learned deep GP model using reliability diagrams [7, 8] and certain uncertainty scores. We analyse the behaviour of the model on unseen classes to determine how well it can distinguish between what it has learned and what is completely new to it, using the same uncertainty estimates as in the above analysis. Finally, we conduct experiments on the learned model to analyse its response to adversarial examples [9] to find how robust it is to such deceiving examples.
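The abstract does not specify which acquisition functions the thesis uses; as an illustration only, the sketch below shows one common choice, predictive-entropy acquisition, where the pool points with the most uncertain class predictions are queried first. The function names and the toy pool are hypothetical, and the class probabilities stand in for predictions averaged over Monte Carlo samples from a (deep) GP posterior.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy (in nats) of each row of class probabilities.

    probs: (n_points, n_classes) array of predictive class
    probabilities, e.g. averaged over posterior samples.
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_queries(probs, k):
    """Indices of the k most uncertain pool points."""
    return np.argsort(-predictive_entropy(probs))[:k]

# Toy pool: one confident prediction, one maximally uncertain one.
pool = np.array([[0.98, 0.01, 0.01],
                 [1/3,  1/3,  1/3]])
picked = select_queries(pool, 1)  # selects the uniform (uncertain) point
```

Other acquisition functions (e.g. variation ratios or BALD-style mutual information) follow the same pattern: score each pool point from the posterior predictive and query the top-k.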

IITH Creators: Srijith, P K (ORCiD: UNSPECIFIED)
Item Type: Thesis (Masters)
Uncontrolled Keywords: Gaussian process, Neural networks, Calibration, Uncertainty, Adversarial attacks
Subjects: Electrical Engineering
Divisions: Department of Electrical Engineering
Depositing User: Team Library
Date Deposited: 17 Jul 2019 06:23
Last Modified: 09 Sep 2019 05:01
URI: http://raiithold.iith.ac.in/id/eprint/5740
Publisher URL:
Related URLs:
