A Framework for Learning Ante-hoc Explainable Models via Concepts

Sarkar, Anirban and Vijaykeerthy, Deepak and Sarkar, Anindya and Balasubramanian, Vineeth N (2022) A Framework for Learning Ante-hoc Explainable Models via Concepts. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, 19-24 June 2022, New Orleans.

Proceedings_of_the_IEEE.pdf - Published Version
Available under License Creative Commons Attribution.
Download (8MB)

Abstract

Self-explaining deep models are designed to learn latent concept-based explanations implicitly during training, which eliminates the need for any post-hoc explanation generation technique. In this work, we propose one such model that appends an explanation generation module on top of any base network and jointly trains the whole architecture, achieving high predictive performance while generating meaningful explanations in terms of concepts. Our training strategy is suitable for unsupervised concept learning and requires a much smaller parameter space than baseline methods. Our proposed model can also leverage self-supervision on concepts to extract better explanations, and with full concept supervision it achieves the best predictive performance compared to recently proposed concept-based explainable models. We report both qualitative and quantitative results with our method, which shows better performance than recently proposed concept-based explainability methods. We report exhaustive results on two datasets without ground-truth concepts, i.e., CIFAR-10 and ImageNet, and two datasets with ground-truth concepts, i.e., AwA2 and CUB-200, to show the effectiveness of our method in both cases. To the best of our knowledge, ours is the first ante-hoc explanation generation method to show results on a large-scale dataset such as ImageNet. © 2022 IEEE.
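For intuition, the following is a minimal PyTorch-style sketch of such an ante-hoc concept model, assuming a ResNet-18 backbone, a sigmoid concept encoder, a concept-space classifier, and a feature-reconstruction decoder as the auxiliary training head. The module names, sizes, and loss weights are illustrative assumptions, not the authors' exact architecture.

import torch.nn as nn
import torchvision.models as models

class ConceptExplainableModel(nn.Module):
    """Minimal sketch of an ante-hoc concept-based model (illustrative, not the paper's exact design)."""
    def __init__(self, num_concepts=10, num_classes=10):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any existing backbone can be reused
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # expose features instead of class scores
        self.backbone = backbone
        # Small add-on module that maps backbone features to concept activations in [0, 1].
        self.concept_encoder = nn.Sequential(nn.Linear(feat_dim, num_concepts), nn.Sigmoid())
        # The classifier operates on concepts, so every prediction is attributable to them.
        self.classifier = nn.Linear(num_concepts, num_classes)
        # Auxiliary decoder used only during training to keep concepts informative.
        self.decoder = nn.Linear(num_concepts, feat_dim)

    def forward(self, x):
        feats = self.backbone(x)
        concepts = self.concept_encoder(feats)
        logits = self.classifier(concepts)
        recon = self.decoder(concepts)
        return logits, concepts, feats, recon

def joint_loss(logits, labels, feats, recon, concepts, concept_labels=None,
               recon_weight=0.5, concept_weight=1.0):
    """Joint objective: task loss + feature reconstruction; concept supervision is optional."""
    loss = nn.functional.cross_entropy(logits, labels)
    loss = loss + recon_weight * nn.functional.mse_loss(recon, feats)
    if concept_labels is not None:   # use ground-truth binary concepts (e.g., AwA2, CUB-200) when available
        loss = loss + concept_weight * nn.functional.binary_cross_entropy(concepts, concept_labels)
    return loss

On a dataset without ground-truth concepts (e.g., CIFAR-10 or ImageNet), such a model would be trained with concept_labels=None; on AwA2 or CUB-200, the binary concept annotations could be passed in for full concept supervision.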

IITH Creators: Balasubramanian, Vineeth N (ORCiD: UNSPECIFIED)
Item Type: Conference or Workshop Item (Paper)
Additional Information: In this work, we propose a new framework for learning ante-hoc concept-based explanations that: (i) can be added easily to existing backbone classification architectures with minimal additional parameters; (ii) can provide explanations for model decisions in terms of concepts for an individual input image or groups of images; and (iii) can work with different levels of supervision, including no concept-level supervision at all. Even though our framework adds components to existing deep learning backbones (or pipelines), most of them can be discarded after training. At prediction time, we only retain the sub-network (or module) that generates explanations, in addition to the components of standard deep learning pipelines (i.e., the feature extractor and the classifier function). Hence, compared to existing self-explaining models, the additional cost incurred by our framework is relatively insignificant. We performed a comprehensive suite of experiments to study the accuracy and explainability of our method on multiple benchmark datasets, both quantitatively and qualitatively. Our approach consistently outperforms the baseline methods on all the datasets. We also performed ablation studies to illustrate the importance of the additional components introduced by our method. Acknowledgements. This work has been partly supported by funding received from MoE and DST, Govt of India, through the UAY and ICPS programs. We thank the anonymous reviewers for their valuable feedback that improved the presentation of this paper.
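As a rough illustration of what is kept at prediction time, the following hypothetical helper (building on the sketch above, and again only an assumption about the pipeline, not the authors' code) uses just the feature extractor, the concept module, and the classifier, so the training-only decoder can be dropped:

import torch

@torch.no_grad()
def predict_with_explanations(model, images):
    """Inference with the retained modules only: backbone -> concepts -> class scores."""
    model.eval()
    feats = model.backbone(images)
    concepts = model.concept_encoder(feats)   # per-image concept activations serve as the explanation
    logits = model.classifier(concepts)
    return logits.argmax(dim=1), concepts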
Uncontrolled Keywords: accountability; Explainable computer vision; fairness; privacy and ethics in vision; Representation learning; Transparency
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 23 Nov 2022 11:36
Last Modified: 23 Nov 2022 11:36
URI: http://raiithold.iith.ac.in/id/eprint/11399
Publisher URL: https://doi.org/10.1109/CVPR52688.2022.01004
Related URLs:
