Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals

Dash, Saloni and Balasubramanian, Vineeth N and Sharma, Amit (2022) Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals. In: 22nd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, 4 January 2022 through 8 January 2022, Waikoloa.

Full text: Proceedings_2022_IEEE_CVF.pdf (Published Version, 3MB). Available under a Creative Commons Attribution license.

Abstract

Counterfactual examples for an input (perturbations that change specific features but not others) have been shown to be useful for evaluating the bias of machine learning models, e.g., against specific demographic groups. However, generating counterfactual examples for images is nontrivial due to the underlying causal structure on the various features of an image. To be meaningful, generated perturbations need to satisfy constraints implied by the causal model. We present a method for generating counterfactuals by incorporating a structural causal model (SCM) in an improved variant of Adversarially Learned Inference (ALI), which generates counterfactuals in accordance with the causal relationships between the attributes of an image. Based on the generated counterfactuals, we show how to explain a pre-trained machine learning classifier, evaluate its bias, and mitigate the bias using a counterfactual regularizer. On the Morpho-MNIST dataset, our method generates counterfactuals comparable in quality to prior work on SCM-based counterfactuals (DeepSCM), while on the more complex CelebA dataset it outperforms DeepSCM in generating high-quality valid counterfactuals. Moreover, the generated counterfactuals are indistinguishable from reconstructed images in a human evaluation experiment, and we subsequently use them to evaluate the fairness of a standard classifier trained on CelebA data. We show that the classifier is biased with respect to skin and hair color, and that counterfactual regularization can remove those biases. © 2022 IEEE.
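The abstract describes two uses of the generated counterfactuals: auditing a pre-trained classifier for bias and mitigating that bias with a counterfactual regularizer. The sketch below illustrates both steps in PyTorch; it is a minimal illustration, not the paper's implementation. The names classifier (a pre-trained attribute classifier) and generate_cf (a wrapper around an SCM-based counterfactual generator) are hypothetical stand-ins, and the MSE form of the regularization term is an assumption about one plausible shape of the counterfactual penalty.

import torch
import torch.nn.functional as F

def flip_rate(classifier, images, generate_cf, attribute):
    # Fraction of inputs whose predicted label changes when only the
    # sensitive attribute is counterfactually altered; a simple proxy
    # for the bias evaluation described in the abstract.
    # `classifier` and `generate_cf` are hypothetical, supplied by the user.
    with torch.no_grad():
        preds = classifier(images).argmax(dim=1)
        cf_images = generate_cf(images, attribute)  # e.g., do(skin color := pale)
        cf_preds = classifier(cf_images).argmax(dim=1)
    return (preds != cf_preds).float().mean().item()

def counterfactual_regularized_loss(classifier, images, labels,
                                    generate_cf, attribute, lam=1.0):
    # Task cross-entropy plus a penalty pulling the logits on an image
    # and on its counterfactual together, so the prediction cannot
    # depend on the sensitive attribute alone. The MSE penalty is an
    # assumed form, not necessarily the paper's exact regularizer.
    logits = classifier(images)
    cf_logits = classifier(generate_cf(images, attribute))
    task_loss = F.cross_entropy(logits, labels)
    cf_penalty = F.mse_loss(logits, cf_logits)
    return task_loss + lam * cf_penalty

In use, one would report flip_rate for each sensitive attribute of interest (a high flip rate indicates the classifier's output hinges on that attribute) and then fine-tune with counterfactual_regularized_loss to drive the flip rate down.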

IITH Creators: Balasubramanian, Vineeth N (ORCiD: https://orcid.org/0000-0003-2656-0375)
Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Accountability; Autoencoders; Deep Learning; Explainable AI; Fairness; GANs; Neural Generative Models; Privacy and Ethics in Vision Deep Learning
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 28 Jul 2022 13:57
Last Modified: 28 Jul 2022 13:57
URI: http://raiithold.iith.ac.in/id/eprint/9999
Publisher URL: http://doi.org/10.1109/WACV51458.2022.00393
