Mangla, Puneet; Singh, Vedant; Balasubramanian, Vineeth N. (2021). On Saliency Maps and Adversarial Robustness. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12458, pp. 272-288. ISSN 0302-9743.
Abstract
A recent trend couples the notions of interpretability and adversarial robustness, unlike earlier efforts that focused solely on good interpretations or on robustness against adversaries. Prior work has shown that adversarially trained models exhibit more interpretable saliency maps than their non-robust counterparts, and that this behavior can be quantified by considering the alignment between the input image and the saliency map. In this work, we provide a different perspective on this coupling and present a method, Saliency-based Adversarial Training (SAT), that uses saliency maps to improve the adversarial robustness of a model. In particular, we show that using annotations already provided with a dataset, such as bounding boxes and segmentation masks, as weak saliency maps suffices to improve adversarial robustness with no additional effort to generate the perturbations themselves. Our empirical results on the CIFAR-10, CIFAR-100, Tiny ImageNet, and Flower-17 datasets consistently corroborate this claim by showing improved adversarial robustness with our method. We also show how using finer and stronger saliency maps leads to more robust models, and how integrating SAT with existing adversarial training methods further boosts their performance.
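The abstract does not spell out how the saliency maps enter training. Below is a minimal PyTorch-style sketch of one plausible reading, in which a weak saliency map (e.g., a segmentation mask or a filled bounding box) is scaled and added to the input as a surrogate for an adversarial perturbation, so no attack needs to be run during training. The function name sat_training_step, the blending coefficient eta, and the mask format are illustrative assumptions, not the paper's exact recipe.

    import torch
    import torch.nn.functional as F

    def sat_training_step(model, optimizer, images, labels, masks, eta=0.03):
        # Hypothetical sketch: hyperparameters and mask handling are
        # assumptions for illustration, not the published SAT procedure.
        # images: (B, C, H, W) tensors in [0, 1]
        # masks:  (B, 1, H, W) weak saliency maps (binary segmentation
        #         masks or filled bounding boxes), broadcast over channels
        # Blend the weak saliency map into the input; this stands in for
        # an adversarial perturbation, so no attack is computed.
        augmented = torch.clamp(images + eta * masks, 0.0, 1.0)

        optimizer.zero_grad()
        loss = F.cross_entropy(model(augmented), labels)
        loss.backward()
        optimizer.step()
        return loss.item()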