Poisoned Classifiers are not only backdoored, they are fundamentally broken
Mingjie Sun1
Siddhant Agarwal2
Zico Kolter1
1 Carnegie Mellon University
2 IIT Kharagpur
ICLR 2021 Workshop on Security and Safety in Machine Learning Systems; under review at ICLR 2022.

Abstract

Under a commonly studied backdoor poisoning attack against classification models, an attacker adds a small trigger to a subset of the training data such that the presence of this trigger at test time causes the classifier to always predict some target class. It is often implicitly assumed that the poisoned classifier is vulnerable only to the adversary who possesses the trigger. In this paper, we show empirically that this view of backdoored classifiers is incorrect. We describe a new threat model for poisoned classifiers, in which an adversary without knowledge of the original trigger seeks to control the poisoned classifier. Under this threat model, we propose a test-time, human-in-the-loop attack that generates multiple effective alternative triggers without access to the original backdoor or the training data. We construct these alternative triggers by first generating adversarial examples for a smoothed version of the classifier, created with a procedure called Denoised Smoothing, and then extracting colors or cropped portions of the smoothed adversarial images with human interaction. We demonstrate the effectiveness of our attack through extensive experiments on high-resolution datasets: ImageNet and TrojAI. We also compare our approach to previous work on modeling trigger distributions and find that our method is more scalable and efficient at generating effective triggers. Finally, we include a user study demonstrating that our method allows users to easily determine the existence of such backdoors in existing poisoned classifiers. Thus, we argue that there is no such thing as a secret backdoor in poisoned classifiers: poisoning a classifier invites attacks not just from the party that possesses the trigger, but from anyone with access to the classifier.
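
The attack pipeline summarized above can be sketched roughly as follows. This is a minimal PyTorch illustration, not the released implementation: base_classifier, denoiser, and all hyperparameters are placeholders, the smoothed classifier is approximated with a small Monte Carlo average over Gaussian noise as in Denoised Smoothing, the adversarial examples are produced with a generic targeted l2 PGD, and the human-in-the-loop step of extracting a color or patch from the smoothed adversarial image is reduced to a simple crop:

import torch
import torch.nn as nn


class DenoisedSmoothedClassifier(nn.Module):
    """Approximate smoothed classifier: average the softmax outputs of
    base_classifier(denoiser(x + noise)) over Monte Carlo Gaussian noise samples.
    (base_classifier and denoiser are assumed to be given pretrained modules.)"""

    def __init__(self, denoiser, base_classifier, sigma=0.25, n_samples=16):
        super().__init__()
        self.denoiser = denoiser
        self.base_classifier = base_classifier
        self.sigma = sigma
        self.n_samples = n_samples

    def forward(self, x):
        probs = 0.0
        for _ in range(self.n_samples):
            noisy = x + self.sigma * torch.randn_like(x)
            probs = probs + torch.softmax(self.base_classifier(self.denoiser(noisy)), dim=1)
        return probs / self.n_samples


def pgd_l2(model, x, target, eps=8.0, step=1.0, iters=50):
    """Targeted l2 PGD on the smoothed model: push x toward the `target` class."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        log_probs = torch.log(model(x + delta) + 1e-12)
        loss = nn.functional.nll_loss(log_probs, target)
        loss.backward()
        # Normalized gradient step (descend, since we minimize the targeted loss).
        grad = delta.grad.detach()
        grad = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        delta.data = delta.data - step * grad
        # Project back onto the l2 ball of radius eps and keep the image in [0, 1].
        norm = delta.data.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
        delta.data = delta.data * torch.clamp(eps / (norm + 1e-12), max=1.0)
        delta.data = torch.clamp(x + delta.data, 0, 1) - x
        delta.grad.zero_()
    return (x + delta).detach()


def crop_patch_as_trigger(adv_image, top, left, size):
    """Extract a cropped region of the smoothed adversarial image; in the paper
    this choice is guided by a human inspecting the image."""
    return adv_image[:, :, top:top + size, left:left + size].clone()


def paste_trigger(x, trigger, top, left):
    """Apply a candidate alternative trigger to clean test images."""
    x = x.clone()
    size = trigger.shape[-1]
    x[:, :, top:top + size, left:left + size] = trigger
    return x

A candidate trigger extracted this way would then be pasted onto clean test images to check whether it flips the poisoned classifier's prediction to the backdoor target class.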

Paper & Code

Mingjie Sun, Siddhant Agarwal, Zico Kolter
Poisoned Classifiers are not only backdoored, they are fundamentally broken
Workshop on Security and Safety in Machine Learning Systems at the Ninth International Conference on Learning Representations (ICLR), 2021
[ArXiv] [Code]