More wearable adversarial examples

On the topic of one of my side interests: this paper from back in October details a strategy for real-world adversarial attacks on object detectors. The authors draw a clear distinction about the problem being faced here: evading an image classifier is one thing, since that can just be a matter of making the classifier mistake a face for a car, a cat, a tree, or anything else. Creating a pattern that consistently evades an object detector is a harder task, because a detector scores many candidate regions against a wide set of pre-defined object classes, and the pattern has to suppress every one of those detections at once.
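To make that distinction concrete, here's a minimal sketch in PyTorch of the two objectives. `classifier` and `detector` are hypothetical stand-ins (a classifier returning per-image logits, a detector returning per-box class scores), and neither loss is the authors' exact formulation; the point is only that the detector objective has to range over every candidate box rather than a single label.

```python
import torch
import torch.nn.functional as F


def classifier_attack_loss(classifier, x, true_label):
    """Untargeted evasion of a classifier: push the single predicted
    label away from the true one (e.g. make a face stop being 'person')."""
    logits = classifier(x)  # (batch, num_classes)
    targets = torch.full((x.shape[0],), true_label,
                         dtype=torch.long, device=x.device)
    # Minimizing this loss maximizes cross-entropy on the true label,
    # i.e. drives the prediction toward any other class.
    return -F.cross_entropy(logits, targets)


def detector_attack_loss(detector, x, person_class):
    """Evasion of a detector: every candidate box scores every class,
    so the pattern must drive the 'person' score down in all boxes at once."""
    scores = detector(x)  # (batch, num_boxes, num_classes), assumed in [0, 1]
    person_scores = scores[..., person_class]  # (batch, num_boxes)
    # Minimize the strongest remaining detection in each image.
    return person_scores.max(dim=1).values.mean()
```

In an attack like the paper's, the optimization variable is the printed pattern itself rather than the input image, updated by gradient descent on a loss like the second one across many training images and placements, which is part of what makes a consistently effective real-world pattern so hard to find.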

Long story short, the authors found a truly hideous but effective adversarial pattern. The paper's lead figure sums it up well, and its caption is perfect:

[Figure: lead figure from the paper (yolov2.png)]

It’s worth reading, especially because some of the methods aren’t actually very technical (e.g., experiments with paper dolls).

It’s notable to me that three of the four authors are from Facebook AI. I know there’s a perception that the AI/ML labs at the big tech companies have extensive freedom to work on interesting technical challenges, but I’m still left wondering how Facebook might use this knowledge. I’d presume they’d use it to make their object recognizers and classifiers (and, by extension, real-world, human-scale recognition systems) more robust. Perhaps it’s just nice that work on adversarial examples continues to be published openly.

Here’s that citation:

Wu, Z., Lim, S.-N., Davis, L. & Goldstein, T. Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors. arXiv:1910.14667 [cs, math] (2019).