Unadversarial Examples: Designing Objects for Robust Vision

  • Hadi Salman* ,
  • Andrew Ilyas* ,
  • Logan Engstrom* ,
  • Sai Vemprala ,
  • Aleksander Madry ,
  • Ashish Kapoor

NeurIPS 2021


We study a class of realistic computer vision settings wherein one can influence the design of the objects being recognized. We develop a framework that leverages this capability to significantly improve vision models’ performance and robustness. This framework exploits the sensitivity of modern machine learning algorithms to input perturbations in order to design “robust objects,” i.e., objects that are explicitly optimized to be confidently detected or classified. We demonstrate the efficacy of the framework on a wide variety of vision-based tasks ranging from standard benchmarks, to (in-simulation) robotics, to real-world experiments. Our code can be found on GitHub.
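The core idea can be sketched as the reverse of an adversarial attack: instead of perturbing an input to *fool* a classifier, one gradient-descends on a texture or patch applied to the object so that a fixed model classifies it as the target class as confidently as possible. The sketch below is an illustrative, simplified implementation of this idea (the function name, hyperparameters, and the shared-perturbation setup are our assumptions, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def optimize_unadversarial(model, images, target_class, steps=100, lr=0.1, eps=8 / 255):
    """Optimize a single additive perturbation `delta`, shared across all
    views of the object, so that `model(images + delta)` is confidently
    classified as `target_class` (hypothetical sketch of the idea)."""
    delta = torch.zeros_like(images[0], requires_grad=True)  # shared "texture"
    opt = torch.optim.SGD([delta], lr=lr)
    labels = torch.full((images.shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        # Minimizing cross-entropy w.r.t. delta *increases* the model's
        # confidence in the target class -- the opposite of an attack.
        logits = model((images + delta).clamp(0, 1))
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation visually bounded
    return delta.detach()
```

In practice one would optimize over many renderings of the object (poses, lighting, backgrounds) so the learned texture is robust to real-world viewing conditions; the bound `eps` trades off how visually salient the texture is allowed to be.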

Publication Downloads

Unadversarial

December 22, 2020
