Adversarial examples generated with standard methods do not consistently fool a classifier in the physical world, where viewpoint shifts, camera noise, and other natural transformations degrade the attack. Such examples also assume complete control over the direct input to the classifier, which is impossible in many real-world systems.
Andrew Ilyas, Logan Engstrom, and Anish Athalye offer an overview of an algorithm that produces adversarial examples that remain adversarial under an attacker-chosen distribution of transformations. They demonstrate the approach in two dimensions, producing adversarial images that are robust to noise, distortion, and affine transformation, and show that such input distortions are ineffective as a defense against these robust adversarial examples. They then apply the algorithm to produce the first physical 3D-printed adversarial objects, demonstrating that the attack works in the real world.
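A minimal sketch of this idea, optimizing the perturbation over a distribution of transformations rather than a single clean input, is shown below. It assumes a PyTorch image classifier; the particular transformation distribution (random rotation and brightness), the hyperparameters, and the function name are illustrative assumptions, not the speakers' exact implementation.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def eot_adversarial_example(model, x, target_class, epsilon=0.05,
                            step_size=0.01, num_steps=200, samples_per_step=10):
    # x: image batch of shape (N, C, H, W) with values in [0, 1].
    # At each step, gradients are accumulated over randomly sampled
    # transformations so the perturbation stays adversarial under the
    # whole transformation distribution, not just the clean input.
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = torch.zeros((), device=x.device)
        for _ in range(samples_per_step):
            # Random rotation and brightness stand in for the
            # attacker-chosen transformation distribution (an assumption).
            angle = float(torch.empty(1).uniform_(-30.0, 30.0))
            brightness = float(torch.empty(1).uniform_(0.8, 1.2))
            t_x = TF.adjust_brightness(TF.rotate(x_adv, angle), brightness)
            logits = model(t_x)
            # Maximize the target class's log-probability under this transform.
            loss = loss + F.log_softmax(logits, dim=1)[:, target_class].mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            # Keep the perturbation within an L-infinity ball and the image valid.
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

For the physical 3D-printed objects, the same optimization presumably applies with the 2D image transformations replaced by a renderer that maps an object texture to images under varying pose, lighting, and camera noise.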
Andrew Ilyas is an undergraduate student at the Massachusetts Institute of Technology.
Logan Engstrom is an undergraduate student at the Massachusetts Institute of Technology.
Anish Athalye is a graduate student at the Massachusetts Institute of Technology.