Machine-learning models are popular in security tasks such as malware detection, network intrusion detection, and spam detection. These models can achieve extremely high accuracy on test datasets and are widely deployed in practice.
However, such results hold only for the particular test datasets used. Unlike many other application domains, security tasks involve adversaries who actively respond to the classifier. For example, attackers may craft new malware deliberately designed to evade existing classifiers. This violates a core assumption of machine learning: that the training data and the operational data are drawn from the same distribution. As a result, it is important to consider attackers' efforts to disrupt or evade the learned models.
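The evasion idea above can be made concrete with a minimal, self-contained sketch (not taken from the talk): a toy linear "malware detector" is trained on synthetic data, and an attacker who knows the model weights shifts a correctly flagged sample against the weight vector until it is misclassified. The data, model, and perturbation step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D "benign vs. malicious" data: two Gaussian blobs.
X0 = rng.normal(loc=-1.0, scale=0.5, size=(100, 2))  # class 0: benign
X1 = rng.normal(loc=+1.0, scale=0.5, size=(100, 2))  # class 1: malicious
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Train logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(class 1)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    """1 = flagged as malicious, 0 = passes as benign."""
    return int(x @ w + b > 0)

# Take a sample the model correctly flags as malicious.
x = X1[0]
assert predict(x) == 1

# Evasion: step against the gradient of the score w.r.t. the input;
# for a linear model that direction is simply sign(w). Increase the
# perturbation budget eps until the sample evades detection.
eps = 0.0
x_adv = x.copy()
while predict(x_adv) == 1:
    eps += 0.1
    x_adv = x - eps * np.sign(w)

print(f"evaded detection with L-inf perturbation eps = {eps:.1f}")
```

The attack succeeds with a small perturbation precisely because the attacker is drawing operational inputs from a different distribution than the one the model was trained on.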
David Evans provides an introduction to the techniques adversaries use to circumvent machine-learning classifiers and presents case studies of classifiers under attack. David then outlines methods for automatically predicting the robustness of a classifier used in an adversarial context, along with techniques for hardening a classifier to decrease its vulnerability to attackers.
David Evans is a professor of computer science at the University of Virginia and leader of the Security Research Group. His research focuses on privacy and security for computing systems and on empowering individuals and organizations to control how their data is used and shared. He is the author of an open computer science textbook and a children's book on combinatorics and computability, and he teaches one of the world's most popular MOOCs. He won the Outstanding Faculty Award from the State Council of Higher Education for Virginia and an all-university teaching award. He was program co-chair for the 31st and 32nd IEEE Symposia on Security and Privacy and will be program co-chair for ACM CCS 2017. He holds SB, SM, and PhD degrees in computer science from MIT.