Machine learning is already at the core of many critical systems, including healthcare, cybersecurity, finance, and transportation. Papers on adversarial machine learning are piling up on arXiv, but what would a system that assesses the safety of an ML system look like in practice? What does it mean for data scientists to guarantee that their system is adequately protected from adversarial manipulation?
Ram Shankar Kumar shares a framework and corresponding best practices for quantitatively assessing the safety of your ML systems. The opportunities when such a framework is put into effect are plentiful; for a start, you regain your customers' trust: ML systems aren't simply brittle or safe; they come in varying, quantifiable degrees of safety.
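To make "quantifiable degrees of safety" concrete, here is a minimal, hypothetical sketch (not from the talk) of one such metric: for a linear classifier sign(w·x + b), an adversary with an L-infinity perturbation budget ε can shift the score by at most ε·‖w‖₁, so a point is certifiably robust when |w·x + b| > ε·‖w‖₁. The fraction of certified points then serves as a safety score that degrades as ε grows. All names and values below are illustrative.

```python
# Hypothetical illustration: certified-robustness rate for a linear
# classifier under an L-infinity perturbation budget eps.

def margin(w, b, x):
    # Signed score of the linear classifier at point x.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def certified_robust_fraction(w, b, points, eps):
    """Fraction of points whose prediction cannot be flipped by any
    L-inf perturbation of size <= eps: one quantifiable safety score."""
    l1 = sum(abs(wi) for wi in w)  # worst-case score shift is eps * ||w||_1
    robust = sum(1 for x in points if abs(margin(w, b, x)) > eps * l1)
    return robust / len(points)

# Toy 2-D classifier and a few test points.
w, b = [1.0, -2.0], 0.5
points = [[1.0, 0.0], [0.2, 0.3], [3.0, -1.0], [0.0, 0.2]]
print(certified_robust_fraction(w, b, points, eps=0.1))  # -> 0.5
```

The same idea generalizes to nonlinear models via certified-defense techniques such as randomized smoothing, where the guarantee is probabilistic rather than exact.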
This talk represents work from Azure Security Data Science and Microsoft Research and work done at the Berkman Klein Center at Harvard University.
Ram Shankar is a data cowboy on the Azure security data science team at Microsoft, where his team focuses on modeling massive amounts of security logs to surface malicious activity. His work has appeared in industry conferences like DEF CON, BSides, BlueHat, DerbyCon, MIRCon, Infiltrate, and Strata as well as academic conferences like NIPS and ACM-CCS. Ram holds a degree focused on machine learning and security from Carnegie Mellon University. He’s currently an affiliate at the Berkman Klein Center at Harvard, exploring the intersection of machine learning and security.
©2019, O'Reilly Media, Inc. • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.