War stories from the front lines of ML
Who is this presentation for?
- Data scientists, data and privacy engineers, privacy officers, and anyone managing a technical team
This September, Immuta and the Future of Privacy Forum will release a new version of their 2018 Beyond Explainability whitepaper focused on privacy and security in machine learning and advanced analytics. (The original whitepaper can be accessed on the Future of Privacy Forum website.)
In conjunction with the release of their new paper, Andrew Burt and Brenda Leong convene a panel of experts including David Florsek, Chris Wheeler, and Alex Beutel to detail their experiences from the front lines, diving into real-life examples of when ML goes wrong, and the lessons they learned. These practical lessons will help you use the full power of ML technologies while understanding, and responsibly managing, the potential risks and failures. Ultimately, you’ll learn to view these shortcomings, and the ability to properly address them, as part of the necessary cycle of oversight and transparency—and as key to these systems’ long-term success.
What you'll learn
- Discover concrete lessons from major organizations on how to address the risks and shortcomings of deploying machine learning systems and technologies
Andrew is Managing Partner at bnh.ai, a boutique law firm focused on AI and analytics, and Chief Legal Officer at Immuta. He is also a Visiting Fellow at Yale Law School’s Information Society Project. Previously, Andrew served as Special Advisor for Policy to the head of the Federal Bureau of Investigation’s Cyber Division, where he served as lead author on the FBI’s after action report for the 2014 attack on Sony.
A leading authority on the intersection between law and technology, Andrew has published articles in The New York Times, The Financial Times, and Harvard Business Review, where he is a regular contributor.
Andrew is a term-member of the Council on Foreign Relations, a member of the Washington, D.C. and Virginia State Bars, and a certified cyber incident response handler. He holds a JD from Yale Law School and a BA with first-class honors from McGill University.
Future of Privacy Forum
Brenda Leong is a senior counsel and director of strategy at the Future of Privacy Forum (FPF) and a Certified Information Privacy Professional/United States (CIPP/US). She oversees strategic planning of organizational goals and manages the FPF portfolio on biometrics, particularly facial recognition, along with the ethics and privacy issues associated with artificial intelligence. She works on industry standards and collaboration around privacy concerns, partnering with stakeholders and advocates to reach practical solutions to the privacy challenges posed by consumer and commercial data uses. Previously, Brenda served in the US Air Force, including policy and legislative affairs work at the Pentagon and the US Department of State. She's a graduate of the George Mason University School of Law.
David Florsek is the architect of innovation at IDEMIA National Security Solutions (IDEMIA NSS), where he leads efforts to define, develop, and deploy situational awareness platforms that integrate multiple types of intelligence-oriented data, including facial and biometric data, linguistic and lexicological data, historical and contextual data, and other forms of information. David has been responsible for developing and deploying systems ranging from the FBI's Integrated Automated Fingerprint Identification System (IAFIS) biometric criminal justice system, to air-to-ground missile-targeting systems for fighter jets, to ground-based missile-defense systems, to university financial management systems that help students track their meal plans, to maintenance inventory control systems. Previously, David led a successful small entrepreneurial consulting business and spent many years in software and system development and integration at Deloitte, Lockheed Martin Aeronautics, and Boeing, with engagements at the Centers for Disease Control and Prevention (CDC), the US Department of Veterans Affairs (VA), the Federal Bureau of Investigation (FBI), the Department of Homeland Security (DHS), the University System of Georgia, and, of course, all branches of the Department of Defense. As a developer, David designed circuitry flying today on the US Air Force's B-52 bomber as well as software and systems operational across many agencies within the US federal government. The common thread across all of these efforts was the need to find innovative solutions to seemingly intractable problems. David has pioneered efforts in data mining, AI, and deep learning to extract actionable information from what was previously considered vast quantities of trash data. He specializes in developing algorithms and concepts that bridge traditional boundaries and solve previously unsolved problems while staying within defined law and statutes.
Alex Beutel is a staff research scientist on the SIR team at Google Brain, where he leads a team working on ML fairness and researches neural recommendation and ML for systems. He earned a PhD from Carnegie Mellon University's Computer Science Department and a BS in computer science and physics from Duke University. His PhD thesis on large-scale user behavior modeling, covering recommender systems, fraud detection, and scalable machine learning, was named runner-up for the SIGKDD 2017 Doctoral Dissertation Award. He received the Best Paper Award at KDD 2016 and ACM GIS 2010, was a best-paper finalist at KDD 2014 and ASONAM 2012, and was awarded the Facebook Fellowship in 2013 and the NSF Graduate Research Fellowship in 2011. More details can be found at alexbeutel.com.