When you train a model on private data, how much of that information does the model retain? Katharine Jarmul reviews research on attacks that extract training data from trained models, exposing potentially sensitive information. She then shares potential defenses and best practices for training models on private or sensitive data.
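To make the class of attack concrete: one well-studied example is membership inference, where an attacker decides whether a given record was in the training set by observing how confidently the model handles it. The sketch below is illustrative only and is not from the talk; it uses a deliberately overfit lookup-style "model" (a stand-in for a high-capacity network trained to zero loss) and an attacker who flags any query point the model has effectively memorized. All names (`predict`, `attacker_guesses_member`, the threshold) are hypothetical.

```python
import random

random.seed(0)

# Toy "private" training data and a disjoint set of non-member points.
train = [(random.random(), i % 2) for i in range(20)]
non_members = [(random.random(), i % 2) for i in range(20)]

def predict(x, memorized):
    """An overfit model: a 1-nearest-neighbor lookup over the training set."""
    nearest = min(memorized, key=lambda p: abs(p[0] - x))
    return nearest[1]

def attacker_guesses_member(x, memorized, threshold=1e-9):
    """Membership inference: if the model has a training point essentially
    identical to the query, the query was very likely a training member."""
    nearest = min(memorized, key=lambda p: abs(p[0] - x))
    return abs(nearest[0] - x) < threshold

members_flagged = sum(attacker_guesses_member(x, train) for x, _ in train)
outsiders_flagged = sum(attacker_guesses_member(x, train) for x, _ in non_members)
print(f"members flagged: {members_flagged}/20, "
      f"non-members flagged: {outsiders_flagged}/20")
```

The attacker separates members from non-members almost perfectly here precisely because the model memorized its training set; defenses such as differentially private training work by bounding how much any single record can influence the model, which shrinks exactly this gap.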
Katharine Jarmul is a data scientist and cofounder of KIProtect, a data security and privacy company for data science workflows, based in Berlin, Germany. She researches and is passionate about ethical machine learning, natural language processing, data privacy, and information security.
©2018, O’Reilly UK Ltd • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. • confreg@oreilly.com