When you train a model on private data, how much of that information does the model retain? Katharine Jarmul reviews research on attacks that extract training data from models and expose potentially sensitive information. She then shares potential defenses and best practices for training models on private or sensitive data.
Katharine Jarmul is a data scientist and cofounder of KIProtect, a data security and privacy company for data science workflows based in Berlin, Germany. Her research interests include ethical machine learning, natural language processing, data privacy, and information security.
©2018, O’Reilly UK Ltd