Sep 23–26, 2019

Schedule: Ethics sessions

Ethics and compliance are areas of interest to many in the data community. Beyond privacy, data professionals are increasingly engaged in topics such as fairness, accountability, transparency, and explainability in machine learning. Are the data sets used for model training representative of the broader population? In certain application domains and settings, transparency and interpretability are essential, and regulators may require more transparent models, even at the expense of predictive power and accuracy. More generally, how do companies mitigate risk when using ML?

11:20am–12:00pm Wednesday, September 25, 2019
Location: 3B - Expo Hall
Harsha Nori (Microsoft), Samuel Jenkins (Microsoft), Rich Caruana (Microsoft)
Understanding decisions made by machine learning systems is critical for sensitive uses, ensuring fairness, and debugging production models. Interpretability is a maturing field of research that presents many options for trying to understand model decisions. Microsoft is releasing new tools to help you train powerful, interpretable models and interpret decisions of existing black-box systems.
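To give a feel for this kind of tooling, the sketch below assumes the session refers to Microsoft's open source InterpretML package (`interpret` on PyPI); it trains a glassbox Explainable Boosting Machine on a stand-in scikit-learn dataset and inspects its explanations. This is an illustrative sketch, not a transcript of the talk's demo.

```python
# Illustrative sketch only; assumes the open source `interpret` package and scikit-learn.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# A stand-in tabular classification dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machine: a glassbox model that learns an additive
# contribution function per feature, so its decisions can be inspected directly.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # overall per-feature contributions
show(ebm.explain_local(X_test[:5], y_test[:5]))   # explanations for individual predictions
```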
1:15pm–1:55pm Wednesday, September 25, 2019
Location: 1E 14
Andrew Burt (Immuta), Brenda Leong (Future of Privacy Forum)
Machine learning techniques are being deployed across almost every industry and sector. But this adoption comes with real, and often underestimated, privacy and security risks. In this session, Immuta and the Future of Privacy Forum will convene leading industry representatives and experts to discuss real-life examples of when ML goes wrong and the lessons learned.
2:05pm–2:45pm Wednesday, September 25, 2019
Location: 1A 12/14
Mikio Braun (Zalando SE)
With ML becoming increasingly mainstream, the side effects of using machine learning and AI on our lives are becoming more visible. Extra measures must be taken to make machine learning models fair and unbiased. In addition, awareness of the need to preserve privacy in ML models is rapidly growing.
2:05pm–2:45pm Wednesday, September 25, 2019
Location: 1E 14
Andrew Burt (Immuta), Brenda Leong (Future of Privacy Forum)
From the EU to California and China, more and more of the world is regulating how data can be used. In this session, Immuta and the Future of Privacy Forum will convene leading experts on law and data science for a deep dive into ways to regulate the use of AI and advanced analytics. Come learn why these laws are being proposed, how they’ll impact data, and what the future has in store.
2:55pm–3:35pm Wednesday, September 25, 2019
Location: 1E 10/11
Farrah Bostic (The Difference Engine)
We are living in a culture obsessed with predictions. In politics and business, we collect data in service of that obsession. But our need for certainty and control leads some organizations to be duped by unproven technology or pseudoscience, often with unforeseen societal consequences. This talk looks at historical (and sometimes funny) examples of sacrificing understanding for "data".
5:25pm–6:05pm Wednesday, September 25, 2019
Location: 1E 14
Brindaalakshmi K (Independent Consultant)
There is no standard for collecting gender data. This session looks at the implications of that gap in the context of a developing country like India: the exclusion of individuals beyond the binary genders of male and female, and how this exclusion permeates beyond the public sector into private-sector services.
11:20am–12:00pm Thursday, September 26, 2019
Location: 1A 12/14
Alejandro Saucedo (The Institute for Ethical AI & Machine Learning)
Undesired bias in machine learning has become a worrying topic due to numerous high-profile incidents. This talk demystifies machine learning bias through a hands-on example: automating the loan approval process for a company. It introduces key tools and techniques from the latest research that allow us to assess and mitigate undesired bias in machine learning models.
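As a taste of what "assessing undesired bias" can look like in practice, here is a minimal sketch using plain pandas on hypothetical loan decisions; it is not the speaker's exact tooling. It compares approval rates across a protected attribute and computes a disparate impact ratio.

```python
# Minimal, illustrative sketch on hypothetical data (not the session's exact tools):
# compare approval rates across a protected attribute and flag large disparities.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   1,   1,   0,   1,   1,   0],
})

# Approval rate per group.
rates = decisions.groupby("gender")["approved"].mean()
print(rates)

# Disparate impact ratio: the least-favored group's rate over the most-favored group's rate.
# Values well below 1.0 (a common rule of thumb is < 0.8) suggest undesired bias that
# mitigation techniques would then need to address.
print("disparate impact ratio:", rates.min() / rates.max())
```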
1:15pm–1:55pm Thursday, September 26, 2019
Location: 1E 10/11
David Castillo (Capital One)
The head of Capital One's Center for Machine Learning will share best practices for building a Responsible AI program in the enterprise, from multidisciplinary internal working groups to research & development.
3:45pm–4:25pm Thursday, September 26, 2019
Location: 1E 14
Audrey Lobo-Pulo (Phoensight), Annette Hester (National Energy Board, Canada), Ryan Hum (National Energy Board, Canada)
As new digital platforms emerge and governments look at new ways to engage with citizens, there is an increasing awareness of the role these platforms play in shaping public participation and democracy. This talk examines the design attributes of civic engagement technologies, and their ensuing impacts. A framework for better achieving desired outcomes is demonstrated with a NEB Canada case study.
