Mar 15–18, 2020

Schedule: Ethics, Bias, & Explainability sessions

11:00am–11:40am Tuesday, March 17, 2020
Location: 210 F
Divya Sivasankaran (integrate.ai)
In recent years, there has been a lot of attention on the need for ethical considerations in ML and on ways to address bias at different stages of the ML pipeline. However, there hasn't been much focus on how to bring fairness to ML products. Divya Sivasankaran explores the key challenges in operationalizing fairness in ML products, and how to overcome them.
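The session abstract doesn't include code, but one common first step in operationalizing fairness is computing group fairness metrics on a model's predictions. Below is a minimal sketch, assuming a binary classifier and a binary protected attribute; the demographic parity difference shown here is one standard metric, not necessarily the one Sivasankaran covers, and all data is hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    y_pred: binary model predictions (0/1)
    group:  binary protected-attribute membership (0/1)
    Values near 0 suggest the model treats both groups similarly.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

# Hypothetical predictions for applicants in two demographic groups
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(preds, groups))  # -0.5: group 1 gets far fewer positives
```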
11:50am–12:30pm Tuesday, March 17, 2020
Location: 210 F
Ilana Golbin (PwC), Anand Rao (PwC)
Join in for a practitioner’s overview of the risks of AI and a look at responsible AI deployment within an organization. You'll discover how to ensure the safety, security, standardized testing, and governance of AI systems, and how models can be fooled or subverted. Ilana Golbin and Anand Rao illustrate how organizations safeguard AI applications and vendor solutions to mitigate AI risks.
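The abstract doesn't specify how models get fooled, but the classic illustration is an adversarial perturbation. The sketch below applies the fast gradient sign method (FGSM) to a toy logistic-regression classifier; the weights and input are hypothetical, and this stands in for the general attack class rather than anything specific to the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic-regression model over 4 features
w = np.array([1.2, -0.7, 0.5, 2.0])
b = -0.3

x = np.array([0.9, 0.1, 0.4, 0.8])  # legitimate input, true label y = 1
y = 1.0

p = sigmoid(w @ x + b)              # ~0.93: confidently "positive"

# Gradient of the cross-entropy loss with respect to the input is (p - y) * w
grad_x = (p - y) * w

# FGSM: nudge each feature by epsilon in the direction that increases the loss
epsilon = 0.6
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score:       {p:.3f}")
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")  # ~0.47: crosses the 0.5 threshold
```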
1:45pm–2:25pm Tuesday, March 17, 2020
Location: 210 F
Chanchal Chatterjee (Google)
Financial services companies use machine learning models to solve critical business problems, and regulators demand model explainability. Chanchal Chatterjee shares how Google solved business-critical financial services problems such as credit card fraud, anti-money laundering, lending risk, and insurance loss using complex machine learning models that can be explained to regulators.
2:35pm–3:15pm Tuesday, March 17, 2020
Location: 210 F
Ankur Taly (Fiddler)
Ankur Taly showcases "slice and explain," a new paradigm for model explanations that unifies several existing explanation tools into a single framework. You'll learn how to leverage the framework as a data scientist, business user, or regulator to successfully analyze models.
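The abstract doesn't spell out the framework's mechanics, but the core idea of slicing data and explaining each slice can be approximated with off-the-shelf tools. A minimal sketch follows, assuming scikit-learn and using permutation importance as the per-slice explainer; the slicing attribute and dataset are synthetic stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in dataset; feature 0 doubles as the slicing attribute
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# "Slice" on feature 0, then explain each slice separately and compare
for name, mask in [("feature0 < 0", X[:, 0] < 0), ("feature0 >= 0", X[:, 0] >= 0)]:
    result = permutation_importance(model, X[mask], y[mask],
                                    n_repeats=10, random_state=0)
    print(name, np.round(result.importances_mean, 3))
```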
4:15pm–4:55pm Tuesday, March 17, 2020
Location: 210 F
Bahman Bahmani (Rakuten)
With the California Consumer Privacy Act (CCPA) looming, Europe’s GDPR still sending shockwaves, and public awareness of privacy breaches growing, we're in the early days of a new era of personal data protection. Bahman Bahmani explores the challenges and opportunities for AI in this new era and provides actionable insights to help you navigate your path to AI success.
11:00am–11:40am Wednesday, March 18, 2020
Location: 210 F
Ramsundar Janakiraman
Devices find their way around the network and proxy the intent of the users behind them; leveraging this information for behavior analytics can raise privacy concerns. Selective use of embedding models on a corpus crafted from anonymized data can address these concerns. Ramsundar Janakiraman details a way to build representations that capture behavioral insights while preserving user privacy.
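The abstract names embedding models over a corpus built from anonymized data; here is a minimal sketch of that pattern, assuming gensim's Word2Vec as the embedding model and a salted hash for anonymization. The activity sequences, salt, and token names are all hypothetical.

```python
import hashlib
from gensim.models import Word2Vec

SALT = b"rotate-me-regularly"  # hypothetical secret; rotate to hinder re-identification

def anonymize(token: str) -> str:
    """Replace an identifying token (device ID, hostname) with a stable hash."""
    return hashlib.sha256(SALT + token.encode()).hexdigest()[:12]

# Hypothetical per-device activity sequences: services each device contacted
raw_sessions = [
    ["printer-7", "fileserver", "intranet-wiki"],
    ["printer-7", "fileserver", "badge-reader"],
    ["intranet-wiki", "fileserver", "printer-7"],
]
corpus = [[anonymize(tok) for tok in session] for session in raw_sessions]

# Skip-gram embeddings: tokens seen in similar contexts get similar vectors,
# preserving behavioral structure without exposing raw identities
model = Word2Vec(sentences=corpus, vector_size=16, window=2,
                 min_count=1, sg=1, seed=0)
print(model.wv[anonymize("printer-7")].shape)  # (16,)
```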
11:50am–12:30pm Wednesday, March 18, 2020
Location: 210 F
Krishna Gade (Fiddler Labs)
Krishna Gade outlines how "explainable AI" fills a critical gap in operationalizing AI and how to adopt an explainable approach across the end-to-end ML workflow, from training to production. You'll discover the benefits of explainability, such as early identification of biased data and greater confidence in model outputs.
1:45pm–2:25pm Wednesday, March 18, 2020
Location: 210 F
Daniel Jeffries (Pachyderm)
With algorithms making more and more decisions in our lives, from who gets hired and fired to who goes to jail, it’s more critical than ever that we make AI auditable and explainable in the real world. Daniel Jeffries breaks down how you can make your AI and ML systems auditable and transparent right now with a few classic IT techniques your team already knows well.
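The abstract doesn't list the techniques, but a plausible example of the "classic IT" approach is an append-only audit log: record a model version and an input hash alongside every prediction, just as you would log requests to any other service. The sketch below is illustrative only; the model name, fields, and file path are hypothetical, not from the talk.

```python
import hashlib
import json
import time

MODEL_VERSION = "churn-model:v1.4.2"  # hypothetical; e.g., a git tag or image digest

def log_prediction(features: dict, prediction, logfile="predictions.log"):
    """Append an audit record for every prediction the service makes."""
    record = {
        "ts": time.time(),
        "model": MODEL_VERSION,
        # Hash the raw input so records are traceable without storing PII
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

print(log_prediction({"tenure_months": 7, "plan": "basic"}, "churn"))
```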
2:35pm–3:15pm Wednesday, March 18, 2020
Location: 210 F
Moin Nadeem (Intel)
Real-world data is highly biased, yet we still train AI models on it. This leads to models that can be offensive and discriminatory; for instance, models have learned that male engineers are preferable and therefore discriminate when used in hiring. Moin Nadeem explores how to assess the social biases that popular models exhibit and how to leverage those assessments to build fairer models.
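One simple way to probe a model for social bias, not necessarily the method Nadeem presents, is to compare the probabilities a masked language model assigns to contrasting completions. The sketch below assumes Hugging Face Transformers and the bert-base-uncased checkpoint; the probe sentence is illustrative.

```python
from transformers import pipeline

# Probe a masked language model for gendered associations
unmasker = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The engineer finished the design because [MASK] was on a deadline."
for result in unmasker(sentence, targets=["he", "she"]):
    print(f"{result['token_str']:>4}: {result['score']:.4f}")
```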

Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

Become a sponsor

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries