Presented By
O’Reilly + Intel AI
Put AI to Work
April 15-18, 2019
New York, NY

Schedule: Ethics, Privacy, and Security sessions

9:00am–12:30pm Tuesday, April 16, 2019
Implementing AI
Location: Regent Parlor
Rachel Bellamy (IBM Research), Kush Varshney (IBM Research), Karthikeyan Natesan Ramamurthy (IBM), Michael Hind (IBM Research AI)
Learn to use and contribute to the new open-source Python package AI Fairness 360 directly from its creators. Architected to translate new developments from research labs to data science practitioners in industry, this is the first comprehensive toolkit with metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias.
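As a rough illustration of the workflow this tutorial covers, the sketch below computes one bias metric with AI Fairness 360 and applies one preprocessing mitigation algorithm. The toy hiring table, column names, and choice of metric are illustrative assumptions, not the tutorial's actual materials.

# A minimal sketch, assuming the aif360 package and a made-up toy dataset.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy hiring data: 'sex' is the protected attribute, 'hired' the favorable label.
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [7, 5, 8, 6, 7, 5, 8, 6],
    'hired': [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=['hired'],
                             protected_attribute_names=['sex'])

privileged, unprivileged = [{'sex': 1}], [{'sex': 0}]

# Check for unwanted bias: difference in favorable-outcome rates between groups.
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print('statistical parity difference:', metric.statistical_parity_difference())

# Mitigate it with one of the toolkit's preprocessing algorithms (reweighing).
transformed = Reweighing(unprivileged_groups=unprivileged,
                         privileged_groups=privileged).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(transformed,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print('after reweighing:', metric_after.statistical_parity_difference())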
11:05am–11:45am Wednesday, April 17, 2019
Deepashri Varadharajan (CB Insights)
At CB Insights, we track over 3,000 AI startups across 25+ verticals. While every vertical has benefited from deep learning and improved hardware, the bottlenecks and opportunities are unique to each sector. We will explore what is driving AI applications in verticals such as healthcare, retail, and security, and analyze emerging business models.
11:05am–11:45am Wednesday, April 17, 2019
Models and Methods
Location: Regent Parlor
Siwei Lyu (University at Albany)
In this talk, I will first briefly review the evolution of the techniques behind the generation of fake media, and then introduce several digital media forensics projects I have been involved in for the detection of fake media, with a special focus on some of our recent work on detecting AI-generated fake videos (DeepFakes).
1:50pm–2:30pm Wednesday, April 17, 2019
Anand Rao (PwC)
Broader AI adoption and customer trust require AI systems to be fair, interpretable, robust, and safe. This talk synthesizes current research in FAT (fairness, accountability, and transparency) into a step-by-step methodology for addressing these issues. Case studies from financial services and healthcare illustrate the approach.
2:40pm–3:20pm Wednesday, April 17, 2019
Anna Gressel (Debevoise & Plimpton LLP), Jim Pastore (Debevoise & Plimpton LLP), Anwesa Paul (American Express)
This is a crash course on the emerging legal and regulatory frameworks governing AI, including the GDPR and the California Consumer Privacy Act. It also explores key lawsuits challenging AI in U.S. courts and unpacks the implications for companies going forward. By understanding these trends, companies can more effectively mitigate legal and regulatory risks and position their AI products for success.
4:55pm–5:35pm Wednesday, April 17, 2019
Models and Methods
Location: Grand Ballroom West
Yishay Carmiel (IntelligentWire)
In recent years, we have seen tremendous improvements in artificial intelligence, with the major breakthroughs driven by advances in neural-based models. However, the more popular these algorithms and techniques become, the more serious the consequences for data and user privacy. These issues will drastically impact the future of AI research.
4:55pm–5:35pm Wednesday, April 17, 2019
Implementing AI
Location: Mercury Rotunda
Andrew Zaldivar (Google)
The development of AI is creating new opportunities to improve the lives of all people. It is also raising new questions about how to build fairness, interpretability, and other moral and ethical values into these systems. Using Jupyter and TensorFlow, this presentation shares hands-on examples that highlight current work and recommended practices for the responsible development of AI.
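To give a rough sense of the kind of hands-on example the session promises, the sketch below uses plain TensorFlow ops to compare selection rates across two groups, one simple fairness check. The toy scores, group labels, and 0.5 decision threshold are assumptions for illustration, not the session's actual notebook code.

# A minimal sketch, assuming TensorFlow 2 and invented toy data.
import tensorflow as tf

# Toy model scores and a binary protected attribute for six examples.
scores = tf.constant([0.9, 0.2, 0.7, 0.4, 0.8, 0.3])
group = tf.constant([0, 0, 1, 1, 0, 1])

# Threshold the scores into positive/negative decisions.
preds = tf.cast(scores > 0.5, tf.float32)

def selection_rate(mask):
    # Fraction of positive decisions within one group.
    return tf.reduce_mean(tf.boolean_mask(preds, mask))

rate_g0 = selection_rate(tf.equal(group, 0))
rate_g1 = selection_rate(tf.equal(group, 1))

# A large gap between group selection rates is one simple signal of unwanted bias.
print("selection rate, group 0:", float(rate_g0))
print("selection rate, group 1:", float(rate_g1))
print("demographic parity gap: ", float(rate_g0 - rate_g1))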
1:00pm–1:40pm Thursday, April 18, 2019
Interacting with AI
Location: Rendezvous
Jeff Thompson (Stevens Institute of Technology)
What is it like to be a mobile phone, or to attach a wind sensor to a neural network? This talk outlines several recent creative projects that push the tools of AI in new directions. Part technical discussion and part case study for embedding artists in technical institutions, it explores how artists and scientists can collaborate to expand the ways AI can be used.
1:00pm–1:40pm Thursday, April 18, 2019
Law and Ethics
Location: Sutton North/Center
Joanna Bryson (University of Bath)
Although not a universally held goal, maintaining human accountability for AI is necessary for society’s long-term stability. Fortunately, the legal and technological problems of maintaining control are fairly well understood and amenable to engineering. The next problem is establishing the social and political will for assigning and maintaining accountability for intelligent artifacts.
1:50pm–2:30pm Thursday, April 18, 2019
Interacting with AI
Location: Regent Parlor
Forough Poursabzi-Sangdeh (Microsoft Research NYC)
In this talk, I will argue that to understand interpretability, we need to bring humans into the loop and run human-subject experiments. I approach the problem of interpretability from an interdisciplinary perspective that builds on decades of research in psychology, cognitive science, and social science to understand human behavior and trust.
2:40pm–3:20pm Thursday, April 18, 2019
Case Studies, Machine Learning
Location: Sutton South
Alina Matyukhina (Canadian Institute for Cybersecurity)
Machine learning models are often susceptible to adversarial deception of their input at test time, leading to poorer performance. In this session, we will investigate the feasibility of such deception against source code attribution techniques in a real-world environment, present attack scenarios targeting user identity in open-source projects, and discuss possible protection methods.
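For intuition only, the toy sketch below shows how a test-time change to a code sample's surface style can flip a simple attribution classifier's prediction. The style features, feature values, and scikit-learn classifier are invented for illustration and are not the presenter's actual attribution pipeline or attack.

# A toy sketch, assuming scikit-learn and hand-invented stylometric features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per code sample: [tabs_per_line, avg_identifier_length, comment_ratio]
X_train = np.array([
    [0.9, 12.0, 0.30],   # samples attributed to author A
    [0.8, 11.0, 0.25],
    [0.1,  4.0, 0.05],   # samples attributed to author B
    [0.2,  5.0, 0.10],
])
y_train = np.array(["A", "A", "B", "B"])

clf = LogisticRegression().fit(X_train, y_train)

# A genuine new sample written by author A.
sample = np.array([[0.85, 11.5, 0.28]])
print("original attribution:", clf.predict(sample)[0])    # expected: A

# Test-time deception: the attacker reformats the code (spaces instead of tabs,
# shorter identifiers, fewer comments) without changing its behavior.
perturbed = np.array([[0.15, 4.5, 0.07]])
print("after perturbation:  ", clf.predict(perturbed)[0])  # expected: B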