Presented by O'Reilly + Intel AI
Put AI to Work
April 15–18, 2019
New York, NY

Interpretable Deep Learning in Healthcare

Behrooz Hashemian (Massachusetts General Hospital)
2:40pm–3:20pm Thursday, April 18, 2019
Interacting with AI
Location: Regent Parlor
Secondary topics: Computer Vision, Health and Medicine, Models and Methods, Reliability and Safety

Who is this presentation for?

Executives with some technical background, AI scientists, and engineers

Level

Intermediate

Prerequisite knowledge

A basic understanding of deep learning implementation, training and inference

What you'll learn

- Applications of AI in healthcare
- Why interpretability matters
- How to make deep learning models interpretable

Description

Artificial intelligence, especially deep learning, has made major breakthroughs in complex tasks such as computer vision, speech recognition, and natural language understanding. Despite the widespread success of these deep learning models, they are usually regarded as black boxes, and better predictive power often comes from deeper models, which makes it even harder to understand and interpret why a specific outcome was reached.

While treated as a black-box solution, AI has been successfully deployed in many industries, such as advertising, e-commerce, and social media. However, in a sensitive discipline like healthcare, where every decision carries a significant long-term responsibility, qualitative and quantitative evaluation of how decisions are made is essential for the acceptance and integration of AI into clinical workflows. It is also of utmost importance to assure regulators that appropriate evidence supports these AI-based clinical decisions.

In this talk, I present various use cases of artificial intelligence in healthcare that are set to dramatically change clinical practice. I explain the main challenges of integrating AI into hospitals and clinical workflows, including the interpretability of deep learning models. Interpretability is not only important for integration; it is also essential for improving a model by showing why it fails, for increasing security by discovering edge cases, and for verifying the reliability and transferability of models by reasoning about their decision process. I then review interpretability methodologies, from feature visualization to saliency maps and deconvolution, and discuss their strengths and limitations. Finally, I demonstrate how we have exploited these state-of-the-art methods to characterize different lesions in medical images, paving the way for the adoption and integration of AI in hospitals.
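For readers unfamiliar with the saliency-map technique mentioned above, the following is a minimal sketch of a vanilla-gradient saliency map in PyTorch. It is illustrative only, not code from the talk: the pretrained resnet18 model and the input file "lesion.png" are placeholder assumptions, and input normalization is omitted for brevity.

# Minimal sketch: vanilla-gradient saliency map in PyTorch.
# Assumptions: torchvision's pretrained resnet18 as the classifier,
# and a hypothetical RGB image at "lesion.png".
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

image = Image.open("lesion.png").convert("RGB")
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
x = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
x.requires_grad_(True)              # track gradients w.r.t. the input pixels

model = models.resnet18(pretrained=True).eval()

# Forward pass, then backpropagate the top class score to the input.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency: per-pixel maximum absolute gradient across color channels.
# High values mark pixels whose perturbation most changes the class score.
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)  # shape: (224, 224)

In practice the saliency map is overlaid on the original image as a heatmap, so a clinician can see which regions drove the model's prediction.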


Behrooz Hashemian

Massachusetts General Hospital

Dr. Behrooz Hashemian is a senior machine learning scientist at the MGH & BWH Center for Clinical Data Science (CCDS), where he is responsible for developing and implementing state-of-the-art machine learning models that address a variety of clinical use cases by leveraging medical imaging data, clinical time-series data, and electronic health records. Prior to joining the CCDS, he was chief data officer at MIT's Senseable City Lab, where he focused on innovative applications of big data analytics and artificial intelligence in smart cities.
