Put AI to Work
April 15-18, 2019
New York, NY

Interpretable deep learning in healthcare

Behrooz Hashemian (VideaHealth)
2:40pm–3:20pm Thursday, April 18, 2019
Interacting with AI
Location: Regent Parlor
Secondary topics: Computer Vision, Health and Medicine, Models and Methods, Reliability and Safety

Who is this presentation for?

  • Executives, AI scientists, and engineers



Prerequisite knowledge

  • A basic understanding of deep learning implementation, training, and inference

What you'll learn

  • Explore applications of AI in healthcare
  • Learn why interpretability matters and how to make deep learning models interpretable


Artificial intelligence, especially deep learning, has made major breakthroughs in complex tasks such as computer vision, speech recognition, and natural language understanding. Despite their widespread success, deep learning models are usually regarded as “black box” solutions. Better predictive power often comes with deeper models, which makes it harder to understand and interpret why a specific outcome was reached. But in a sensitive discipline like healthcare, where every decision carries long-term responsibility, qualitative and quantitative evaluation of how decisions are made is essential for the acceptance and integration of AI into the clinical workflow. It is also of utmost importance to assure regulators that appropriate evidence supports these AI-based clinical decisions.

Behrooz Hashemian shares various use cases of artificial intelligence in healthcare that are set to dramatically change clinical practice. Behrooz outlines the main challenges of integrating AI into hospitals and clinical workflows, including the interpretability of deep learning models. Interpretability is not only important for integration; it is also essential for improving a model by showing why it fails, increasing security by uncovering edge cases, and verifying the reliability and transferability of models by reasoning about their decision process. Behrooz reviews interpretability methodologies, from feature visualization to saliency maps and deconvolution, and discusses their power and limitations. He concludes by demonstrating how he and his team have applied these state-of-the-art methods to characterize lesions in medical images, paving the way for the adoption and integration of AI in hospitals.
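To make the saliency-map idea concrete, here is a minimal sketch (not from the talk, and not the presenter's code): the saliency of each input element is the magnitude of the model output's sensitivity to it. Real interpretability toolkits obtain this gradient by backpropagating through the trained network; this dependency-free toy uses a hypothetical two-layer model and finite differences instead.

```python
import numpy as np

def model(x, w1, w2):
    """Toy two-layer network: ReLU hidden layer, scalar class score."""
    h = np.maximum(0.0, w1 @ x)  # hidden activations
    return w2 @ h                # scalar score for the class of interest

def saliency_map(x, w1, w2, eps=1e-4):
    """Approximate |d(score)/d(x_i)| for each input element i.

    A deep learning framework would compute this gradient by
    backpropagation; finite differences keep the sketch self-contained.
    """
    base = model(x, w1, w2)
    sal = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        sal[i] = abs(model(xp, w1, w2) - base) / eps
    return sal

# Hypothetical "image" flattened to 6 inputs, with random fixed weights.
rng = np.random.default_rng(0)
x = rng.normal(size=6)
w1 = rng.normal(size=(4, 6))
w2 = rng.normal(size=4)

sal = saliency_map(x, w1, w2)
print(sal.round(3))  # higher values mark inputs the score is most sensitive to
```

In a clinical imaging setting, the same per-pixel sensitivity map is overlaid on the input image, highlighting the regions that drove the model's prediction for a given lesion.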


Behrooz Hashemian


Behrooz Hashemian is Vice President of Artificial Intelligence at VideaHealth. Previously, he was lead machine learning scientist at the MGH & BWH Center for Clinical Data Science, where he developed and implemented state-of-the-art machine learning models for clinical use cases using medical imaging, clinical time series, and electronic health record data. Before that, he was chief data officer at the MIT Senseable City Lab, where he focused on innovative applications of big data analytics and artificial intelligence in smart cities.