October 28–31, 2019

Diagnose and explain: Neural X-ray diagnosis with visual and textual evidence

4:10pm–4:50pm Wednesday, October 30, 2019
Location: Grand Ballroom A/B

Who is this presentation for?

  • NLP researchers working with clinical data, radiologists, and healthcare specialists

Level

Intermediate

Description

In the past few months, Mila conducted a study to understand how deep learning models are being used in clinical settings, especially in radiology. A common assumption is that, because they sometimes outperform humans at predicting diseases, deep learning models occupy an important place in radiologists’ workflows. Unfortunately, this is not the case when it comes to X-ray interpretation.

The study found that radiologists don’t rely on diagnoses produced by black-box systems when they don’t have access to the clinical findings supporting those diagnoses. This suggests that predicting diseases without any form of clinical explanation is of little interest to radiologists, regardless of prediction accuracy. Meanwhile, a radiology report contains several sections, including “clinical findings” and “impression,” which constitute textual evidence supporting a diagnosis. Since large free-text databases with such data exist, the researchers studied whether a language model can be trained to generate these sections at test time with satisfying clinical accuracy.

Wisdom d’Almeida dives into what makes this task challenging from a natural language modeling point of view and presents a novel approach to optimizing language models for clinical pertinence. He details the design and training of a medical report generation model with TensorFlow, as well as its testing through a TensorFlow.js web interface. The dataset used is MIMIC-CXR, a large, publicly available database of chest radiographs paired with free-text radiology reports.
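To make the setup concrete, here is a minimal, hypothetical sketch (not the speaker's actual architecture) of an encoder-decoder report generator in TensorFlow 2/Keras: a pretrained CNN encodes the chest X-ray, and a teacher-forced LSTM decodes report tokens conditioned on the image features. The DenseNet121 backbone, the hyperparameters, and the build_report_generator name are all illustrative assumptions.

```python
import tensorflow as tf

# Illustrative hyperparameters -- not taken from the talk.
VOCAB_SIZE = 10_000        # size of the report-token vocabulary
EMBED_DIM = 256            # token embedding dimension
UNITS = 512                # decoder LSTM width
IMAGE_SIZE = (224, 224)

def build_report_generator():
    """Encoder-decoder sketch: a CNN encodes the X-ray; an LSTM,
    conditioned on the image features, decodes report tokens."""
    # Image encoder: a pretrained backbone with global average pooling.
    backbone = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=IMAGE_SIZE + (3,))
    backbone.trainable = False  # frozen for this sketch

    image_in = tf.keras.Input(shape=IMAGE_SIZE + (3,), name="xray")
    features = backbone(image_in)  # pooled image features, (batch, 1024)
    state_h = tf.keras.layers.Dense(UNITS, activation="tanh")(features)
    state_c = tf.keras.layers.Dense(UNITS, activation="tanh")(features)

    # Text decoder: teacher-forced LSTM over shifted report tokens.
    tokens_in = tf.keras.Input(shape=(None,), dtype=tf.int32, name="report_tokens")
    embedded = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(tokens_in)
    hidden = tf.keras.layers.LSTM(UNITS, return_sequences=True)(
        embedded, initial_state=[state_h, state_c])
    logits = tf.keras.layers.Dense(VOCAB_SIZE)(hidden)  # next-token scores

    return tf.keras.Model([image_in, tokens_in], logits)

model = build_report_generator()
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

At inference time, the decoder would be run autoregressively, feeding back its own predictions, to generate the "findings" and "impression" text for a new X-ray.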

Prerequisite knowledge

  • Experience with language modeling, image captioning, and image classification

What you'll learn

  • Learn how to use TensorFlow and Cloud TPUs to train a model for radiology report generation and how to optimize language models for clinical pertinence (making them more "clinically aware"); see the Cloud TPU sketch below
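As a companion to this bullet, here is a minimal, hypothetical sketch of the Cloud TPU side, assuming a TF 2.x runtime and reusing the build_report_generator sketch from the description above; the TPU name is a placeholder, and older TF 2.x releases expose the strategy as tf.distribute.experimental.TPUStrategy.

```python
import tensorflow as tf

# Hypothetical Cloud TPU setup; "my-tpu" is a placeholder TPU name.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)  # experimental.TPUStrategy in older TF 2.x

with strategy.scope():
    # Variables must be created inside the strategy scope so they are
    # replicated across the TPU cores.
    model = build_report_generator()  # encoder-decoder sketch from above
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Training would then stream a tf.data pipeline over MIMIC-CXR image/report pairs:
# model.fit(train_dataset, epochs=10)
```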

Wisdom d'Almeida

Mila

Wisdom d’Almeida is a Visiting Researcher at Mila, working with Yoshua Bengio on System 2 reasoning with deep learning models, based on the Consciousness Prior. His other research interests include grounded language learning and AI explainability. In the past, Wisdom worked on natural language understanding for common-sense reasoning, with applications to areas such as healthcare; his master’s dissertation was about medical report generation with natural language explanations. Wisdom’s work in AI won a Government of India National Award in 2018. Previously, he interned at Google in San Francisco and demoed at Google Cloud Next 2018. Wisdom holds a master’s degree from KIIT in India and a BS from Université de Lomé in Togo, where he grew up. In his spare time, you can see him struggling with his vocal cords and his guitar strings.

