October 28–31, 2019

Neural structured learning in TensorFlow

Da-Cheng Juan (Google Research), Sujith Ravi (Google AI)
4:10pm–4:50pm Thursday, October 31, 2019
Location: Great American Ballroom J/K

Who is this presentation for?

  • Engineers, researchers, data scientists, and developers who construct and train neural networks with TensorFlow




Neural Structured Learning (NSL) is an easy-to-use, open-source TensorFlow framework that both novice and advanced developers can use to train neural networks with structured signals. NSL can be applied to build accurate and robust models for vision, language understanding, and prediction in general.

Many machine learning tasks benefit from using structured data that contains rich relational information among the samples. These structures can be given explicitly (e.g., as a graph) or induced implicitly (e.g., as adversarial examples). Leveraging structured signals during training allows developers to achieve higher model accuracy, particularly when the amount of labeled data is relatively small. Training with structured signals also leads to more robust models.
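The idea behind training with structured signals can be illustrated with a minimal sketch (plain Python, not the actual NSL API): the total training objective combines an ordinary supervised loss with a regularizer that penalizes distance between the representations of neighboring samples in the graph. All function names and numbers below are illustrative assumptions, not part of the framework.

```python
def supervised_loss(pred, label):
    # Squared error for a single sample (a stand-in for any task loss).
    return (pred - label) ** 2

def neighbor_loss(embedding, neighbor_embeddings, edge_weights):
    # Penalize the distance between a sample's embedding and the
    # embeddings of its graph neighbors, scaled by each edge's weight.
    return sum(w * (embedding - e) ** 2
               for e, w in zip(neighbor_embeddings, edge_weights))

def total_loss(pred, label, embedding, neighbor_embeddings,
               edge_weights, alpha=0.5):
    # Structured-signal training objective (conceptual form):
    #   supervised loss + alpha * graph-neighbor regularizer
    return supervised_loss(pred, label) + alpha * neighbor_loss(
        embedding, neighbor_embeddings, edge_weights)

# A sample predicted 0.8 with label 1.0, embedding 0.6, and two graph
# neighbors whose embeddings are 0.5 and 0.9 (edge weights 1.0 and 0.5).
loss = total_loss(0.8, 1.0, 0.6, [0.5, 0.9], [1.0, 0.5])
```

Because the regularizer pulls neighboring samples toward similar representations, unlabeled neighbors also contribute a training signal, which is why this helps most when labeled data is scarce.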

Da-Cheng Juan and Sujith Ravi explore the concept, framework, and workflow of NSL and provide code examples for practitioners and developers.

Prerequisite knowledge

  • A basic understanding of neural networks and TensorFlow

What you'll learn

  • Discover the concept, framework, and workflow of NSL

Da-Cheng Juan

Google Research

Da-Cheng Juan is a senior software engineer at Google Research, exploring graph-based machine learning, deep learning, and their real-world applications. Da-Cheng was the recipient of the 2012 Intel PhD Fellowship. His current research interests span semi-supervised learning, convex optimization, and large-scale deep learning. He received his PhD from the Department of Electrical and Computer Engineering and his master’s degree from the Machine Learning Department, both at Carnegie Mellon University. Da-Cheng has published more than 30 research papers in related fields; in addition to research, he also enjoys algorithmic programming and has won several awards in major programming contests.


Sujith Ravi

Google AI

Sujith Ravi is a senior staff research scientist and senior manager at Google AI, where he leads the company’s large-scale graph-based machine learning platform and on-device machine learning efforts for products used by millions of people every day in Search, Gmail, Photos, Android, and YouTube. These technologies power features like Smart Reply, image search, on-device predictions in Android, and platforms like Neural Structured Learning and Learn2Compress. Sujith has authored over 90 scientific publications and patents in top-tier machine learning and natural language processing conferences, and his work won the SIGDIAL Best Paper Award in 2019 and the ACM SIGKDD Best Research Paper Award in 2014. His work has been featured in Wired, Forbes, Forrester, the New York Times, TechCrunch, VentureBeat, Engadget, and New Scientist, among others, and he’s a mentor for Google Launchpad startups. Sujith was the co-chair (AI and deep learning) for the 2019 National Academy of Engineering (NAE) Frontiers of Engineering symposium. He was a co-chair for ICML 2019, NAACL 2019, and NeurIPS 2018 ML workshops and regularly serves as senior/area chair and PC member of top-tier machine learning and natural language processing conferences.

