Presented By O'Reilly and Cloudera
Make Data Work
September 25–26, 2017: Training
September 26–28, 2017: Tutorials & Conference
New York, NY

NoScope: Querying videos 1,000x faster with deep learning

Daniel Kang (Stanford University)
Data science & advanced analytics, Machine Learning
Location: 1A 06/07 Level: Advanced
Secondary topics: Deep learning

Who is this presentation for?

  • Data engineers, data scientists, and managers

Prerequisite knowledge

  • Basic familiarity with data science

What you'll learn

  • Learn how to analyze real-world video datasets at scale with NoScope

Description

Video is one of the fastest-growing sources of data with rich semantic information, and advances in deep learning have made it possible to query this information with near-human accuracy. However, inference remains prohibitively expensive: even the most powerful GPU cannot run state-of-the-art models in real time.

Daniel Kang offers an overview of NoScope, a new open source project from the Stanford InfoLab under Matei Zaharia and Peter Bailis, with contributions from John Emmons and Firas Abuzaid, which runs queries over video 1,000x faster. NoScope achieves these speedups by exploiting temporal, environmental, and query-specific redundancies in video. Daniel explains how the project exploits these redundancies and how these concepts can be generalized.
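To make the cascade idea concrete, here is a minimal Python sketch of how such redundancies can be exploited at inference time. It is illustrative only, not NoScope's actual implementation: the specialized_model and reference_model callables, and all thresholds, are hypothetical stand-ins (the real system trains specialized models and tunes thresholds per query and video).

    # Illustrative NoScope-style inference cascade (not the real implementation).
    import numpy as np

    def cascade_query(frames, specialized_model, reference_model,
                      diff_thresh=10.0, low=0.2, high=0.8):
        """Label each frame True/False for "contains the target object".

        frames: iterable of HxWx3 uint8 arrays.
        specialized_model: cheap model, frame -> probability in [0, 1].
        reference_model: expensive model (e.g., a full object detector),
            frame -> bool. Only called on frames the cheap model is unsure of.
        """
        labels = []
        prev, prev_label = None, False
        for frame in frames:
            f = frame.astype(np.float32)
            # Temporal redundancy: if the frame barely changed since the last
            # processed frame, reuse the previous label without any model call.
            if prev is not None and np.mean(np.abs(f - prev)) < diff_thresh:
                labels.append(prev_label)
                continue
            prev = f

            # Query-specific redundancy: a tiny model specialized to this one
            # query and camera handles the easy frames.
            p = specialized_model(frame)
            if p <= low:
                prev_label = False
            elif p >= high:
                prev_label = True
            else:
                # Only ambiguous frames pay for the expensive reference model.
                prev_label = reference_model(frame)
            labels.append(prev_label)
        return labels

The key design point is that the expensive model serves as ground truth only for the small fraction of frames the cheap filters cannot confidently decide, which is where the headline speedups come from.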

Daniel Kang

Stanford University

Daniel Kang is a PhD student in the Stanford InfoLab, where he is supervised by Peter Bailis and Matei Zaharia. Daniel’s research interests lie broadly at the intersection of machine learning and systems. Currently, he is working on deep learning applied to video analysis.
