Video is one of the fastest-growing sources of data, rich in semantic information, and advances in deep learning have made it possible to query that information with near-human accuracy. However, inference remains prohibitively expensive: even the most powerful GPUs cannot run state-of-the-art models in real time.
Daniel Kang offers an overview of NoScope, a new open source project from the Stanford InfoLab created under Matei Zaharia and Peter Bailis, with contributions from John Emmons and Firas Abuzaid, that runs queries over video up to 1,000x faster than naive deep learning inference. NoScope achieves these speedups by exploiting temporal, environmental, and query-specific redundancies in video. Daniel explains how the project exploits each of these redundancies and how the underlying ideas can be generalized.
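To make the idea of exploiting redundancy concrete, here is a minimal Python sketch of a cascade in the spirit of NoScope: a cheap frame-difference check reuses labels across near-identical frames (temporal redundancy), a small scene-specialized model handles easy frames (environmental and query-specific redundancy), and the expensive reference network runs only when the specialized model is uncertain. The thresholds, function names, and model interfaces below are illustrative assumptions, not NoScope's actual API.

```python
import numpy as np

# Illustrative constants; NoScope learns equivalents per query and video stream.
DIFF_THRESHOLD = 10.0            # mean pixel difference below which a frame is "unchanged"
LOW_CONF, HIGH_CONF = 0.2, 0.8   # confidence band that escalates to the full model

def frame_difference(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel difference: a cheap temporal-redundancy check."""
    return float(np.mean(np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))))

def cascade_label(frames, specialized_model, reference_model):
    """Label each frame, escalating to costlier stages only when needed.

    specialized_model(frame) -> probability the queried object is present (cheap)
    reference_model(frame)   -> same probability from the expensive reference NN
    """
    labels = []
    prev_frame, prev_label = None, False
    for frame in frames:
        # Stage 1: temporal redundancy -- reuse the last label for near-identical frames.
        if prev_frame is not None and frame_difference(prev_frame, frame) < DIFF_THRESHOLD:
            labels.append(prev_label)
            continue
        # Stage 2: a small model specialized to this query and camera scene.
        p = specialized_model(frame)
        if p < LOW_CONF:
            label = False
        elif p > HIGH_CONF:
            label = True
        else:
            # Stage 3: run the expensive reference NN only on uncertain frames.
            label = reference_model(frame) > 0.5
        labels.append(label)
        prev_frame, prev_label = frame, label
    return labels
```

In NoScope itself, the difference detectors and specialized models are trained per video stream, and the confidence thresholds are chosen by a cost-based optimizer; the fixed constants above stand in for that machinery.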
Daniel Kang is a PhD student in the Stanford InfoLab, where he is supervised by Peter Bailis and Matei Zaharia. Daniel’s research interests lie broadly at the intersection of machine learning and systems. Currently, he is working on deep learning applied to video analysis.