As video volumes grow, it is increasingly infeasible for humans to watch it all, so practitioners have turned to automatic methods of processing video. While these methods deliver impressive accuracy, they are too slow to run at scale (roughly 10x slower than real time on an NVIDIA P100 GPU). Even when they can be scaled, they are difficult to deploy, requiring ad hoc solutions.
Daniel Kang offers an overview of BlazeIt, an exploratory video analytics engine that provides FrameQL, a declarative SQL-like language for querying video, along with a query optimizer for executing those queries. You’ll see how FrameQL can capture a large set of real-world queries, ranging from aggregation (e.g., counting cars) to scrubbing (e.g., finding interesting clips in a video), and how BlazeIt can execute certain queries up to 2,000x faster than a naive approach.
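To make the query model concrete, here is a hypothetical FrameQL-style aggregation query sketched from the description above. The syntax, table name, column names, and approximation clause are all illustrative assumptions, not the actual FrameQL grammar:

```sql
-- Illustrative sketch only (not real FrameQL syntax):
-- approximate count of cars in a video stream, with a
-- hypothetical accuracy bound the optimizer could exploit.
SELECT COUNT(*)
FROM traffic_camera            -- assumed name for a video source
WHERE object_class = 'car'     -- assumed column from object detection
ERROR WITHIN 10% AT CONFIDENCE 95%  -- hypothetical approximation clause
```

Declaring the query intent and an acceptable error bound, rather than specifying how to process each frame, is what would allow an optimizer to substitute cheaper techniques (such as sampling or specialized models) for full neural network inference on every frame; speedups like the reported 2,000x presumably come from optimizations of this kind.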
Daniel Kang is a PhD student in the Stanford InfoLab, where he is supervised by Peter Bailis and Matei Zaharia. Daniel’s research interests lie broadly at the intersection of machine learning and systems. Currently, he is working on deep learning applied to video analysis.
©2018, O'Reilly Media, Inc.