Presented By O’Reilly and Cloudera
Make Data Work
March 5–6, 2018: Training
March 6–8, 2018: Tutorials & Conference
San Jose, CA

Powering robotics clouds with Alluxio

Bin Fan (Alluxio), Shaoshan Liu (PerceptIn)
11:50am–12:30pm Wednesday, March 7, 2018
Average rating: 4.00 (1 rating)

Who is this presentation for?

  • System architects and data scientists

Prerequisite knowledge

  • A general understanding of AI and distributed computing

What you'll learn

  • Understand how PerceptIn designed and implemented a cloud architecture to support video streaming and online object recognition tasks using Alluxio

Description

The rise of robotics applications demands new cloud architectures that deliver high throughput and low latency. Bin Fan and Shaoshan Liu explain how PerceptIn designed and implemented a cloud architecture to support video streaming and online object recognition tasks and demonstrate how Alluxio supports these emerging cloud architectures.

Bin and Shaoshan also offer an overview of in-home surveillance robots, which require the following features from the cloud: online object detection, video streaming, storage, search, and video playback. These requirements necessitate a storage layer that can handle an enormous amount of incoming data, which may end up in different storage systems (including S3, GCS, Swift, HDFS, OSS, GlusterFS, and NFS). When writing and retrieving video feeds, the storage layer must provide high throughput and low latency. To fulfill these requirements, PerceptIn designed and implemented a cloud architecture consisting of the following components:

  • PerceptIn client devices, which capture video feeds and send the video feeds to the cloud along with their metadata (sessionID, timestamp, location)
  • PerceptIn streaming server, which streams on-demand live video feeds to users
  • Object recognition, which extracts object labels from each incoming video
  • A KV store, for organizing the video feeds (The key for each video encodes the sessionID, timestamp, duration, location, and list of labels.)
  • A query engine, which supports retrieval of video feeds (Users can search using any combination of time and location, as well as extracted labels. For example, you could search for videos between 1/1/2017 and 1/2/2017 located in your bedroom that contain the object “dog.”)
  • A business analytics engine, which generates high-level statistics of all video data (for example, the most common objects that appear in living rooms)
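To make the KV store and query engine concrete, here is a minimal sketch of how the described key format and search might look. All names are illustrative assumptions, not PerceptIn's actual code; the store is modeled as a plain list of keys.

```python
# Illustrative sketch of the video-feed key format described above:
# sessionID, timestamp, duration, location, and list of labels.
from dataclasses import dataclass, field

@dataclass
class VideoKey:
    session_id: str
    timestamp: int                   # epoch seconds when the clip starts
    duration: int                    # clip length in seconds
    location: str                    # e.g. "bedroom"
    labels: list = field(default_factory=list)  # from object recognition

def query(store, start=None, end=None, location=None, label=None):
    """Return keys matching any combination of time window, location, and label."""
    results = []
    for key in store:
        if start is not None and key.timestamp < start:
            continue
        if end is not None and key.timestamp > end:
            continue
        if location is not None and key.location != location:
            continue
        if label is not None and label not in key.labels:
            continue
        results.append(key)
    return results
```

For example, `query(store, start=1483228800, end=1483315200, location="bedroom", label="dog")` would express the "videos between 1/1/2017 and 1/2/2017 in the bedroom containing a dog" search from the description.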

Alluxio provides two key features that are critical to the success of this architecture. First, it provides high throughput and low latency to support fast retrieval of video feeds. Second, it provides a unified namespace to support many popular storage systems, including S3, GCS, Swift, HDFS, OSS, GlusterFS, and NFS. Alluxio enables more than 650 MB/s of throughput, whereas the native filesystem achieves only 120 MB/s (a more than 5x increase). This throughput is critical because it determines how fast a video feed can be written to storage. If the throughput is too low, the storage layer may become the bottleneck of the whole multimedia data pipeline.

Alluxio also supports fast retrieval: with Alluxio, you can retrieve a video within 500 milliseconds, whereas fetching a video stored on remote machines can take as long as 20 seconds. Using Alluxio to buffer “hot” video data can thus reduce retrieval latency by as much as 40-fold. In addition, different users demand different persistent storage underlying Alluxio: some may use HDFS; others may use S3. Without Alluxio, PerceptIn would have to manage multiple interfaces, one for each persistent storage system. With Alluxio’s unified namespace, PerceptIn only has to maintain one interface while supporting many different underlying storage systems.
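The speedup figures above follow directly from the quoted numbers. A quick back-of-the-envelope check (units assumed: MB/s for throughput, seconds for latency):

```python
# Verify the speedups implied by the figures in the description.
alluxio_tput, native_tput = 650, 120          # MB/s write throughput
remote_latency, alluxio_latency = 20.0, 0.5   # seconds to retrieve a video

tput_speedup = alluxio_tput / native_tput         # ~5.4x faster writes
latency_speedup = remote_latency / alluxio_latency  # 40x faster retrieval

# Time to persist a hypothetical 1024 MB video clip at each throughput:
write_native = 1024 / native_tput     # ~8.5 s
write_alluxio = 1024 / alluxio_tput   # ~1.6 s
```

The 500 ms versus 20 s retrieval numbers are what yield the 40-fold latency reduction, and 650/120 is where the "more than 5x" write-throughput increase comes from.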


Bin Fan

Alluxio

Bin Fan is a software engineer at Alluxio and a PMC member of the Alluxio project. Previously, Bin worked at Google, building next-generation storage infrastructure, where he won Google’s technical infrastructure award. He holds a PhD in computer science from Carnegie Mellon University.


Shaoshan Liu

PerceptIn

Shaoshan Liu is the cofounder and president of PerceptIn, a company developing a next-generation robotics platform. Previously, he worked on autonomous driving and deep learning infrastructure at Baidu USA. Shaoshan holds a PhD in computer engineering from the University of California, Irvine.