Presented By O’Reilly and Intel Nervana
Put AI to work
September 17-18, 2017: Training
September 18-20, 2017: Tutorials & Conference
San Francisco, CA

Enabling computer-vision-based autonomous driving with affordable and reliable sensors

Shaoshan Liu (PerceptIn)
4:00pm–4:40pm Wednesday, September 20, 2017
Implementing AI
Location: Imperial B Level: Intermediate
Secondary topics:  New product development, Transportation and autonomous vehicles

What you'll learn

  • Explore PerceptIn's high-definition, stereo 360-degree camera sensors targeted for computer-vision-based autonomous driving

Description

Autonomous driving technology consists of three major subsystems: algorithms, including sensing, perception, and decision; the client system, including the robotics operating system and hardware platform; and the cloud platform, including data storage, simulation, high-definition (HD) mapping, and deep learning model training. The algorithm subsystem extracts meaningful information from raw sensor data to understand its environment and make decisions about its actions. The client subsystem integrates these algorithms to meet real-time and reliability requirements. The cloud platform provides offline computing and storage capabilities for autonomous cars and can be used to test new algorithms, update the HD map, and train better recognition, tracking, and decision models.
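
To make the data flow concrete, here is a minimal, purely illustrative Python sketch of the algorithm subsystem described above: raw sensor data passes through sensing, perception, and decision stages. The class and method names (SensorFrame, SensingStage, and so on) are hypothetical placeholders and do not reflect PerceptIn's actual software.

    # Hypothetical sketch of the algorithm subsystem's data flow:
    # raw sensor data -> sensing -> perception -> decision.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class SensorFrame:
        timestamp: float              # seconds
        images: List[bytes]           # raw camera frames
        gps_fix: Tuple[float, float, float]  # (lat, lon, accuracy_m)

    @dataclass
    class WorldModel:
        ego_pose: Tuple[float, float, float]   # (x, y, heading)
        obstacles: List[Tuple[float, float, float]]  # (x, y, radius_m)

    class SensingStage:
        def localize(self, frame: SensorFrame) -> Tuple[float, float, float]:
            # Fuse GPS and visual odometry into an ego pose (placeholder).
            lat, lon, _ = frame.gps_fix
            return (lat, lon, 0.0)

    class PerceptionStage:
        def detect_obstacles(self, frame: SensorFrame) -> List[Tuple[float, float, float]]:
            # Run detection/tracking on the camera frames (placeholder).
            return []

    class DecisionStage:
        def plan(self, world: WorldModel) -> str:
            # Turn the world model into an action (placeholder policy).
            return "brake" if world.obstacles else "cruise"

    def run_pipeline(frame: SensorFrame) -> str:
        sensing, perception, decision = SensingStage(), PerceptionStage(), DecisionStage()
        world = WorldModel(ego_pose=sensing.localize(frame),
                           obstacles=perception.detect_obstacles(frame))
        return decision.plan(world)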

Autonomous cars, like humans, need good eyes and a good brain to drive safely. Traditionally, lidar has been the main sensor in autonomous driving and the critical piece in both localization and obstacle recognition. However, lidar has several major drawbacks, including extremely high cost (over US$80,000), sparse information (even 64-beam lidar captures only a relatively sparse representation of the space), and inconsistent performance in changing weather conditions. As a result, PerceptIn investigated whether cars could drive themselves with computer vision.

The argument against this concept is that cameras do not provide accurate localization or a good obstacle-detection mechanism, especially when the object is far away (more than 30 meters). But do we actually need centimeter-accurate localization all the time? RTK and PPP GPS already provide centimeter-accurate positioning, and if humans can drive cars with meter-accurate GPS, driverless cars should be able to do the same. If this is achievable, high-definition maps may not be needed for localization; Google Maps and Google Street View may suffice, which would be a leap forward in autonomous driving development. And a combination of stereo vision, sonar, and millimeter-wave radar could be used to achieve high-fidelity obstacle avoidance.
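
To see why roughly 30 meters is the practical boundary for stereo cameras, it helps to look at the standard pinhole stereo relation Z = f * B / d (focal length in pixels times baseline over disparity), in which depth error grows quadratically with distance. The short sketch below uses an assumed focal length and baseline purely for illustration; with these made-up numbers, half a pixel of matching noise at 30 meters already corresponds to about 2 meters of depth error.

    # Standard pinhole stereo relation: Z = f * B / d, where f is the focal
    # length in pixels, B the baseline in meters, and d the disparity in pixels.
    # The camera parameters below are assumptions for illustration only.
    FOCAL_PX = 700.0      # assumed focal length in pixels
    BASELINE_M = 0.30     # assumed distance between the two cameras

    def depth_from_disparity(disparity_px: float) -> float:
        """Depth in meters for a given stereo disparity in pixels."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return FOCAL_PX * BASELINE_M / disparity_px

    def depth_error(disparity_px: float, disparity_noise_px: float = 0.5) -> float:
        """Approximate depth uncertainty: |dZ| ~ Z^2 / (f * B) * |dd|."""
        z = depth_from_disparity(disparity_px)
        return (z * z) / (FOCAL_PX * BASELINE_M) * disparity_noise_px

    # At 30 m the disparity is only f * B / Z = 7 px with these parameters,
    # so half a pixel of matching noise means roughly 2 m of depth error.
    print(depth_from_disparity(7.0))   # -> 30.0
    print(depth_error(7.0))            # -> ~2.1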

Shaoshan Liu explains how PerceptIn designed and implemented its high-definition, stereo 360-degree camera sensor targeted at computer-vision-based autonomous driving. The sensor has an effective range of over 30 meters with no blind spots and can be used for obstacle detection as well as localization. Shaoshan discusses the sensor along with the obstacle-detection and localization algorithms that accompany this hardware.
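
The session covers PerceptIn's own algorithms, which are not spelled out on this page. As a generic stand-in, the sketch below shows how a conventional stereo pipeline might flag nearby obstacles: compute a disparity map from a rectified image pair, convert it to depth with the relation above, and threshold for close regions. All parameters and the synthetic image pair are assumptions, not PerceptIn's implementation.

    import numpy as np
    import cv2

    # Generic stereo obstacle check: disparity map -> depth -> nearby-object mask.
    # Calibration values are placeholders; a real pipeline would use rectified
    # frames from the camera rig rather than the synthetic pair created here.
    FOCAL_PX = 700.0         # assumed focal length in pixels
    BASELINE_M = 0.30        # assumed stereo baseline in meters
    NEAR_THRESHOLD_M = 10.0  # flag anything closer than this

    # Stand-in rectified left/right frames (replace with real camera images).
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    right = np.roll(left, -4, axis=1)  # fake a small positive disparity

    # Semi-global block matching; OpenCV returns disparity as fixed-point * 16.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    valid = disparity > 0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

    # Any pixel closer than the threshold counts toward the obstacle mask here;
    # a real system would cluster the mask into objects and track them over time.
    obstacle_mask = depth < NEAR_THRESHOLD_M
    print("obstacle pixels:", int(obstacle_mask.sum()))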


Shaoshan Liu

PerceptIn

Shaoshan Liu is the cofounder and president of PerceptIn, a company working on developing a next-generation robotics platform. Previously, he worked on autonomous driving and deep learning infrastructure at Baidu USA. Shaoshan holds a PhD in computer engineering from the University of California, Irvine.

Shaoshan Liu is the cofounder and chairman of PerceptIn. He holds a PhD in computer engineering from the University of California, Irvine; his research interests include artificial intelligence, autonomous driving, robotics, system software, and heterogeneous computing. PerceptIn focuses on developing intelligent robotics systems, including home robots, industrial robots, and autonomous driving. Before founding PerceptIn, Dr. Liu had more than ten years of R&D experience in artificial intelligence and systems, including positions at Intel Research, INRIA, Microsoft Research, Microsoft, LinkedIn, and Baidu USA.