Sep 9–12, 2019

Data distribution search: Deep reinforcement learning to improvise input datasets

Vijay Gabale (Infilect)
11:55am–12:35pm Thursday, September 12, 2019
Location: 230 C

Who is this presentation for?

  • Practitioners, developers, engineers, and researchers

Level

Intermediate

Description

Beyond computer games and neural architecture search, practical applications of deep reinforcement learning (DRL) to improve classical classification or detection tasks are few and far between. Vijay Gabale outlines a technique, and shares experiences, of applying DRL to improve the distribution of input datasets to achieve state-of-the-art performance, specifically on object-detection tasks.

Vijay provides a few examples from the retail industry to highlight common issues deep learning practitioners face in solving real-world computer vision problems, such as scarce and imbalanced data, which often don't give deep networks enough signal to learn the discriminative or generative features needed for classification or detection tasks. However, it's well known that an appropriate data distribution, achieved via data generation or augmentation, significantly improves the pattern-learning performance of deep networks. He details a deep reinforcement learning framework that systematically chooses which classes to rebalance and which operations (generation or augmentation) to apply, for both classification and detection problems. The improvement in network performance is modeled as a reward that steers the framework toward the right set of generation and augmentation techniques.
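The search loop described above can be sketched as a policy-gradient controller. This is a minimal illustrative toy, not the talk's actual framework: the class names, operation names, and the stubbed `evaluate` reward (which would really mean retraining a detector and measuring the change in validation mAP) are all assumptions made up for the example.

```python
import numpy as np

# Hypothetical action space: each action pairs a dataset class with a
# generation/augmentation operation. Names are illustrative only.
CLASSES = ["shelf", "bottle", "carton"]
OPERATIONS = ["augment_flip", "augment_crop", "generate_gan"]
N_ACTIONS = len(CLASSES) * len(OPERATIONS)

rng = np.random.default_rng(0)
logits = np.zeros(N_ACTIONS)  # controller: one logit per (class, op) action

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def evaluate(action):
    """Stand-in reward. In the real framework this step would retrain the
    detector (e.g., SSD) on the rebalanced dataset and return the change
    in validation performance. Here, a fixed toy table: action 4 is
    pretended to be the most helpful transformation."""
    return 0.5 if action == 4 else 0.1

def search(steps=500, lr=0.5):
    """REINFORCE-style search: sample an action, observe the reward,
    and nudge the controller toward actions that beat a moving baseline."""
    global logits
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(logits)
        a = rng.choice(N_ACTIONS, p=probs)
        reward = evaluate(a)
        baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
        grad = -probs                 # gradient of log-prob of chosen action
        grad[a] += 1.0
        logits += lr * (reward - baseline) * grad
    return int(np.argmax(logits))

best = search()
cls_idx, op_idx = divmod(best, len(OPERATIONS))
print(CLASSES[cls_idx], OPERATIONS[op_idx])
```

In the real setting the expensive step is `evaluate`, so the payoff of the approach depends on the reward signal being cheap relative to a full architecture search, which is the comparison the talk draws.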

The framework is far less expensive than running a neural architecture search for a given problem, and it can achieve the same or better results (i.e., a well-formed data distribution trumps a custom discovered neural architecture). Furthermore, by applying the framework to the underlying datasets, it achieves state-of-the-art results on the PASCAL Visual Object Classes (VOC) and Microsoft Common Objects in Context (MS COCO) datasets using the single-shot multibox detector (SSD) technique proposed in 2016. Vijay shows an example of a custom dataset, with its visualization, where it's possible to achieve a 22% performance improvement using SSD as compared to the focal loss-driven RetinaNet architecture.

Prerequisite knowledge

  • A basic understanding of deep learning
  • General knowledge of object detection, reinforcement learning, and the problems caused by data imbalance

What you'll learn

  • Discover a new way to artificially increase the size of datasets while balancing data distribution

Vijay Gabale

Infilect

Vijay Gabale is the CTO at Infilect. Previously, he was at IBM Research Labs, where he worked on research and development of machine learning and deep learning for retail, telecom, and education. He has over 20 A* publications and more than 5 patents to his name. He's a frequent speaker at AI conferences (e.g., GTC 2018, San Jose; ACM KDD 2018, London) and has won numerous awards for his contributions to advancements in AI. He earned his PhD from the Indian Institute of Technology Bombay.