Data distribution search: Deep reinforcement learning to improvise input datasets
Who is this presentation for?
- Practitioners, developers, engineers, and researchers
Beyond computer games and neural architecture search, practical applications of deep reinforcement learning (DRL) to improve classical classification or detection tasks are few and far between. Vijay Gabale outlines a technique, and the experience of applying it, that uses DRL to improve the distribution of input datasets and achieve state-of-the-art performance, specifically on object detection tasks.
You’ll see a few examples from the retail industry that highlight common issues deep learning practitioners face when solving real-world computer vision problems: too little data and imbalanced data, which often don’t give deep networks enough signal to learn the discriminative or generative features needed for classification or detection tasks. However, it’s well known that an appropriate data distribution, achieved through either data generation or data augmentation, significantly improves the pattern-learning performance of deep networks.
Vijay details a deep reinforcement learning framework that systematically chooses classes and the operations to perform on those classes (generation or augmentation), whether the underlying task is classification or detection. The improvement in network performance is modeled as a reward that guides the choice of the right set of generation and augmentation techniques.
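The control loop just described (pick a class and an operation, measure the resulting change in model performance, and feed that change back as a reward) can be sketched with a simple epsilon-greedy controller. This is an illustrative toy, not the framework presented in the talk: the class names, operation names, and the simulated reward function below are all hypothetical stand-ins for actually applying an operation, retraining the detector, and measuring the change in validation mAP.

```python
import random

# Hypothetical dataset classes and augmentation/generation operations.
CLASSES = ["shelf", "bottle", "carton"]
OPERATIONS = ["augment_flip", "augment_crop", "generate_synthetic"]

# The action space is every (class, operation) pair.
ACTIONS = [(c, op) for c in CLASSES for op in OPERATIONS]

def evaluate_reward(action):
    """Stand-in for the expensive step: apply the operation to that
    class's samples, retrain/fine-tune the detector, and return the
    change in validation mAP. Here it's simulated with a payoff table
    in which one pair is clearly the most useful."""
    payoff = {("bottle", "generate_synthetic"): 0.8}
    base = payoff.get(action, 0.1)
    return base + random.uniform(-0.05, 0.05)

def search(steps=500, epsilon=0.2, seed=0):
    """Epsilon-greedy search over (class, operation) actions,
    keeping a running mean reward estimate per action."""
    random.seed(seed)
    value = {a: 0.0 for a in ACTIONS}
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)      # explore
        else:
            action = max(ACTIONS, key=lambda a: value[a])  # exploit
        reward = evaluate_reward(action)
        count[action] += 1
        # Incremental update of the mean reward for this action.
        value[action] += (reward - value[action]) / count[action]
    return max(ACTIONS, key=lambda a: value[a])

best = search()
print(best)
```

A real instantiation would replace the payoff table with actual training runs, which is why sample efficiency matters: each reward evaluation costs a (partial) detector training cycle, yet this is still far cheaper than searching over whole architectures.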
Through extensive performance testing, Vijay shows that the framework is far less expensive than running a neural architecture search for a given problem while achieving the same or better results (i.e., a well-formed data distribution trumps a custom-discovered neural architecture). Furthermore, by applying the framework to the underlying datasets, it achieves state-of-the-art results on the PASCAL Visual Object Classes (VOC) and Microsoft Common Objects in Context (MS COCO) datasets using the single-shot multibox detector (SSD) technique proposed in 2016. Vijay shows an example of a custom dataset, with its visualization, where it’s possible to achieve a 22% performance improvement using SSD compared to the focal-loss-driven RetinaNet architecture.
Prerequisite knowledge
- A basic understanding of deep learning
- General knowledge of object detection, reinforcement learning, and the problems caused by data imbalance
What you'll learn
- Learn a new way to artificially increase the size of datasets while balancing data distribution
Vijay Gabale is the CTO at Infilect. Previously, he worked at IBM Research Labs on research and development of machine learning and deep learning for retail, telecom, and education. He has over 20 A* publications and more than 5 patents to his name. He’s a frequent speaker at AI conferences (e.g., GTC 2018 in San Jose and ACM KDD 2018 in London) and has won numerous awards for his contributions to advancements in AI. He earned his PhD from the Indian Institute of Technology Bombay.