Presented By
O’Reilly + Intel AI
Put AI to Work
April 15-18, 2019
New York, NY

An Active Learning Framework to Optimize Training of Deep Models with Human-in-the-Loop

Humayun Irshad (Figure Eight)
4:05pm-4:45pm Thursday, April 18, 2019
Interacting with AI
Location: Mercury Rotunda
Secondary topics: Computer Vision, Data and Data Networks, Models and Methods

Who is this presentation for?

Humayun Irshad, Lead Scientist, Machine Learning

Level

Intermediate

Prerequisite knowledge

Attendees should know the basics of machine learning and computer vision and have some familiarity with key deep learning concepts, such as transfer learning, as applied to object detection problems.
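For attendees who want a quick refresher, the snippet below is a minimal sketch of transfer learning for object detection, assuming a PyTorch/torchvision Faster R-CNN detector; the talk itself does not prescribe any particular library, model, or label set, so treat the names here as illustrative assumptions.

# Minimal transfer-learning sketch for object detection (illustrative only).
# Assumes PyTorch/torchvision; the class count and label set are hypothetical.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # e.g. background + "parking sign" (hypothetical label set)

# Start from a detector pre-trained on COCO and replace its classification head,
# so only the new head (and optionally later backbone layers) needs fine-tuning
# on the smaller labeled dataset.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)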

What you'll learn

Attendees will learn about one of the important challenges in deep learning: selecting a training dataset, particularly when a large amount of unlabeled data exists. They will learn not only different ways to optimize the selection of training data in a cost-effective and fast way, but also how to improve the performance of a deep model on objects that are rare in a large dataset.

Description

Deep learning models have been used extensively to solve real-world problems in recent years. The performance of such models relies heavily on large amounts of labeled data for training. While advances in data collection technology have enabled the acquisition of massive volumes of data, labeling the data remains an expensive and time-consuming task. Random selection of data points for generating training data can lead to a time-consuming and inefficient process, particularly when there is high variation in the scale, shape, and orientation of the data, or when the sample data points do not follow an even distribution.
Active learning techniques are being progressively adopted to accelerate the development of machine learning solutions by allowing a model to query the data it learns from. This talk introduces an active learning framework that combines transfer learning and crowdsourcing to solve a real-world problem in the transportation and autonomous driving domain, parking sign recognition, for which a large amount of unlabeled data is available. The main novelty of the proposed framework is a crowdsourcing-based active learning loop that intelligently selects a small subset of images to efficiently improve the performance of the object detection model. For the iterative part of the framework, we defined criteria for selecting images that are good candidates for building a better model. These candidate images were chosen to cover not only a diverse range of parking signs but also challenging corner cases where the parking signs are partially occluded, blurred, reflecting light, or placed far away in the background.
We discuss how such a framework contributes to building an accurate model in a cost-effective and fast way to solve the parking sign recognition problem, despite the unevenness of the data: street-level objects such as parking signs vary in shape, color, orientation, and scale, and often appear on top of different types of background.
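To make the iterative loop described above concrete, here is a rough sketch of one round of such a framework, assuming an uncertainty-based selection criterion, a detector object with predict and fine_tune methods, and a hypothetical send_to_crowd_labeling service; the actual selection criteria, model interface, and labeling pipeline used in the talk are Figure Eight's own and are not shown here.

# Sketch of one iterative active-learning round (illustrative pseudo-implementation;
# the selection criterion, model interface, and labeling service are assumptions,
# not the speaker's actual code).

def detection_uncertainty(model, image):
    """Score an unlabeled image by how uncertain the current detector is.
    Here: 1 - max box confidence; low-confidence images are better candidates."""
    detections = model.predict(image)          # assumed: returns [(box, score, label), ...]
    if not detections:
        return 1.0                             # nothing detected: potentially a hard corner case
    return 1.0 - max(score for _, score, _ in detections)

def active_learning_round(model, unlabeled_pool, labeled_set, budget=500):
    # 1. Rank the unlabeled pool by the selection criterion.
    ranked = sorted(unlabeled_pool,
                    key=lambda img: detection_uncertainty(model, img),
                    reverse=True)
    candidates = ranked[:budget]

    # 2. Send the selected images to human annotators (crowdsourcing step).
    new_labels = send_to_crowd_labeling(candidates)   # hypothetical labeling service

    # 3. Fold the new annotations into the training set and fine-tune the detector.
    labeled_set.extend(new_labels)
    model.fine_tune(labeled_set)                      # assumed training routine

    # 4. Remove the newly labeled images from the pool; repeat until the validation
    #    metric plateaus or the labeling budget is exhausted.
    remaining = [img for img in unlabeled_pool if img not in candidates]
    return model, remaining, labeled_set

Each round labels only the images the current model handles least confidently, which is what keeps the labeling budget small relative to random sampling while still covering rare and difficult cases.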

Humayun Irshad

Figure Eight

Humayun Irshad is a lead scientist in machine learning and computer vision at Figure Eight, where he develops machine learning, and more specifically deep learning, frameworks for applications such as object detection, segmentation, and classification in fields ranging from medical imaging and retail to self-driving cars. He previously spent three years as a postdoc at Harvard Medical School, where he developed machine learning and deep learning frameworks for region-of-interest detection and classification, and for nuclei and gland detection, segmentation, and classification in 2D and 3D medical images.
