Most applications of deep learning today address difficult and complex problems: the input data includes high-dimensional images, text, or audio, and the output often falls into one of hundreds or even thousands of categories, or is itself equally high-dimensional. In such regimes, the conventional wisdom is that deep learning requires hundreds of thousands of examples and extremely deep neural networks.
However, despite the community’s focus on these problems, many meaningful and important problems fall outside this scope. In particular, we can break down a large and complex problem, such as digitizing all the information on a scanned paper document, into a number of relatively simpler subproblems, such as identifying the data elements and digitizing each separately. Although the input to each subproblem may be complex, noisy, and varied, the output categories are often narrower in scope and better defined. In such cases, deep learning with small models and small data can produce an enormous improvement over simpler, more conventional machine learning approaches.
Ramesh Sridharan explains how Captricity uses deep learning with tiny datasets at scale, training thousands of models using tens to hundreds of examples each. For instance, when solving the problem of recognizing which of several boxes is checked on noisy scanned paper documents across many types of forms, Captricity saw tremendous improvement moving from traditional classification methods to small convolutional neural nets with only a few hundred parameters. These models are dynamically trained by an automatic model deployment infrastructure with tens to hundreds of human-generated examples, achieving over 99.5% accuracy in tandem with the company’s human and machine intelligence platform.
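To make the scale concrete, a convolutional network really can distinguish checked from unchecked boxes with only a few hundred parameters. The sketch below builds such a model's forward pass in plain NumPy and counts its weights; the architecture (two 3×3 conv layers, global average pooling, a two-way linear head over 16×16 crops) is purely illustrative, since the talk does not describe Captricity's actual layer sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, b):
    """Valid 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, kh, kw)."""
    c_out, c_in, kh, kw = w.shape
    H, W = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(kh):
                for dx in range(kw):
                    out[o] += w[o, i, dy, dx] * x[i, dy:dy + H, dx:dx + W]
        out[o] += b[o]
    return out

def relu(z):
    return np.maximum(z, 0)

# Hypothetical weights; shapes chosen so the total lands in the "few hundred" range.
w1, b1 = rng.normal(size=(4, 1, 3, 3)), np.zeros(4)   # 4*1*3*3 + 4 = 40 params
w2, b2 = rng.normal(size=(8, 4, 3, 3)), np.zeros(8)   # 8*4*3*3 + 8 = 296 params
w3, b3 = rng.normal(size=(2, 8)), np.zeros(2)         # 2*8 + 2     = 18 params

def predict(crop):
    """crop: (16, 16) grayscale checkbox image -> logits for (unchecked, checked)."""
    h = relu(conv2d(crop[None], w1, b1))   # (4, 14, 14)
    h = relu(conv2d(h, w2, b2))            # (8, 12, 12)
    feats = h.mean(axis=(1, 2))            # global average pooling -> (8,)
    return w3 @ feats + b3

n_params = sum(p.size for p in (w1, b1, w2, b2, w3, b3))
print(n_params)  # 354 parameters in total
```

A model this small can plausibly be fit from tens to hundreds of labeled crops without severe overfitting, which is what makes training thousands of such models per-field economical.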
Ramesh describes how Captricity built, tested, and deployed these models, sharing the criteria for identifying problems that can be solved by these nontraditional approaches and outlining the requirements of the automatic deployment infrastructure that decides which examples to train these models on and whether or not their accuracy is sufficient to use them in production. Along the way, Ramesh offers a nuanced look at the different types of errors that allow Captricity to automatically compensate for certain kinds of prediction mistakes and explains what the company learned about metrics for tracking model performance—particularly when and why accuracy alone is not enough.
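The point that accuracy alone can mislead is easy to see on imbalanced data like checkboxes, where most boxes on a form are unchecked. The toy example below (the numbers are illustrative, not from the talk) shows a degenerate model that reaches 95% accuracy while never finding a single checked box, which per-class precision and recall expose immediately.

```python
def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one class, computed from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative skew: 95% of boxes are unchecked. A model that always
# answers "unchecked" scores 95% accuracy yet catches no checked box.
y_true = ["unchecked"] * 95 + ["checked"] * 5
y_pred = ["unchecked"] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
prec, rec = precision_recall(y_true, y_pred, positive="checked")
print(accuracy, prec, rec)  # 0.95 0.0 0.0
```

Tracking per-class metrics like these is one simple way a deployment pipeline can decide whether a model is safe to use in production, rather than trusting a single aggregate accuracy number.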
Ramesh Sridharan is a machine learning engineering manager at Captricity. Ramesh is passionate about using technology for social good, and his research has enabled collaborations between researchers and doctors to understand large, complex medical image collections, particularly in predicting the effects of diseases such as Alzheimer’s on brain anatomy. He holds a PhD in electrical engineering and computer science from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), where his thesis focused on developing machine learning and computer vision technologies to enhance medical image analysis.
©2018, O’Reilly UK Ltd