"Human error": How can we help people build models that do what they expect
It’s never been easier to train machine learning models. With excellent open source tooling, lower-compute techniques, and incredible educational material online, nearly anybody can start training their own models today. Yet when domain experts try to transfer their expertise to an ML model, the results can be unpredictable. The same model can be astonishingly good one moment and then produce errors that make absolutely no sense to the human trying to teach the machine.
Motivated by a series of real stories (mostly in computer vision), Anna Roth discusses both human and technical factors and suggests some future directions.
Anna S. Roth is a PM on the computer vision cloud team at Microsoft. Previously, she worked at Microsoft Technology & Research on the team that launched Microsoft Cognitive Services. Say hello on Twitter at @AnnaSRoth.