Mar 15–18, 2020

Machine Learning for Managers

Robert Horton (Microsoft), Mario Inchiosa (Microsoft), John-Mark Agosta (Microsoft)
1:30pm–5:00pm Monday, March 16, 2020
Location: LL20C

Who is this presentation for?

Non-technical or Business audience

Level

Non-technical

Description

Thanks to its tremendous recent success and popularity, machine learning now affects a wide array of software products and businesses, notably healthcare. This half-day workshop introduces the fundamental concepts of ML to decision makers and software product managers so that they can make more effective use of machine learning results and better recognize opportunities to incorporate ML technologies into a variety of products and processes. This is NOT just another dumbed-down intro to machine learning; we focus on how to use ML to make better decisions, including whether to use machine learning in a given application at all. This is a hands-on workshop, so you should bring a laptop for the (optional) exercises, but you do not need any programming expertise beyond the ability to use a web browser and Microsoft Excel.

The workshop includes three sections:

Part I: Software 2.0.
We start by walking through the process of building a ‘language classifier’ with traditional hard-coded logic: a program that looks at some textual data and decides what language it is written in. How do you decide which words of the text to look at? How many different words do you need to use to identify each language? Can you think of any useful statistics you could gather about how often various words appear in each language? How can you measure how well your classifier works? Did a new rule you added to your classifier program make it better? Machine learning is basically just automating this type of process. Then we build some simple machine learning classifiers, evaluate their performance, and examine the phenomenon that is Kryptonite to ML: overfitting. Along the way you will learn important vocabulary terms like “feature”, “label”, and “test set”, become familiar with diagnostic plots that chart the learning process (“learning curves”) as well as commonly used plots for visualizing data distributions, and start to understand why your data scientists are always begging for more data.
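
To make Part I concrete, here is a minimal sketch, in Python, of the kind of word-counting language classifier the exercise builds up to. The sentences and the scoring scheme are invented for illustration; the workshop exercises themselves require nothing beyond a browser and Excel.

```python
from collections import Counter

# Tiny labeled training set: the "features" come from each sentence's
# words, and the language it is written in is the "label".
training_data = [
    ("the cat sat on the mat", "english"),
    ("where is the nearest train station", "english"),
    ("le chat est sur le tapis", "french"),
    ("ou est la gare la plus proche", "french"),
]

# Gather the statistics: how often each word appears in each language.
word_counts = {}
for sentence, language in training_data:
    word_counts.setdefault(language, Counter()).update(sentence.split())

def classify(text):
    """Score each language by the training counts of the text's words."""
    words = text.split()
    scores = {
        language: sum(counts[w] for w in words)
        for language, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# A held-out "test set" of sentences the classifier has never seen tells
# us whether it generalizes or has merely memorized (overfit) its data.
print(classify("the dog sat on the chair"))    # -> english
print(classify("le chien est sur la chaise"))  # -> french
```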

Part II: Decision Support.
Most machine learning classifiers give fuzzy results; rather than telling you whether a picture is a dog or a cat, they give you probabilities. People accustomed to black-and-white answers may need to learn new approaches to deal with these shades of gray. We examine the process of characterizing the performance of a classifier by relating its sensitivity (the ability to detect positive cases) to its specificity (the ability to correctly rule out negative cases). In general, classifiers allow you to trade quantity for quality by adjusting a threshold; you have to settle for finding fewer positives if you insist on taking only the purest subset. In a business context we can often assign dollar values to each of the two types of mistakes a binary classifier can make: it can think a bad widget is good, or it can think a good widget is bad. In medical testing these mistakes are usually weighted differently for screening tests (where it is important not to miss anybody, so sensitivity is emphasized over specificity) than for confirmatory tests (where you want to be sure the patient really has the disease). Since machine learning makes it possible to test for huge numbers of possible errors (for example, in electronic health record systems), we may need to consider the risk of overwhelming users with false alarms (leading to “alert fatigue”). The tradeoffs between sensitivity and specificity need to be evaluated in the context of the system in which the classifier is deployed. Our main exercise in this section uses an economic utility model to weight these types of errors and help us choose the classifier threshold that maximizes reward in various scenarios. As part of that process we will take an in-depth look at some of the most important diagnostic plots for visualizing classifier performance (including ROC curves, precision-recall curves, and lift plots), and frame machine learning as a way to automate (some) decisions.
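
For a flavor of the utility exercise, the sketch below sweeps the decision threshold of a widget-inspection classifier and picks the cheapest operating point. The scores, labels, and dollar figures are all hypothetical.

```python
import numpy as np

# Hypothetical classifier scores (probability each widget is bad) and the
# ground truth for eight widgets; True means the widget really is bad.
scores = np.array([0.10, 0.20, 0.35, 0.40, 0.55, 0.60, 0.80, 0.90])
is_bad = np.array([False, False, True, False, True, False, True, True])

# Made-up economics: shipping a bad widget costs $50 (a false negative),
# while scrapping a good one costs $5 (a false positive).
COST_FN, COST_FP = 50.0, 5.0

def total_cost(threshold):
    flagged = scores >= threshold  # widgets the classifier rejects
    false_negatives = np.sum(is_bad & ~flagged)   # bad widgets shipped
    false_positives = np.sum(~is_bad & flagged)   # good widgets scrapped
    return COST_FN * false_negatives + COST_FP * false_positives

# Sweep the threshold and keep the cheapest operating point.
thresholds = np.linspace(0.0, 1.0, 101)
best = min(thresholds, key=total_cost)
print(f"best threshold: {best:.2f}, total cost: ${total_cost(best):.2f}")
```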

Part III: Causality and Other Cautionary Tales.
The dirty secret of ML is that it is built on correlation, not causation. Just because we find that red-headed people are more likely to get melanoma doesn’t mean that we can protect them from cancer by coloring their hair brown. We examine the problem of confounding, and how it can affect our interpretation of how various features might affect outcomes (this also shows why we still need statisticians to keep us honest). To really sort out cause and effect relationships we need more than just ML; we need to do experiments. Both ‘A/B tests’ in software development and randomized controlled trials of medical interventions are designed to detect causal relationships, and we will briefly explore the statistical considerations involved in that kind of testing. Finally, we will show a new approach to automating the experimentation process using a web-based cognitive service based on reinforcement learning.
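
As a taste of those statistical considerations, here is a small permutation-test sketch of an A/B comparison: it asks how often chance alone, with no real difference between the variants, would produce a lift as large as the one observed. The visitor and conversion counts are made up.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical A/B test results: conversions out of visitors per variant.
conversions_a, visitors_a = 120, 2400   # variant A: 5.0% conversion
conversions_b, visitors_b = 156, 2400   # variant B: 6.5% conversion
observed_lift = conversions_b / visitors_b - conversions_a / visitors_a

# Pool all outcomes; under the null hypothesis the variant labels are
# arbitrary, so we can reshuffle them and see what lifts arise by chance.
outcomes = np.concatenate([
    np.repeat([1, 0], [conversions_a, visitors_a - conversions_a]),
    np.repeat([1, 0], [conversions_b, visitors_b - conversions_b]),
])

lifts = np.empty(10_000)
for i in range(lifts.size):
    shuffled = rng.permutation(outcomes)
    a, b = shuffled[:visitors_a], shuffled[visitors_a:]
    lifts[i] = b.mean() - a.mean()

p_value = np.mean(lifts >= observed_lift)  # one-sided
print(f"observed lift: {observed_lift:.3f}, p-value: {p_value:.4f}")
```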

Prerequisite knowledge

This is an introductory workshop with optional computer exercises. Exercises will not require programming, but you should be familiar with Microsoft Excel.

Materials or downloads needed in advance

You should bring a laptop with a web browser and have access to Microsoft Excel.

What you'll learn

You will get a general overview of how machine learning differs from traditional software engineering, and learn basic principles for acting on probabilistic results, including how to estimate the costs and benefits of applying machine learning classifiers in various contexts. Finally, you will learn how ML and advanced analytics can help to guide, but not replace, the process of experimentally testing the effects of incremental changes to products and processes.

Robert Horton

Microsoft

Bob Horton is a senior data scientist in the Bing User Understanding team. He came to Microsoft from Revolution Analytics, where he was on the Professional Services team. Long before becoming a data scientist, he was a regular scientist (with a PhD in biomedical science and molecular biology from the Mayo Clinic). Some time after that, he got an MS in computer science from California State University, Sacramento. Bob currently holds an adjunct faculty appointment in health informatics at the University of San Francisco, where he gives occasional lectures and advises students on data analysis and simulation projects.


Mario Inchiosa

Microsoft

Dr. Inchiosa’s passion for data science and high-performance computing drives his work as Principal Software Engineer in Microsoft Cloud + AI, where he focuses on delivering advances in scalable advanced analytics, machine learning, and AI. Previously, Mario served as Revolution Analytics’ Chief Scientist and as Analytics Architect in IBM’s Big Data organization, where he worked on advanced analytics in Hadoop, Teradata, and R. Prior to that, Mario was US Chief Scientist in Netezza Labs, bringing advanced analytics and R integration to Netezza’s SQL-based data warehouse appliances. He also served as US Chief Science Officer at NuTech Solutions, a computer science consultancy specializing in simulation, optimization, and data mining, and Senior Scientist at BiosGroup, a complexity science spin-off of the Santa Fe Institute. Mario holds Bachelor’s, Master’s, and PhD degrees in Physics from Harvard University. He has been awarded four patents and has published over 30 research papers, earning Publication of the Year and Open Literature Publication Excellence awards.


John-Mark Agosta

Microsoft

John-Mark Agosta leads a team that is expanding the machine learning and artificial intelligence capabilities of Microsoft Azure. He recently joined Microsoft, something that, had he been smarter, he would have done earlier in his career, which has involved working with startups and labs in the Bay Area on projects such as “The Connected Car 2025” at Toyota ITC, sales opportunity scoring at Inside Sales, malware detection at Intel, and automated planning at SRI. At Intel Labs, he was awarded a Santa Fe Institute Business Fellowship in 2007. He has over 30 peer-reviewed publications and six accepted patents. His dedication to probability and its applications is shown by his participation in the annual Uncertainty in AI conference since its inception in 1985. When feeling low, he recharges his spirits by singing Russian music with Slavyanka, the Bay Area’s Slavic music chorus.
