Interpretable models result in more accurate, safer, and more profitable machine learning products. A model you can interpret and understand is one you can more easily improve, and one that offers insights you can use to change real-world outcomes for the better. It is also one that you, regulators, and society can better trust to be safe and nondiscriminatory. But interpretability can be hard to ensure. There is a central tension between accuracy and interpretability: the most accurate models tend to be the hardest to understand.
Mike Lee Williams explores the growing business case for interpretability and its concrete applications, including churn, finance, and healthcare. Along the way, Mike offers an overview of LIME, an open source, model-agnostic tool that sidesteps the accuracy-interpretability tension by letting you peer inside black-box models. Mike concludes with a demonstration of a working web application that uses LIME to explain why customers churn and raises the possibility of intervening to prevent their loss.
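The core idea behind LIME can be illustrated without the library itself: perturb an instance, query the black-box model on the perturbed samples, and fit a simple linear surrogate weighted by proximity to the instance. The surrogate's coefficients then serve as a local explanation. The sketch below is a minimal numpy illustration of that idea, not the LIME library's API; the `black_box` function and all parameter values are hypothetical stand-ins for a real trained model.

```python
import numpy as np

# Hypothetical "black box": a nonlinear scoring function we pretend we
# cannot inspect (in practice this would be a trained model's predict).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2)))

rng = np.random.default_rng(0)
x0 = np.array([0.5, 0.1])  # the single instance we want to explain

# 1. Perturb the instance and query the black box on the samples.
samples = x0 + rng.normal(scale=0.3, size=(500, 2))
preds = black_box(samples)

# 2. Weight each sample by its proximity to x0 (exponential kernel).
dists = np.linalg.norm(samples - x0, axis=1)
weights = np.exp(-(dists ** 2) / 0.3 ** 2)

# 3. Fit a weighted linear surrogate via sqrt(w)-scaled least squares.
A = np.column_stack([samples, np.ones(len(samples))])  # add intercept
sw = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(A * sw, preds * sw[:, 0], rcond=None)

# coef[:2] approximates how each feature drives the prediction near x0.
print(coef[:2])
```

Here the first coefficient comes out positive, matching the positive contribution of the first feature near `x0`; the actual LIME library wraps this recipe with sampling strategies and interpretable feature representations for tabular, text, and image data.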
Mike Lee Williams is a research engineer at Cloudera Fast Forward Labs, where he builds prototypes that bring the latest ideas in machine learning and AI to life and helps Cloudera’s customers understand how to make use of these new technologies. Mike holds a PhD in astrophysics from Oxford.
©2018, O'Reilly Media, Inc.