From recommendation engines to deep learning algorithms that help detect cancer, machine learning is quickly becoming a normal part of our lives. That means more chances of encountering algorithms gone awry, such as the infamous Tay, the racist Twitter bot. Every step of building an algorithm introduces the risk of injecting unintended bias, yet we rarely focus on ways to mitigate or eliminate those risks. Instead, we accept them as the cost of operating in this space.
Nivia Henry walks you through how an algorithm is built, the points where errors such as algorithmic bias can creep in, and the steps you can take to catch, reduce, or eliminate them before they harm your users.
Nivia S. Henry fundamentally believes that happy people, working in a healthy environment, will produce great outcomes. This is the philosophy behind her 15-plus-year career creating structures in which high-performing teams thrive. Today, Nivia plies her trade as a manager of engineering managers at Spotify. Her career path has included nearly every role in tech, but her true passion is inspiring people to do their best work. Nivia has cochaired one of the largest tracks for Agile Alliance, organized meetups, and spoken at conferences of all sizes. Her hobbies include being an overbearing mom to a gorgeous cat and traveling with her awesome husband, Andre. You can find her on Twitter and LinkedIn.