Adopting a machine learning system is an essential step for enterprise companies to progress to the next stage of their business. However, machine learning systems tend to be complex, because they depend on different languages, libraries, and frameworks, such as scikit-learn, TensorFlow, and XGBoost. As a result, there are many challenges in building machine learning systems in production, including determining which architecture is best for which use case, how to deploy your predictive models, and how to move from a development to a production environment.
Aki Ariga explains how to put your machine learning model into production, discusses common issues and obstacles you may encounter, and shares best practices and typical architecture patterns for deploying ML models, with example designs from the Hadoop and Spark ecosystem using Cloudera Data Science Workbench.
Aki Ariga is a field data scientist at Cloudera, where he works on service development with machine learning and natural language processing. His work has included researching spoken dialogue systems, building a large corpus analysis system, and developing services such as recipe recommendations. Aki is a sparklyr contributor. He organizes several tech communities in Japan, including Ruby, machine learning, and Julia.
©2017, O'Reilly Media, Inc. • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.