Deploying and serving chatbot models at Amazon scale
Who is this presentation for? Software engineers, machine learning engineers, applied scientists, and technical managers
In this session, you will learn how Amazon Lex, Amazon's cloud-based, AI-powered chatbot service, was architected, built, and deployed. We will go over practical considerations for deploying and maintaining deep learning models in production, survey the lessons the team learned along the way, and explore the technologies behind the service, Apache MXNet and MXNet Model Server, and how they were leveraged to build and scale it.
Prerequisite knowledge: Software engineering; basic knowledge of deep learning.
What you'll learn: Deploying deep learning models in production; designing DL-powered systems.
Amazon Web Services
Hagay Lupesko is part of the deep learning leadership team at Amazon Web Services, where he currently works to democratize artificial intelligence and deep learning through cloud services and open source projects such as MXNet and ONNX. He has been busy building software for the past 15 years and still enjoys every bit of it (literally)! He has engineered and shipped products across a variety of domains: from 3D cardiac imaging with real-time in-vessel tracking, through semiconductor fab systems that measure structures the size of molecules, to web-scale systems with global distribution.