Deep learning and machine learning have become increasingly essential to businesses, but one of the main deployment challenges is finding the right way to operationalize a model within the company. A serverless approach to deep learning offers a cheap, simple, scalable, and reliable architecture for this. Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and query processing, you can focus entirely on training the model. The downside is that you have to keep certain limitations in mind and understand how to integrate your model accordingly.
AWS Lambda, Amazon's function-as-a-service offering, can achieve very significant results: you can do 20–30K runs for just $1 (under a pay-as-you-go model), run 10K functions in parallel, and integrate easily with other AWS services, connecting your model to an API, chatbot, database, or stream of events.
Rustem Feyzkhanov walks you through deploying a TensorFlow model for image captioning on AWS infrastructure. Rustem also shows how to construct serverless workflows for deep learning that enable A/B testing of models, canary deployments, and error handling.
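The deployment pattern the talk describes can be sketched as a minimal Lambda handler: the model is loaded once per container (outside the handler) so warm invocations skip the cold-start cost, and each request then runs inference only. The `load_model` and caption logic below are hypothetical stand-ins, not the speaker's actual code; in a real function you would load TensorFlow weights bundled in the deployment package or fetched from S3.

```python
import json


def load_model():
    # Hypothetical placeholder for a real TensorFlow load, e.g.
    # tf.saved_model.load(...) on weights shipped with the function.
    return lambda image_bytes: "a placeholder caption"


# Module-level load: runs once per Lambda container, not per request,
# which is the key trick for keeping warm-invocation latency low.
MODEL = load_model()


def handler(event, context):
    # With an API Gateway proxy integration, the request arrives as a
    # JSON string in event["body"]; here we assume a payload carrying
    # an encoded image (simplified for this sketch).
    body = json.loads(event.get("body", "{}"))
    image_bytes = body.get("image", "")
    caption = MODEL(image_bytes)
    return {
        "statusCode": 200,
        "body": json.dumps({"caption": caption}),
    }
```

A sketch like this stays within Lambda's limits as long as the model and its dependencies fit the deployment-package size cap, which is why the talk emphasizes integrating the model "in the right fashion."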
Rustem Feyzkhanov is a machine learning engineer at Instrumental, where he creates analytical models for the manufacturing industry. Rustem is passionate about serverless infrastructure (and AI deployments on it) and has ported several packages to AWS Lambda, from TensorFlow, Keras, and scikit-learn for machine learning to PhantomJS, Selenium, and WRK for web scraping.
©2019, O'Reilly Media, Inc. • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.