July 15-18, 2019
Portland, OR

Unlocking your serverless functions with OpenFaaS for AI chatbot projects

Sergio Méndez (Universidad San Carlos de Guatemala)
5:05pm-5:45pm Wednesday, July 17, 2019
Secondary topics: Customer Centered

Who is this presentation for?

  • Cloud architects and anyone in artificial intelligence

Level

Intermediate

Description

Many companies are starting to use AI chatbots to implement live support for customers, but these implementations are not without challenges. Join Sergio Méndez to learn about the challenges companies like Movistar face in the search for cost-effective, highly available, and stable systems, the rapid introduction of new features, and the implementation of DevOps across different kinds of systems.

Using Movistar as an example, Sergio explains what an AI chatbot is, details the lifecycle of an AI chatbot, and presents use cases for AI chatbots in different fields in the industry. One of the company's requirements was to build an application to create custom chatbots. Step one is to read the initial parameters and register a new chatbot with your IM service (in the case of Telegram, the BotFather is a chatbot that is allowed to create new chatbots). Step two is to choose an AI library to create and train your machine learning model, and in step three, you choose a serverless framework and write the code for the serverless function. In step four, you deploy that code into your serverless architecture, and in step five, you register the serverless chatbot function as the webhook of the previously registered chatbot. Then all you have to do is monitor the behavior of the new chatbot.
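As a rough illustration of steps three through five, here is a minimal sketch of what a Telegram-facing chatbot function could look like in the OpenFaaS classic Python template (the handle(req) entry point). The echo-style reply, the BOT_TOKEN environment variable, and the use of the requests library are assumptions for this example, not code from the talk.

  # handler.py: entry point of the OpenFaaS classic Python template
  import os
  import json
  import requests

  # Token issued by the BotFather when the chatbot was registered (step one);
  # injected here as an environment variable or OpenFaaS secret (assumption)
  BOT_TOKEN = os.environ["BOT_TOKEN"]
  SEND_MESSAGE_URL = "https://api.telegram.org/bot{}/sendMessage".format(BOT_TOKEN)

  def handle(req):
      # Telegram POSTs an update here once the function is set as the webhook (step five)
      update = json.loads(req)
      message = update.get("message", {})
      chat_id = message.get("chat", {}).get("id")
      text = message.get("text", "")

      # Placeholder for the trained model from step two; this sketch just echoes
      reply = "You said: " + text

      if chat_id is not None:
          requests.post(SEND_MESSAGE_URL, json={"chat_id": chat_id, "text": reply})
      return "ok"

Deploying the function (step four) is typically a faas-cli build and deploy (or faas-cli up) against your stack file, and step five amounts to calling Telegram's setWebhook method with the function's public URL.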

Sergio walks you through the ecosystem for building an AI chatbot architecture in a serverless way and explains how to use this ecosystem for different kinds of applications that rely on AI techniques such as neural networks, decision trees, and other machine learning models. You'll learn about the different components that can be used in AI applications, for example, serverless platforms, container orchestrators, SQL and NoSQL databases, message brokers, and third-party services like email and SMS.
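To make the AI side concrete, here is a small, hedged sketch of an intent classifier built with scikit-learn's decision trees, one of the techniques mentioned above; the tiny training set and the intent labels are invented purely for illustration.

  # Toy intent classifier: bag-of-words features fed to a decision tree
  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.pipeline import make_pipeline
  from sklearn.tree import DecisionTreeClassifier

  # Hypothetical training utterances and intent labels
  utterances = ["what is my balance", "show me my balance",
                "talk to an agent", "I need human support"]
  intents = ["check_balance", "check_balance", "human_handoff", "human_handoff"]

  model = make_pipeline(CountVectorizer(), DecisionTreeClassifier())
  model.fit(utterances, intents)

  print(model.predict(["can I see my balance"]))  # likely ['check_balance']

In a real system this model would be trained offline, serialized, and loaded by the serverless function at startup.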

Sergio then leads a deep dive into serverless architecture and demonstrates how OpenFaaS can help you build these architectures. He explores the software components that make up OpenFaaS, including of-watchdog, the most important piece for implementing serverless functions that need to preload ML models, such as TensorFlow models, and explains how this provides a simple framework for implementing some level of DevOps based on processes and agile development. You'll see a system for deploying serverless functions with OpenFaaS on top of an open source architecture built with Kubernetes, Docker, TensorFlow, RabbitMQ, and Redis, all managed through a Rancher dashboard.
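The key property of of-watchdog here is that the function process stays warm between requests, so an expensive ML model can be loaded once at startup rather than on every invocation. Below is a rough sketch of that pattern, assuming the OpenFaaS python3-http (of-watchdog) template and a hypothetical SavedModel path baked into the function image; adapt the preprocessing to whatever your model actually expects.

  # handler.py for an of-watchdog based template: this module is imported once
  # per worker process, so the model load below happens at startup, not per request
  import tensorflow as tf

  MODEL_PATH = "/home/app/model"  # hypothetical path to a SavedModel inside the image
  model = tf.keras.models.load_model(MODEL_PATH)

  def handle(event, context):
      # In the python3-http template, event.body carries the raw request payload
      text = event.body.decode("utf-8")
      # Assumes the model accepts raw strings (e.g., it embeds a TextVectorization layer)
      prediction = model.predict([text])
      return {"statusCode": 200, "body": str(prediction.tolist())}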

Prerequisite knowledge

  • Familiarity with Docker or Kubernetes, AI libraries like TensorFlow or scikit-learn, and major cloud providers like AWS, GCP, Azure, or DigitalOcean

What you'll learn

  • Learn the main components needed to design a serverless architecture
  • Explore useful open source technologies to design cost-effective architectures for AI applications and promote DevOps and CI/CD in your infrastructure
  • Learn how to evaluate and choose the right services from a cloud provider to prevent vendor lock-in and design a robust and highly available serverless architecture on top of open source technologies like containers

Sergio Méndez

Universidad San Carlos de Guatemala

Sergio Méndez is a systems engineer and professor of operating systems, software engineering, and AI at the Universidad San Carlos de Guatemala. Sergio is also founder and cloud architect at Curzona, an online courses startup focused on containers, DevOps, CI/CD, cloud computing technologies, and big data. He works on AI projects and open source solutions using microservices and NoSQL databases for telephone companies.