July 20–24, 2015
Portland, OR

Development, testing, acceptance and production with Docker and Kubernetes

Patrick Reilly (Kismatic, Inc.)
5:00pm–5:40pm Wednesday, 07/22/2015
Sponsored E 143/144
Average rating: 3.25 (4 ratings)

Let’s say you just started at a new company or you discovered a handy new open source library and you’re excited to get running. You git clone the code, search for install instructions, and come up empty. You ask your co-workers where you can find documentation, and they laugh. “We’re agile, we don’t waste time on documentation.” Everyone remembers that setting things up the first time was painful, a hazing ritual for new hires, but no one really remembers all the steps, and besides, the code has changed and the process is probably different now anyway.

Docker containers start and stop so quickly, and are so lightweight, that you could easily run a dozen of them on your developer workstation (e.g. one for a front-end service, one for a back-end service, one for a database, and so on). But what makes Docker even more powerful is that a Docker image will run exactly the same way no matter where you run it. So once you’ve put in the time to make your code work in a Docker image on your local computer, you can ship that image to any other computer and be confident that your code will still work when it gets there.
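As a concrete sketch, packaging a hypothetical Go service into an image might take nothing more than a Dockerfile like this (the app name and port are illustrative, not from the talk):

```dockerfile
# Build on the community-maintained Go base image
FROM golang:1.4

# Copy the app source into the image and build it
COPY . /go/src/my-go-app
RUN go install my-go-app

# This same command runs identically on any Docker host
CMD ["/go/bin/my-go-app"]
EXPOSE 8080
```

Everything the app needs — compiler, libraries, configuration — travels inside the image, which is why it behaves the same on a laptop and in production.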

Once you get your Docker image working locally, you can share it with others. You can run docker push to publish your Docker images to the public Docker registry or to a private registry within your company. Or better yet, you can check your Dockerfile into source control and let your continuous integration environment build, test, and push the images automatically. Once the image is published, you can use the docker run command to run that image on any computer — another developer’s workstation, a test environment, or production — and you can be sure the app will work exactly the same way everywhere without anyone having to fuss around with dependencies or configuration. Many hosting providers have first-class support for Docker, such as Amazon’s EC2 Container Service and Google Container Engine (GKE).
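The build–push–run loop described above might look like this (the registry and image names are hypothetical, and the commands assume a running Docker daemon):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t mycompany/my-go-app:1.0 .

# Publish it to the public Docker registry (or a private one)
docker push mycompany/my-go-app:1.0

# On any other machine: pull and run the exact same image
docker run -d -p 8080:8080 mycompany/my-go-app:1.0
```

Because the tag identifies an immutable image, the `docker run` on another machine starts bit-for-bit the same software that was built and tested in CI.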

Once you start using Docker, it’s addictive — it’s liberating to be able to monkey around with different Linux flavors, dependencies, libraries, and configurations, all without leaving your development workstation in a messy state. You can quickly and easily switch from one Docker image to another (e.g. when switching from one project to another), throw an image away if it isn’t working, or use Docker Compose to work with multiple images at the same time (e.g. connect an image that contains a Go app to another image that contains a MySQL database). And you can leverage the thousands of open source images in the Docker Public Registry. For example, instead of building the my-go-app image from scratch and trying to figure out exactly which combination of libraries make Go happy, you could use the pre-built go image which is maintained and tested by the Docker community.
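Wiring a Go app image to a MySQL image, as described above, can be sketched in a docker-compose.yml — service names, image tags, and the password here are placeholders:

```yaml
web:
  image: mycompany/my-go-app:1.0
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: example
```

A single `docker-compose up` then starts both containers together, and `docker-compose rm` throws them away without leaving anything behind on your workstation.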

This tutorial serves two purposes. Once you are using Docker containers, the next question is how to start and scale containers across multiple Docker hosts, balancing the containers among them. Enter Kubernetes: it adds a higher-level API that defines how containers are logically grouped, letting you define pools of containers, load balancing, and affinity.

Kubernetes is an open source project for managing a cluster of Linux containers as a single system: it runs Docker containers across multiple hosts and provides co-location of containers, service discovery, and replication control. It was started by Google and is now supported by Kismatic, Mesosphere, Microsoft, Red Hat, IBM, and Docker, among many others.
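To make the grouping and replication concrete, here is a hypothetical kubectl session against a running cluster (image and names are illustrative; `kubectl run` at this point creates a replication controller managing the pods):

```shell
# Run three replicas of the image, managed by a replication controller
kubectl run my-go-app --image=mycompany/my-go-app:1.0 --replicas=3 --port=8080

# Expose the pods behind a single load-balanced service endpoint
kubectl expose rc my-go-app --port=80 --target-port=8080

# Grow the pool; Kubernetes schedules the new pods across hosts
kubectl scale rc my-go-app --replicas=5
```

The service endpoint stays stable while pods come and go underneath it — that is the service discovery the abstract refers to.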

Google has been using container technology for over ten years, starting over 2 billion containers per week. With Kubernetes, it shares that container expertise in an open platform for running containers at scale.

Kubernetes is an amazing project and a highly promising way to manage Docker deployments across multiple servers, simplifying the execution of long-running and distributed Docker containers. By abstracting infrastructure concerns and working with desired states instead of processes, it makes clusters easy to define and provides self-healing capabilities out of the box. In short, Kubernetes makes managing fleets of Docker containers easier.
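The desired-state idea can be sketched as a replication controller manifest (all names are illustrative): you declare that three replicas should exist, and Kubernetes continuously reconciles toward that state, replacing any pod that dies or loses its host.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-go-app
spec:
  replicas: 3          # desired state: keep exactly 3 pods running
  selector:
    app: my-go-app
  template:            # pod template used to create replacements
    metadata:
      labels:
        app: my-go-app
    spec:
      containers:
      - name: my-go-app
        image: mycompany/my-go-app:1.0
        ports:
        - containerPort: 8080
```

This is the self-healing the abstract mentions: you never restart a crashed container by hand; the controller notices the gap between actual and desired state and fills it.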

I hope that in the future, more and more companies will package their tech stacks as Docker images so that the on-boarding process for new hires is reduced to a single docker run or docker-compose up command. Similarly, I hope that more and more open source projects will be packaged as Docker images so that instead of a long series of install instructions in the README, you run a single docker run command and have the code working in minutes.

This session is sponsored by Kismatic


Patrick Reilly

Kismatic, Inc.

Patrick Reilly is the CEO of Kismatic, the enterprise Docker and Kubernetes support company.

He excels at developing elegant solutions to complicated problems and at applying emerging technologies to everyday ones. He develops new functionality for, and maintains, technical solutions for a diverse customer base, working in Scala, Go, Java, ASP.NET, C, C++, C#, PHP, Python, Ruby on Rails (RoR), and Zope/Plone.

He has a wealth of platform development experience building large, high-traffic websites, having previously worked at Mesosphere, the Wikimedia Foundation (Wikipedia), OmniTI, Schematic, Media Revolution, Sony Pictures, and numerous others. He is very active in the open source community.