Engineering the Future of Software
November 13–14, 2016: Training
November 14–16, 2016: Tutorials & Conference
San Francisco, CA

Running containerized applications securely in production

Giuseppe de Candia (Midokura)
1:15pm–2:05pm Tuesday, 11/15/2016
Microservices, pros and cons
Location: California West Level: Intermediate
Average rating: 3.50 (2 ratings)

Prerequisite knowledge

  • A basic understanding of Linux containers and networking principles

What you'll learn

  • Understand ops and monitoring best practices and gain exposure to advanced analytics tools

Description

A recent study by New Relic shows that 46% of deployed containers run for one hour and 27% run for about five minutes—talk about short-lived. In such a fast-paced, disposable computing environment, cloud operators have a difficult time managing workloads and keeping the container environment from turning into unmanageable chaos.

Today, more and more applications are being packaged into containers and deployed in a microservices architecture. Containerization and microservices go hand in hand. When applications are scaled out across multiple host systems to keep up with growing demand, the ability to manage each host system and abstract away the complexity of the underlying platform becomes attractive. At a macro level, being able to provide a seamless network across multiple clouds (say, an on-premises private cloud paired with a public cloud) becomes imperative.

Cloud operators must consider:

  • How to schedule containers to prevent resource contention
  • How to implement container isolation to ensure security containment in case of a breach
  • What it means to network containers together, and what that implies for provisioning, load balancing, and availability
  • How to analyze and troubleshoot containers despite their short life spans

However, the downside of networking in a microservices architecture is that it often creates more components to manage and more endpoints to secure. Keeping configurations consistent and maintaining security policies thus becomes even more challenging than it already is.

This is where advanced schedulers and network virtualization come into play. Advanced scheduling and orchestration technologies, such as Kubernetes, allow much more control over the containers running on the infrastructure. Containers can be labeled, grouped, and given their own subnet for communication.
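The grouping idea above can be sketched as a toy model of label-based selection, the mechanism Kubernetes uses to group containers for placement and networking. This is an illustrative simulation only, not the Kubernetes API; the container names and labels are hypothetical.

```python
# Toy model of label-based selection (illustrative only; not a real
# Kubernetes API). A selector matches a container when every key/value
# pair in the selector is present in the container's labels.

def matches(labels: dict, selector: dict) -> bool:
    """Return True if all selector key/value pairs appear in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Hypothetical containers with labels, as an orchestrator might track them.
containers = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

# Group the frontend containers, e.g. to place them on a shared subnet.
frontend = [c["name"] for c in containers
            if matches(c["labels"], {"tier": "frontend"})]
print(frontend)  # ['web-1', 'web-2']
```

Because selection is driven by labels rather than container identity, the grouping survives containers being destroyed and recreated, which matters in an environment where most containers live for minutes.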

Giuseppe de Candia explains how to take the chaos out of these short-lived computing engines and the security implications to consider along the way.

Topics include:

  • How advanced scheduling, container orchestration, and open source network virtualization work together to automate container networking, load balancing, and network high availability
  • How labels and security groups can provide fine-grained control that allows cloud operators to implement their own tenant-level, protocol-level, and port-level security on containers
  • How advanced analytics tools, such as Elasticsearch and Logstash, can provide context on what containers are actually doing and help troubleshoot application performance issues without having to track down each container, its host, or the security policies applied to it
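The last point above rests on tagging log events with container labels so that queries target a role rather than a specific (short-lived) container ID. A minimal sketch of that idea, with a hypothetical event shape (a real pipeline would ship such events through Logstash into Elasticsearch and query there):

```python
# Sketch of label-enriched container log events (hypothetical data).
# Querying by label means the query keeps working as containers churn.

events = [
    {"container": "web-7f9c", "labels": {"app": "web"},
     "msg": "GET /health 200", "latency_ms": 3},
    {"container": "web-2b41", "labels": {"app": "web"},
     "msg": "GET /cart 500", "latency_ms": 412},
    {"container": "db-9a03", "labels": {"app": "db"},
     "msg": "slow query", "latency_ms": 950},
]

def slow_requests(events, app, threshold_ms):
    """Filter events by app label and latency, not by container ID."""
    return [e for e in events
            if e["labels"].get("app") == app
            and e["latency_ms"] > threshold_ms]

for e in slow_requests(events, "web", 100):
    print(e["container"], e["msg"])  # web-2b41 GET /cart 500
```

The design choice being illustrated: identity-free queries. Any given container may be gone by the time an operator investigates, but the label-scoped view of its behavior persists.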

Giuseppe de Candia

Midokura

Giuseppe Pino de Candia is CTO at Midokura, where he leads technical innovation and the evolution of its flagship technology, MidoNet. He joined Midokura as a software engineer, built the early versions of MidoNet, and went on to lead the Network Controller team as engineering lead and the Architecture team as chief architect in Barcelona. Previously, Pino built Dynamo, a highly available NoSQL data store for Amazon.com; Amazon technologies similar to Dynamo power the Amazon S3 service today. Pino holds a master of engineering and a bachelor of science, both in computer science, from Cornell University.