Containers and anycast IPs at DigitalOcean

Andrew Kim (DigitalOcean)
4:15pm–4:55pm Thursday, July 19, 2018
Level: Intermediate
Average rating: 5.00 (2 ratings)

Who is this presentation for?

  • Software engineers, DevOps engineers, and system administrators

Prerequisite knowledge

  • Experience running containers in production
  • A basic understanding of Linux networking, ideally including container networking
  • Experience administering container orchestrators, such as Kubernetes and Mesos (useful but not required)

What you'll learn

  • Learn how DigitalOcean builds highly available and scalable container networks on top of container orchestrators like Kubernetes
  • Explore Linux networking tools and protocols
  • Better understand data center networking

Description

Today’s container networking technology has made it significantly easier to build distributed systems on top of container orchestrators such as Kubernetes, Mesosphere, and Docker Swarm. Container networking technologies use Linux primitives such as iptables and IPVS to provide load-balancing capabilities for network traffic across containers in a cluster. These simple yet powerful tools are a cornerstone of successful containerized systems, as they provide highly available environments with little to no effort.
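
To make the load-balancing mechanism concrete, here is a minimal sketch, in the style of an iptables-based service proxy such as kube-proxy, of how traffic for a single service IP can be spread probabilistically across container endpoints. The chain names, VIP, and pod addresses are made-up examples for illustration, not rules from any particular cluster.

```python
# Illustrative sketch: how an iptables-based service proxy (kube-proxy style)
# can spread traffic for one service VIP across several container endpoints.
# Chain names, the VIP, and pod IPs below are made-up examples.

def dnat_rules(vip, port, endpoints, svc_chain="SVC-EXAMPLE"):
    """Emit iptables rules that probabilistically DNAT vip:port to endpoints."""
    rules = [
        # Traffic destined for the service VIP jumps to the per-service chain.
        f"-A PREROUTING -d {vip}/32 -p tcp --dport {port} -j {svc_chain}",
    ]
    n = len(endpoints)
    for i, (pod_ip, pod_port) in enumerate(endpoints):
        sep_chain = f"SEP-EXAMPLE-{i}"
        if i < n - 1:
            # Pick each endpoint with probability 1/(remaining endpoints),
            # which works out to an even 1/n split overall.
            prob = 1.0 / (n - i)
            rules.append(
                f"-A {svc_chain} -m statistic --mode random "
                f"--probability {prob:.5f} -j {sep_chain}"
            )
        else:
            # The last endpoint catches whatever was not matched above.
            rules.append(f"-A {svc_chain} -j {sep_chain}")
        rules.append(
            f"-A {sep_chain} -p tcp -j DNAT --to-destination {pod_ip}:{pod_port}"
        )
    return rules


if __name__ == "__main__":
    for rule in dnat_rules("10.32.0.10", 80,
                           [("10.244.1.5", 8080), ("10.244.2.7", 8080)]):
        print(rule)
```

Generating the rules this way makes the trade-off visible: load balancing happens entirely in the kernel with no extra proxy hop, but every node must hold the full rule set, and the rules only apply to traffic that already reaches the cluster.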

Despite the many benefits of container networking, running containerized applications that are latency sensitive and globally distributed remains extremely challenging. Container networking is mainly scoped to in-cluster traffic, leaving little room to distribute an application globally across multiple clusters. Moreover, extending a container network to handle external traffic requires many additional layers of abstraction, which usually introduce points of failure in a cluster and increase end-to-end latency.

Andrew Kim leads a technical deep dive into how DigitalOcean uses anycast IPs, BGP, and Kubernetes to run globally distributed services on containers. Along the way, Andrew discusses design considerations for scalability, architectural trade-offs, data center networking, lessons learned in production, and the challenges of adopting containers for latency-sensitive applications.
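
The talk covers DigitalOcean's production design; as background, the general anycast pattern can be sketched roughly as follows: every site announces the same /32 over BGP while its local copy of the service is healthy and withdraws the route otherwise, so clients are routed to the nearest healthy site. The VIP, health-check URL, and the announce/withdraw helpers below are hypothetical placeholders standing in for whatever BGP speaker a deployment actually uses.

```python
# Rough sketch of the anycast pattern: announce the shared /32 over BGP while
# the local service is healthy, withdraw it when health checks fail.
# The VIP, health URL, and announce/withdraw helpers are hypothetical
# placeholders, not DigitalOcean's actual implementation.
import time
import urllib.request

ANYCAST_PREFIX = "192.0.2.10/32"               # example anycast VIP (TEST-NET-1)
HEALTH_URL = "http://127.0.0.1:8080/healthz"   # local service health endpoint


def service_is_healthy(timeout=1.0):
    """Return True if the local service answers its health check."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def announce_route(prefix):
    """Placeholder: tell the local BGP speaker to advertise `prefix`."""
    print(f"announce {prefix}")


def withdraw_route(prefix):
    """Placeholder: tell the local BGP speaker to stop advertising `prefix`."""
    print(f"withdraw {prefix}")


def control_loop(interval=2.0):
    announced = False
    while True:
        healthy = service_is_healthy()
        if healthy and not announced:
            announce_route(ANYCAST_PREFIX)
            announced = True
        elif not healthy and announced:
            withdraw_route(ANYCAST_PREFIX)
            announced = False
        time.sleep(interval)


if __name__ == "__main__":
    control_loop()
```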


Andrew Kim

DigitalOcean

Andrew Kim is a software engineer at DigitalOcean, where he and his team provide a robust and comprehensive set of tools for delivering services to production. Andrew is an active member of the open source community and is a maintainer of projects such as Kubernetes.