Engineering the Future of Software
Feb 25–26, 2018: Training
Feb 26–28, 2018: Tutorials & Conference
New York, NY

The second-hardest part of microservices: Calling your services

Christian Posta
4:50pm–5:40pm Wednesday, February 28, 2018
Secondary topics: Best Practice, Framework-focused
Average rating: 4.67 (3 ratings)

Who is this presentation for?

  • DevOps practitioners, developers, and architects

Prerequisite knowledge

  • Familiarity with microservices, Netflix OSS, and other microservices toolkits

What you'll learn

  • Learn how Envoy Proxy and Service Mesh can solve your application networking problems
  • Understand how the next generation of microservices will push application networking concerns out of the application and into platforms like Kubernetes


We’ve been iterating on how we build services architectures for the past few decades, going from various flavors of RPC to messaging to RPC again. (Currently, enterprise architectures are dominated by cloud-native microservices.) Each new technology platform forces us to reevaluate our architectures to optimize for the holy grail: fast feedback loops of service change to reduce time to value. However, these systems architectures are still distributed systems, and the laws of distributed computing still hold.

As we strive toward architecture guidelines like microservices, we’re faced with the fact that we’re making more network calls and need to do more integration to get our system to work. The problem with more integration is that we create more ways for our applications to break and for failures to propagate much faster than before. We need a way to get the benefits of microservices without the serious disadvantages of a practical implementation: a way to call our microservices while remaining resilient to distributed systems failures.
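To make the problem concrete, here is a minimal sketch of the kind of resilience logic each application would otherwise carry in a library: a circuit breaker that fails fast after repeated errors. The class and parameter names are illustrative, not taken from any particular toolkit.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker sketch: open the circuit after
    max_failures consecutive errors, fail fast while open, and
    allow a trial call again after reset_timeout seconds."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering a struggling service.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Multiply this by timeouts, retries, load balancing, and tracing, and then by every language and framework in the organization, and the maintenance cost becomes clear.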

There are a number of tools that provide necessary capabilities like resilience, routing, and observability, and they work fine if you have a homogeneous stack. However, at most enterprises, this is not the case. For each combination of platform, language, and framework used to build microservices, you must solve for the following critical functions: routing, adaptive/client-side load balancing, service discovery, circuit breaking, timeouts/retries/budgets, rate limiting, metrics/logging/tracing, fault injection, and A/B testing/traffic shaping/request shadowing. Trying to do all of these things in application-layer libraries across all your languages and all your frameworks becomes incredibly complex and expensive to maintain. Christian Posta offers an overview of Envoy Proxy and Service Mesh, explaining how they solve application networking problems more elegantly by pushing these concerns down to the infrastructure layer and demonstrating how it all works.
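As a sketch of what pushing these concerns into the infrastructure layer can look like, here is a hypothetical Istio `v1alpha3` configuration (the `reviews` service name is made up) that declares a timeout, a retry policy, and outlier-detection-based circuit breaking; the Envoy sidecar proxies enforce these for every caller, regardless of language or framework:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
    timeout: 5s            # overall deadline for a request
    retries:
      attempts: 3          # retry budget enforced by the sidecar
      perTryTimeout: 2s
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:      # circuit breaking: eject failing hosts
      consecutiveErrors: 5
      interval: 10s
      baseEjectionTime: 30s
```

The application code makes a plain HTTP call; the resilience policy lives in configuration that operators can change without redeploying services.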


Christian Posta

Christian Posta is a field CTO who helps companies create and deploy large-scale, resilient, distributed architectures—many of which we now call serverless and microservices. Previously, Christian spent time at web-scale companies. He’s well known in the community as an author—of Istio in Action (Manning) and Microservices for Java Developers (O’Reilly)—a frequent blogger, a speaker, an open source enthusiast, and a committer on various open source projects, including Istio and Kubernetes. He enjoys mentoring, training, and leading teams to be successful with distributed systems concepts, microservices, DevOps, and cloud native application design. You can find Christian on Twitter as @christianposta.