Microservices architectures decompose large applications into many smaller processes, each developed, deployed, and maintained independently of the others. This decoupling has many advantages. A team can be fully responsible for a service, and pick the language, framework, and tools that are best suited for the job.
With microservices, deployments are no longer large, risky, or infrequent events. Deploying a single service is simpler and faster, carries less risk, and can therefore happen more often. This promotes continuous deployment practices and yields much faster release cycles.
But microservices also have their own, specific constraints. Communication between services, using APIs and RPC, will typically be slower than in-process function calls. Service discovery, activation, and load balancing all become mandatory.
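As a rough illustration of the first constraint (this sketch is not part of the talk, and the toy `add` endpoint and timing loop are invented for illustration), the snippet below times an in-process function call against the same call made as an HTTP round trip over loopback. Even without any real network in between, the RPC path is orders of magnitude slower per call.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def add(a, b):
    # The "business logic": a trivial in-process function call.
    return a + b

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose add() as a toy HTTP API: GET /?a=1&b=2
        q = parse_qs(urlparse(self.path).query)
        body = str(add(int(q["a"][0]), int(q["b"][0]))).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port on loopback and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 200

t0 = time.perf_counter()
for _ in range(N):
    add(1, 2)
local = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/?a=1&b=2") as r:
        r.read()
remote = time.perf_counter() - t0

print(f"local:  {local / N * 1e6:8.1f} µs/call")
print(f"remote: {remote / N * 1e6:8.1f} µs/call")
server.shutdown()
```

The gap only widens once real networks, serialization, and load balancers sit between the two services, which is why the talk treats inter-service communication as a first-class constraint rather than a detail.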
In this talk, after presenting the expected benefits of microservices, we will walk you through those new constraints, and demonstrate how to use Docker and containers to address them.
Jerome Petazzoni is a senior engineer at Docker, where he helps others to containerize all the things. In another life he built and operated Xen clouds when EC2 was just the name of a plane, developed a GIS to deploy fiber interconnects through the French subway, managed commando deployments of large-scale video streaming systems in bandwidth-constrained environments such as conference centers, operated and scaled the dotCloud PaaS, and performed various other feats of technical wizardry. When annoyed, he threatens to replace things with a very small shell script.
©2015, O'Reilly Media, Inc.