Engineer for the future of Cloud
June 10-13, 2019
San Jose, CA

Isolate computing

Zack Bloom (Cloudflare)
3:50pm–4:30pm Thursday, June 13, 2019
Serverless
Location: 230 A
Average rating: 4.80 (5 ratings)

Level

Intermediate

Prerequisite knowledge

  • Familiarity with existing methods of doing compute (servers, EC2, serverless)

What you'll learn

  • Understand when isolate-based serverless may be more appropriate to use than other forms of compute

Description

For 40 years, computation has been built around the idea of a process as the fundamental abstraction of a piece of code to be executed. In that time, how we write code has changed dramatically, culminating with serverless, but the nature of a process has not. Processes incur context-switching overhead as the operating system moves the processor from one serverless container to another, wasting CPU cycles. They can do I/O and other critical tasks only by firing interrupts into the kernel, which wastes as much as 33% of the execution time of an I/O-bound function. They also incur start-up time as heavyweight virtual machines like Node.js are initialized, which we experience in the serverless world as a cold start. The fear of cold starts forces us to do complex work to keep serverless functions warm and means even infrequently used functions consume precious memory just to avoid them.
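To make the keep-warm workaround concrete, here is a minimal sketch (not from the talk) of the pattern the description alludes to: a scheduled "warmup" invocation pings a container-based function so its runtime stays resident. The event shape and scheduling wiring are assumptions chosen purely for illustration.

// Hypothetical Node.js handler for a container-based serverless platform.
// A scheduler periodically invokes it with { warmup: true } purely to keep
// the underlying container (and its initialized runtime) in memory.
exports.handler = async (event) => {
  if (event && event.warmup) {
    // Warm-up ping: do no real work, just keep the container alive.
    return { statusCode: 204, body: '' };
  }
  // Genuine requests are served by the instances kept warm above,
  // each of which holds on to memory even when rarely used.
  return { statusCode: 200, body: JSON.stringify({ message: 'hello' }) };
};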

There may be an alternative. Web browsers have solved the same problem, the need to run many instances of untrusted code with minimal overhead and to start new code lightning fast, in an entirely different way. They run a single virtual machine and encapsulate each piece of code not in a process but in an isolate. These isolates can be started in 5 milliseconds, 100 times faster than a Node Lambda serverless function, and they consume one-tenth the memory. Beyond serverless, being able to begin executing server-side code in less time than it takes a web request to connect opens dramatic possibilities. Services can be scaled to millions of requests per second instantaneously. They can be deployed to hundreds of locations around the world with the same economics as deploying to just one. Even better, eliminating process-related overhead brings us close to the economics of running on bare metal, with the ergonomics of serverless programming.
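For contrast, a minimal isolate-style handler in the Service Worker format used by Cloudflare Workers looks like the sketch below. The route and response text are invented for the example; the point is that each request is served by an isolate inside a shared V8 runtime, with no per-request process or container to boot.

// Illustrative isolate-based handler (Service Worker syntax).
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // No process start-up or runtime initialization happens per request;
  // the isolate is created in milliseconds inside an already-running VM.
  const url = new URL(request.url);
  return new Response(`Hello from an isolate, you asked for ${url.pathname}`, {
    headers: { 'content-type': 'text/plain' },
  });
}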

Zack Bloom shares an understanding of where isolate-based serverless might be more appropriate than other forms of compute. In those situations, you’ll be able to deploy code that runs affordably close to every internet visitor, autoscales instantaneously, and can be as much as three times cheaper per CPU cycle than container-based serverless systems.


Zack Bloom

Cloudflare

Zack Bloom is the director of product strategy at Cloudflare. Previously, he was the cofounder of Eager (acquired by Cloudflare in 2016). Zack wrote the JavaScript behind open source libraries that total more than 50,000 stars on GitHub, are included in Twitter Bootstrap, and are used on over a million websites.


Comments

Jason Holdeman | Sr. Software Engineer
06/21/2019 12:57am PDT

Zack,
I loved your presentation at Velocity last week. I want to share it with my company, so I was wondering if you could share your slides?
Thanks!
Jason Holdeman
jason.holdeman@terumobct.com