If you’ve ever worked on concurrent or parallel systems, race conditions have invariably plagued your existence. They are difficult to identify and debug, and nearly impossible to test repeatably. While race conditions intuitively seem bad, it turns out there are cases in which we can use them to our advantage! In this talk, we’ll discuss a number of ways that race conditions — and correctly detecting them — are used to improve throughput and reduce latency in high-performance systems.
We begin this exploration with a brief discussion of the various types of locks, non-blocking algorithms, and the benefits of each. We’ll look at a naive test-and-set spinlock and show how deliberately introducing a race condition on reads significantly improves lock acquisition throughput. From there, we’ll investigate non-blocking algorithms and how they detect race events to ensure correct, deterministic, and bounded behavior, analyzing a durable, lock-free memory allocator written in C using the Concurrency Kit library.
Devon H. O’Dell is a recovering competitive Guitar Hero and Rock Band addict, but still occasionally enjoys rhythm games and jamming on guitar and drums. Today, he is a Senior Systems Engineer at Google. Prior to Google, Devon held software leadership positions at Fastly and Message Systems, implementing high-performance, low-latency network servers. His experience over the past 17 years ranges from web applications to embedded systems firmware (and most areas in between). His primary technical interests are developing and debugging low-latency concurrent network systems software and related tools.
©2015, O’Reilly UK Ltd • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.