Any IT infrastructure that supports an AI and analytics workflow must handle huge numbers of files and provide large-capacity storage with high-throughput access to all of the data. Legacy filesystems can't deliver both high throughput and high file IOPS: they were designed for HDDs and are ill-suited to the low-latency, small-file, metadata-heavy I/O patterns common in AI and analytics. The result is I/O starvation, a major bottleneck for an AI system.
Liran Zvibel demonstrates why NVMe-optimized, distributed filesystems are ideal storage solutions for AI applications and introduces a next-generation, massively parallel shared filesystem, optimized for NAND flash and NVMe and built to solve the I/O starvation problem. Liran also shares a case study detailing how a large autonomous vehicle manufacturer maximizes its investment in GPUs by keeping them saturated with data, and explains how to make AI workloads compute bound rather than I/O bound and how to maximize data accessibility across your cluster.
This session is sponsored by WekaIO.
Liran Zvibel is cofounder and CEO of WekaIO, where he guides the company's long-range technical strategy. Previously, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design, and development for a portfolio of rich social media applications. He was also responsible for the principal architecture of the hardware platform, clustering infrastructure, and overall systems integration for the XIV Storage System (acquired by IBM in 2007). He holds a BSc in mathematics and computer science from Tel Aviv University.
©2018, O'Reilly Media, Inc. All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.