At PayPal, four nines of availability is the norm. In pursuit of additional nines, each exponentially harder to achieve, the company has recently begun applying deep learning to forecasting datacenter metrics. Seq2Seq networks are well suited to this difficult problem, but little about their use here has been shared with the open community.
Aashish Sheshadri shines a light on how PayPal applies Seq2Seq networks to forecasting CPU and memory metrics at scale. Forecasting gives alerting flows a head start, reducing MTTD, augmenting autoremediation, and consequently improving MTTR. Aashish also highlights how data scientists' lives have been greatly simplified by template notebooks stitched into stateful and stateless pipelines using PayPal's open source PPExtensions, and he demonstrates how to use template notebooks as predictable execution drivers that enable abstraction and orchestration of TensorFlow distributed training and inference pipelines at scale on HPC clusters.
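To make the forecasting setup concrete: a Seq2Seq forecaster consumes a window of past metric values and emits a window of future ones. The sketch below (not PayPal's actual pipeline; the function name, window sizes, and sample data are illustrative assumptions) shows how a raw CPU-utilization series would be framed as encoder input and decoder target pairs before being fed to a TensorFlow model.

```python
# Minimal sketch, assuming a univariate metric series sampled at a fixed
# interval. An encoder-decoder (Seq2Seq) model would consume each row of
# X (the past window) and be trained to predict the matching row of Y
# (the future window).
import numpy as np

def make_windows(series, past=12, future=4):
    """Slice a 1-D metric series into (past, future) window pairs."""
    X, Y = [], []
    for i in range(len(series) - past - future + 1):
        X.append(series[i:i + past])
        Y.append(series[i + past:i + past + future])
    return np.array(X), np.array(Y)

# Hypothetical CPU-utilization samples (percent), one per minute.
cpu = np.array([20, 22, 21, 25, 30, 28, 27, 35, 40, 38,
                36, 42, 45, 44, 43, 50, 52, 51, 49, 55], dtype=float)
X, Y = make_windows(cpu, past=12, future=4)
print(X.shape, Y.shape)  # each row pairs a 12-step history with the next 4 steps
```

A predicted future window that diverges sharply from the observed values is exactly the signal an alerting flow can act on ahead of time.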
Aashish Sheshadri is a research engineer at PayPal, where he currently ideates and applies deep learning to new avenues and actively contributes to the Jupyter ecosystem and the SEIF Project. He holds an MS in computer science from the University of Texas at Austin, where his research focused on active learning with human-in-the-loop systems.