Deep learning workloads are compute intensive, and training these types of models is best done with specialized hardware like GPUs. Luciano Resende outlines a pattern for building deep learning models interactively in Jupyter Notebooks on commodity hardware, then leveraging platforms and services such as Fabric for Deep Learning (FfDL) for cost-effective full-dataset training.
This session is sponsored by IBM Watson.
Luciano Resende is a senior technical staff member (STSM) and open source data science/AI platform architect at IBM CODAIT (formerly the Spark Technology Center). He is a member of the ASF, where he has been contributing to open source for over 10 years. He currently contributes to various big data-related Apache projects in the Apache Spark ecosystem, as well as Jupyter ecosystem projects, building a scalable, secure, and flexible enterprise data science platform.
©2018, O'Reilly Media, Inc. All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.