Deep learning workloads are compute intensive, and training these types of models is best done with specialized hardware like GPUs. Luciano Resende outlines a pattern for building deep learning models interactively with Jupyter notebooks on commodity hardware, then leveraging platforms and services such as Fabric for Deep Learning (FfDL) for cost-effective full-dataset training.
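The workflow described here — fast, small-sample iteration in a notebook on commodity hardware, with full-dataset training delegated to a remote service — can be illustrated with a minimal, framework-agnostic sketch. This uses a tiny NumPy logistic-regression trainer as a stand-in for the local prototyping step; the function names and data are illustrative and are not FfDL's API:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=200):
    """Tiny logistic-regression trainer: a stand-in for the quick,
    small-sample experimentation done locally in a notebook before
    submitting a full-dataset job to a service such as FfDL."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))       # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)    # gradient of log loss w.r.t. w
        grad_b = float(np.mean(p - y))     # gradient w.r.t. bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Prototype on a small synthetic sample (the "commodity hardware" step).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w, b = train_logreg(X, y)
preds = (X @ w + b > 0).astype(float)
accuracy = float(np.mean(preds == y))
```

Once a model architecture and hyperparameters look promising on the sample, the same training code would be packaged and submitted to the remote platform for GPU-backed training on the full dataset.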
This session is sponsored by IBM Watson.
Luciano Resende is a data science platform architect at IBM CODAIT (formerly the Spark Technology Center). A member of the Apache Software Foundation (ASF), Luciano has been contributing to open source there for over 10 years and is currently contributing to various big data-related Apache projects around the Apache Spark ecosystem, as well as building a scalable, secure, and flexible enterprise data science platform within the Jupyter ecosystem.
©2018, O'Reilly Media, Inc.