Much of the success of deep learning in recent years can be attributed to scale—bigger datasets and more computing power—but scale can quickly become a problem. Distributed, asynchronous computing in heterogeneous environments is complex, hard to debug, and hard to profile and optimize. Martin Wicke demonstrates how to automate or abstract away such complexity, using TensorFlow as an example. Martin covers the sources of complexity in large-scale machine-learning systems, explains how to mitigate that complexity, and touches on future avenues for this work, where, unsurprisingly, machine learning will be used to understand and improve machine learning.
Martin Wicke is a software engineer at Google working on making sure that TensorFlow is a thriving open source project. Previously, Martin worked at a number of startups and did research on computer graphics at Berkeley and Stanford.
©2016, O'Reilly Media, Inc. • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
Apache Hadoop, Hadoop, Apache Spark, Spark, and Apache are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries and are used with permission. The Apache Software Foundation has no affiliation with and does not endorse or review the materials provided at this event, which is managed by O'Reilly Media and/or Cloudera.