Data scientists and model developers routinely trade off data size or model complexity to fit within limited GPU memory. Scott Soutter and Jason Furmanek discuss IBM’s updates to TensorFlow, which dramatically increase the usable memory and maximum trainable model size.
This technique, which is being upstreamed to the open source community, makes it possible to load the entire model in system memory. Tensors in the model graph are “paged into” GPU memory from system memory, which acts as a cache in this case. Tensors are swapped back and forth as they are needed during graph training and execution, opening up the opportunity for more complex matrices, full-scale data elements (such as full-resolution images), and larger training batch sizes.
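The cache-like behavior described above can be sketched conceptually: the whole model lives in system memory, a small “GPU” pool holds only the tensors currently in use, and the least-recently-used tensor is evicted back to host memory when the pool is full. This is a minimal illustrative sketch in plain Python; the class, method names, and eviction policy are assumptions for illustration, not IBM’s actual TensorFlow implementation.

```python
from collections import OrderedDict

class TensorPager:
    """Toy sketch of cache-like tensor paging between system memory
    and a capacity-limited 'GPU' pool (illustrative only)."""

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity  # max tensors resident on the "GPU"
        self.host = {}                    # system memory: name -> tensor
        self.gpu = OrderedDict()          # resident set, kept in LRU order
        self.page_ins = 0                 # count of host -> GPU transfers

    def store(self, name, tensor):
        # The entire model is loaded into system memory up front.
        self.host[name] = tensor

    def fetch(self, name):
        if name in self.gpu:
            # Cache hit: mark the tensor most recently used.
            self.gpu.move_to_end(name)
        else:
            # Cache miss: evict the LRU tensor back to host if full,
            # then page the requested tensor into GPU memory.
            if len(self.gpu) >= self.gpu_capacity:
                evicted_name, evicted = self.gpu.popitem(last=False)
                self.host[evicted_name] = evicted
            self.gpu[name] = self.host[name]
            self.page_ins += 1
        return self.gpu[name]

pager = TensorPager(gpu_capacity=2)
for i in range(4):
    pager.store(f"w{i}", [float(i)] * 4)      # toy stand-ins for tensors
for name in ["w0", "w1", "w2", "w0", "w3"]:   # access pattern during one step
    pager.fetch(name)
```

In this access pattern every fetch after the pool fills forces a swap, which is the cost the technique accepts in exchange for training models larger than GPU memory.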
Scott Soutter is a global offering manager with the IBM Cognitive Systems business, with responsibility for solutions in deep learning, artificial intelligence, and high-performance computing. In this role, Scott has helped governmental agencies, scientific and research communities, and commercial customers apply novel artificial intelligence approaches to some of the world’s most complex compute problems. Previously, Scott was a global technical sales manager with IBM’s Software Defined Infrastructure business, where he led a team of global solution architects with expertise in cluster computing, high-performance filesystems, and complex industry architectures. During his 20-year career with IBM, Scott has been a global cluster sales executive for high-performance and technical computing, a technical architect for IBM Unix systems sales, and a business development executive who helped build IBM’s largest x86 OEM customer. Scott holds a BA in anthropology and an MBA with a focus on organizational change. He resides in Portland, Oregon, with his family, and in his spare time enjoys fly fishing, amateur photography, and recreational swimming.
Jason Furmanek works at IBM.
©2018, O'Reilly Media, Inc.