Despite the remarkable advances and successes of deep learning across a wide variety of tasks in academia and a few specific industries, most industries, from consumer electronics to security to autonomous vehicles, still face considerable barriers to adoption. Critical pain points throughout the deep learning development cycle make deep learning systems hard to design and build, hard to scale for deployment, and hard to understand.
Alex Wong discusses some of the operational challenges around scalability and explainability when deploying deep learning in real-world scenarios and explains how advances in these areas are enabling more seamless, accessible, deployable, and transparent deep learning design and development.
Alexander Wong is chief scientist at DarwinAI, a Waterloo-based startup that enables deep learning optimization and explainability by way of its patented Generative Synthesis technology. He is also the Canada Research Chair in Artificial Intelligence and Medical Imaging, a founding member of the Waterloo Artificial Intelligence Institute, and an associate professor in the Department of Systems Design Engineering at the University of Waterloo. Alex has published over 450 refereed journal and conference papers and holds patents in fields such as computational imaging and artificial intelligence. He has received numerous awards for his work in artificial intelligence, including best paper awards at the prestigious NIPS conference in 2017 and 2016 for work on transparent and interpretable machine learning and on efficient methods for deep neural networks, respectively.
©2018, O'Reilly Media, Inc.