Deep learning is used broadly at the forefront of research, achieving state-of-the-art results across a variety of domains. That doesn’t mean it’s a fit for every task, however, especially once the constraints of production are considered. While deep learning can occasionally be applied off the shelf, most domains require understanding the task and the trade-offs involved in crafting a specific solution, particularly when the system is designed with production in mind.
Exploring successes in both research and production, Stephen Merity investigates which tasks deep learning excels at, which tasks trigger a failure mode, and where current research is looking to remedy the situation. By pulling apart specific examples, such as Google’s Neural Machine Translation architecture and Salesforce Research’s quasi-recurrent neural network, Stephen analyzes the trade-offs made when stepping away from research toward production systems, noting when deep learning is likely the wrong tool for the job, especially when factoring in real-world restrictions, such as training a custom model for each customer or tackling vast datasets.
Stephen Merity is a senior research scientist at Salesforce Research (formerly MetaMind), where he works on researching and implementing deep learning models for vision and text, with a focus on memory networks and neural attention mechanisms for computer vision and natural language processing tasks. Previously, Stephen worked on big data at Common Crawl, data analytics at Freelancer.com, and online education at Grok Learning. Stephen holds a master’s degree in computational science and engineering from Harvard University and a bachelor of information technology from the University of Sydney.
©2017, O'Reilly Media, Inc.