Until recently, Jupyter notebooks were primarily a tool for individual data scientists working on their own machines. Engineers used them mostly for exploratory or one-off analyses, or at most for early-stage development of code that would eventually need to move elsewhere. When it came time to move to production, a project would be exported to ordinary Python scripts and maintained like any other code.
That mindset is finally changing in many large organizations, as Jupyter has become a first-class member of enterprise-scale data science stacks. But there's no one right way to use Jupyter in production. With the ability to run notebooks in the background, data scientists can now keep all of their code in Jupyter while still maintaining the reliability and automation capabilities of standard Python scripts.
But just because you can stay in Jupyter, should you? Andrew Therriault walks you through several production workflows that combine Jupyter with standard Python scripts, modules, and packages. Using real-world examples from his own experience, Andrew covers the pros and cons of each approach, giving you the knowledge you need to choose the right one for your own projects.
Andrew Therriault is the chief data officer for the City of Boston, where he leads Boston's Analytics team, a nationally recognized leader in using data science to improve city operations and make progress in critical areas such as public safety, education, transportation, and health. Previously, Andrew was director of data science for the Democratic National Committee and served as editor of Data and Democracy: How Political Data Science Is Shaping the 2016 Elections (O'Reilly). He holds a PhD in political science from NYU and completed a postdoctoral research fellowship at Vanderbilt.
©2017, O'Reilly Media, Inc.