The honeymoon era of data science is ending, and accountability is coming. Not content to wait for results that may or may not arrive, successful data science leaders deliver measurable impact on an increasing share of an enterprise’s KPIs. Joshua Poduska and Patrick Harrison detail how leading organizations have taken a holistic approach to people, process, and technology to build a sustainable competitive advantage.
Outline:
How to select the right data science project: Many organizations start with the data and look for something “interesting” rather than building a deep understanding of the existing business process and then pinpointing the decision point that can be augmented or automated.
How to organize data science within the enterprise: There are trade-offs between centralized and federated models; alternatively, you could use a hybrid approach with something like a center of excellence.
Why rapid prototyping and design sprints aren’t just for software developers: Leading organizations put prototyping ahead of the data collection process to ensure that stakeholder feedback is captured, increasing the probability of adoption. Some organizations even create synthetic data and naive baseline models to show how the model would impact existing business processes.
Why order of magnitude ROI math should be on every hiring checklist: The ability to estimate the potential business impact of a change in a statistical measure is one of the best predictors of success for a data science team.
The difference between “pure research” and “applied templates”: 80% of data scientists think they’re doing the former, but realistically, the vast majority are applying well-known templates to novel business cases. Knowing which is which and how to manage them differently improves morale and output.
Defining a stakeholder-centric project management process: The most common failure mode is when data science delivers results that are either too late or don’t fit into how the business works today, so results gather dust. Share insights early and often.
Building for the scale that really matters: Many organizations optimize for scale of data but ultimately are overwhelmed by the scale of the growing data science team and its business stakeholders. Team throughput grinds to a crawl as information loss compounds across the interactions in a single project, let alone across a portfolio of hundreds or thousands of projects.
Why time to iterate is the most important metric: Many organizations consider model deployment to be a moonshot, when it really should be laps around a racetrack. Keeping the obstacles to testing real results minimal (without sacrificing rigorous review and checks) is another strong predictor of data science success. Facebook and Google deploy new models in minutes, whereas large financial services companies can take 18 months.
Why delivered is not done: Many organizations have such a hard time deploying a model into production that the data scientists breathe a sigh of relief and move on to the next project. Yet this neglects the critical process of monitoring to ensure the model performs as expected and is used appropriately.
Measure everything, including yourself: Ironically, data scientists live in the world of measurement yet rarely turn that lens on themselves. Tracking patterns in aggregate workflows helps create modular templates and guides investment in internal tooling and people to alleviate bottlenecks.
Risk and change management aren’t just for consultants: Data science projects don’t usually fail because of the math but rather because of the humans who use the math. Establish training, provide predetermined feedback channels, and measure usage and engagement to ensure success.
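The "order of magnitude ROI math" the outline calls for can be sketched as a back-of-envelope calculation. The scenario and all figures below are illustrative assumptions (a hypothetical churn model), not numbers from the talk:

```python
# Hypothetical back-of-envelope ROI estimate for a churn-reduction model.
# Every figure here is an illustrative assumption, not data from the talk.

def order_of_magnitude_roi(customers, annual_value_per_customer,
                           churn_reduction, project_cost):
    """Estimate the ROI multiple from an absolute reduction in churn rate
    (e.g. churn_reduction=0.01 means one percentage point fewer churners)."""
    customers_retained = customers * churn_reduction
    revenue_saved = customers_retained * annual_value_per_customer
    return revenue_saved / project_cost  # return per dollar spent

# Example: 1M customers worth $50/year each; the model is expected to cut
# churn by one percentage point; the project costs $500k to build and run.
roi = order_of_magnitude_roi(
    customers=1_000_000,
    annual_value_per_customer=50,
    churn_reduction=0.01,
    project_cost=500_000,
)
print(f"Roughly {roi:.0f}x return")  # ~1x: marginal; 10x would be compelling
```

The point of the exercise is the order of magnitude, not the decimals: a candidate who can frame "a one-point change in this metric is worth about $X against a cost of about $Y" before any modeling starts is the kind of hire the outline argues for.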
Josh Poduska is the chief data scientist at Domino Data Lab. He has 17 years of experience in analytics. His work experience includes leading the statistical practice at one of Intel’s largest manufacturing sites, working on smarter cities data science projects with IBM, and leading data science teams and strategy with several big data software companies. Josh holds a master’s degree in applied statistics from Cornell University.
Patrick Harrison started and leads the data science team at S&P Global Market Intelligence (S&P MI), a business and financial intelligence firm and data provider. The team employs a wide variety of data science tools and techniques, including machine learning, natural language processing, recommender systems, and graph analytics, among others. Patrick is the coauthor of the forthcoming book Deep Learning with Text from O'Reilly Media, along with Matthew Honnibal, creator of spaCy, the industrial-strength natural language processing software library, and is a founding organizer of a machine learning conference in Charlottesville, Virginia. He is actively involved in building both regional and global data science communities. Patrick holds a BA in economics and an MS in systems engineering, both from the University of Virginia. His graduate research focused on complex systems and agent-based modeling.