Predictive maintenance is a technique for predicting when an in-service machine will fail so that maintenance can be planned in advance. In a broader sense, it covers a variety of topics, including but not limited to failure prediction, failure diagnosis, failure type classification, and recommendation of maintenance actions after failure. With the growing demand for Internet of Things (IoT) applications and the maturity of supporting technologies, predictive maintenance is gaining increasing attention in the manufacturing industry, where maintenance may target either an individual asset of interest or a complex manufacturing process. This talk introduces the landscape and challenges of predictive maintenance applications in the manufacturing industry, covering the range of problems addressed, the predictive models applicable given the data available, and guidance on what data to collect in order to perform predictive maintenance tasks.
In the context of predictive maintenance applications, we focus on the big data analytics aspects. We review predictive maintenance problems from two perspectives: that of the traditional reliability-centered maintenance field, and that of IoT applications. We emphasize bridging the data-driven and problem-driven approaches by articulating what types of data are required for different predictive maintenance applications, and we aim to draw IoT industry leaders' attention to the data acquisition necessary before effective predictive maintenance can be conducted.
This talk is targeted at data scientists, students, researchers, and non-technical professionals interested in data-driven predictive maintenance applications in the manufacturing industry. The audience will gain hands-on experience formulating a predictive maintenance problem as three different machine learning models (regression, binary classification, and multi-class classification) through a real-world example. This is illustrated with a step-by-step procedure of data input, data preprocessing, data labeling, and feature engineering to prepare the training and testing data from the raw data. Finally, we show how the resulting models can be trained with different algorithms and compared.
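To make the three formulations concrete, here is a minimal sketch of how the same raw run-to-failure data can be labeled three ways; the toy dataset, column names, and the 2-cycle warning window are assumptions for illustration, not the talk's actual example.

```python
import pandas as pd

# Hypothetical run-to-failure data: each machine is observed from cycle 1
# until the cycle at which it fails (its last recorded cycle).
raw = pd.DataFrame({
    "machine_id": [1, 1, 1, 1, 1, 2, 2, 2, 2],
    "cycle":      [1, 2, 3, 4, 5, 1, 2, 3, 4],
    "sensor":     [0.1, 0.2, 0.4, 0.7, 0.9, 0.2, 0.3, 0.6, 0.8],
})

# 1. Regression label: remaining useful life (RUL), i.e. cycles until failure.
raw["rul"] = raw.groupby("machine_id")["cycle"].transform("max") - raw["cycle"]

# 2. Binary classification label: will the machine fail within the next W cycles?
W = 2  # assumed warning window
raw["fail_soon"] = (raw["rul"] <= W).astype(int)

# 3. Multi-class classification label: bucket RUL into coarse time-to-failure bands.
raw["rul_band"] = pd.cut(raw["rul"], bins=[-1, 1, 3, float("inf")],
                         labels=["0-1 cycles", "2-3 cycles", "4+ cycles"])

print(raw)
```

Each labeled frame can then feed a matching scikit-learn or similar model (a regressor for `rul`, classifiers for `fail_soon` and `rul_band`), which is what enables the side-by-side comparison of algorithms the abstract describes.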
Danielle Dean is a principal data scientist lead in AzureCAT within the Cloud AI Platform Division at Microsoft, where she leads an international team of data scientists and engineers to build predictive analytics and machine learning solutions with external companies utilizing Microsoft’s Cloud AI platform. Previously, she was a data scientist at Nokia, where she produced business value and insights from big data through data mining and statistical modeling on data-driven projects that impacted a range of businesses, products, and initiatives. Danielle holds a PhD in quantitative psychology from the University of North Carolina at Chapel Hill, where she studied the application of multilevel event history models to understand the timing and processes leading to events between dyads within social networks.
©2015, O'Reilly Media, Inc. • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.