Presented By O'Reilly and Cloudera
Make Data Work
22–23 May 2017: Training
23–25 May 2017: Tutorials & Conference
London, UK

Multinode restricted Boltzmann machines for big data

11:15–11:55 Thursday, 25 May 2017
Data science and advanced analytics
Location: Hall S21/23 (A)
Secondary topics: Deep learning
Level: Intermediate
Average rating: 4.00 (2 ratings)

Who is this presentation for?

  • Data scientists and machine-learning enthusiasts

Prerequisite knowledge

  • A basic understanding of Hadoop and Spark

What you'll learn

  • Explore restricted Boltzmann machines, their typical use cases, and the challenges of running them at scale


In the age of big data, there has been unprecedented growth in the amount of data available for analysis, but handling unstructured and semistructured data is a challenging task that prompts organizations to discard a substantial amount of data.

Artificial neural networks (ANNs) have been successfully used for imposing structure over unstructured data, by means of unsupervised feature extraction and nonlinear pattern detection. Restricted Boltzmann machines (RBMs), for example, have been shown to have a wide range of applications in this context: they can be used as generative models for dimensionality reduction, classification, collaborative filtering, extraction of semantic document representation, and more. RBMs are also used as building blocks for the multilayer learning architecture of deep belief networks.
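As background (this is the standard RBM formulation, not anything specific to the implementation discussed in the talk): an RBM defines a joint distribution over binary visible units v and hidden units h via an energy function, with weights W and biases a, b as the parameters to be estimated:

```latex
E(\mathbf{v}, \mathbf{h}) = -\mathbf{a}^\top \mathbf{v} - \mathbf{b}^\top \mathbf{h} - \mathbf{v}^\top W \mathbf{h},
\qquad
P(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z}
```

Here Z is the partition function. The "restricted" structure (no connections within a layer) makes the conditionals P(h | v) and P(v | h) factorize, which is what makes Gibbs-sampling-based training tractable.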

Training RBMs against a big dataset, however, is problematic. When operating with millions or even billions of parameters, the parameter estimation process for a conventional, nonparallelized RBM can take weeks. In addition, the constraint of fitting the model on a single machine places a hard limit on scalability.
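To make the cost of parameter estimation concrete, here is a minimal single-machine sketch of the usual training procedure, one-step contrastive divergence (CD-1), in NumPy. The data, layer sizes, and hyperparameters are illustrative, not taken from the talk; every update below touches the full weight matrix, which is why the process scales so poorly as parameters grow.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary dataset: 100 examples, 8 visible units (illustrative only)
V = rng.integers(0, 2, size=(100, 8)).astype(float)

n_visible, n_hidden = 8, 4
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # weights
a = np.zeros(n_visible)                               # visible bias
b = np.zeros(n_hidden)                                # hidden bias
lr = 0.1

for epoch in range(20):
    # Positive phase: hidden activations given the data
    ph = sigmoid(V @ W + b)
    h = (rng.random(ph.shape) < ph).astype(float)
    # Negative phase (CD-1): one Gibbs step back to the visibles
    pv = sigmoid(h @ W.T + a)
    v_neg = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v_neg @ W + b)
    # Update: data correlations minus model (reconstruction) correlations
    W += lr * (V.T @ ph - v_neg.T @ ph_neg) / len(V)
    a += lr * (V - v_neg).mean(axis=0)
    b += lr * (ph - ph_neg).mean(axis=0)
```

Each epoch is dominated by the dense matrix products `V @ W` and their transposed counterparts; at realistic scale those products are exactly what a distributed platform such as SystemML parallelizes.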

Numerous attempts have been made to overcome these limitations, most of them relying on GPU computation. Studies have shown that this approach can cut the training time for an RBM-based deep belief network from several weeks to a single day. GPU-based training, however, presents its own challenges: GPUs cap the memory available for the computation, which limits model size; stacking multiple GPUs together is inefficient due to communication overhead and increased cost; and memory transfer times and thread synchronization impose further limitations.

Nikolay Manchev explores an implementation of a CPU-based, parallelized version of the restricted Boltzmann machine created as a collaboration between IBM and City University London. The research team created a custom implementation of a restricted Boltzmann machine that runs on top of Apache SystemML, a declarative large-scale machine-learning platform, and carried out a number of tests with various datasets, using RBMs as feature extractors and feeding the outputs to different classification algorithms (support vector machines, decision trees, multinomial logistic regression, etc.). Nikolay offers an overview of the research and the current state of this stochastic ANN model in the context of big data, as well as future plans. Along the way, he also discusses how SystemML alleviates certain big data challenges (e.g., using cost-based optimization for distributed matrix operations) and why the team chose it as a foundation for its machine-learning problem.
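The RBM-as-feature-extractor setup described above can be illustrated with a small single-machine analogue in scikit-learn. This is not the SystemML implementation from the talk, and the dataset, layer size, and classifier choice are assumptions for illustration only: an RBM learns features unsupervised, and those features are fed to a downstream classifier (here, multinomial logistic regression, one of the classifiers mentioned).

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

# Small image dataset; scale pixel intensities to [0, 1] for Bernoulli units
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBM extracts features unsupervised; the classifier consumes them
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=15, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

Swapping the final pipeline stage for a support vector machine or decision tree reproduces, in miniature, the kind of comparison across classification algorithms the research team carried out.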


Nikolay Manchev


Nikolay Manchev is a data scientist on IBM’s Big Data technical team. He specializes in machine learning, data science, and big data. He is a speaker, blogger, and the organizer of the London Machine Learning Study Group meetup. Nikolay holds an MSc in software technologies and an MSc in data science, both from City University London.