Presented By O'Reilly and Cloudera
Make Data Work
March 28–29, 2016: Training
March 29–31, 2016: Conference
San Jose, CA

Atom smashing using machine learning at CERN

Siddha Ganju (NVIDIA)
2:40pm–3:20pm Thursday, 03/31/2016
Tags: science
Average rating: 3.64 (11 ratings)

Prerequisite knowledge

Attendees should have a basic knowledge of Python and machine-learning libraries in Python.


Siddha Ganju explains how CERN uses machine-learning models to predict which datasets will become popular over time. These predictions let CERN replicate the most heavily accessed datasets ahead of demand, which improves the efficiency of physics analysis in CMS. Analyzing the access patterns themselves also yields useful information about the underlying physical processes.

Reproducibility requires that any physics process can be simulated again at different times, and some processes are more popular than others, so their data needs to remain easily accessible. Users read this data from replicas stored at designated sites, but creating numerous replicas of every dataset is not feasible, so CERN must predict which datasets are likely to become popular. Siddha explains how CERN framed this as a binary classification problem, labeling each dataset popular (1 / TRUE) or unpopular (0 / FALSE), and illustrates the approach with toy data, since the actual data cannot be disclosed.
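The classification setup can be sketched in Python with scikit-learn. Everything here is synthetic: the feature names (access counts, unique users, CPU hours) are illustrative assumptions standing in for the undisclosed CMS access logs, and the labeling rule is a toy one.

```python
# Sketch of the dataset-popularity task as binary classification.
# All data is synthetic toy data; the real CMS features are not public.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical per-dataset usage features: accesses, unique users, CPU hours
X = rng.exponential(scale=[100.0, 10.0, 50.0], size=(n, 3))

# Toy labeling rule: call a dataset "popular" (1 / TRUE) if heavily accessed
y = (X[:, 0] > 120).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # fraction of test datasets classified correctly
```

In the real system the label would come from observed access history over a trailing window, and the trained model would score newly produced datasets before replication decisions are made.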

With the popularity labels defined, CERN still had to decide which machine-learning algorithm suits the task best. Three algorithms were tried: naive Bayes, stochastic gradient descent, and random forest. The models were combined into an ensemble, and each was evaluated on its true positive, true negative, false positive, and false negative counts.
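The comparison of the three named algorithms might look like the sketch below, again on synthetic data. Each model's confusion matrix is unpacked into the true/false positive/negative counts the talk describes; the specific features and labeling rule are assumptions for illustration only.

```python
# Sketch: compare naive Bayes, SGD, and random forest on toy data,
# reading TP/TN/FP/FN from each model's confusion matrix.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy "popular" label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

models = {
    "naive_bayes": GaussianNB(),
    "sgd": SGDClassifier(random_state=1),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=1),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # ravel() flattens the 2x2 confusion matrix into (tn, fp, fn, tp)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    results[name] = {"tp": tp, "tn": tn, "fp": fp, "fn": fn}
```

For a replication system, false negatives (popular datasets left under-replicated) and false positives (storage wasted on unpopular data) carry different costs, which is why the per-cell counts matter more than a single accuracy number.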

Siddha details how this process offers better data analysis, leading to parallel, real-time processing of the distributed data that is abundantly available in CMS.


Siddha Ganju


Siddha Ganju is a self-driving architect at NVIDIA. She was featured on the Forbes 30 Under 30 list, guides teams at NASA as an AI domain expert, and serves as a jury member in several information technology competitions. Previously, she developed deep learning models for resource-constrained edge devices at DeepVision. She earned her degree from Carnegie Mellon University. Her work ranges from visual question answering to generative adversarial networks to gathering insights from CERN’s petabyte-scale data, and has been published at top-tier conferences including CVPR and NeurIPS.