In most academic and industry conferences, we hear about the success stories of security analytics systems, but we rarely explore what to do when these systems don’t work as intended. Ram Shankar addresses this gap, sharing lessons learned from deploying failed machine learning intrusion detection systems and, more importantly, how to fix them. Drawing on work from Microsoft Research, Azure Security Data Science, and Microsoft’s Cyber Defense Operations Center, Ram focuses on the results of unsuccessful experiments that attempted to solve three important security scenarios: detecting lateral movement in the cloud, identifying anomalous executables on the host, and automating incident response.
Ram begins by describing the problem of detecting lateral movement in the cloud, explaining how the same class of attacks seen in traditional bare-metal server settings manifests differently in the cloud and hence needs different analytics systems. Ram walks you through how lateral movement happens in the cloud, then shows how Microsoft’s machine learning models trained in the bare-metal setting failed to meet their detection efficacy standards due to architectural variations, telemetry differences, and paradigm changes. He then demonstrates a solution using an ensemble learner over Azure management logs.
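To make the ensemble idea concrete, here is a minimal sketch of majority voting over several weak detectors applied to features derived from management-log events. Everything here is illustrative: the feature names, thresholds, and detectors are invented for demonstration and are not Microsoft’s actual pipeline.

```python
# Illustrative majority-vote ensemble over hypothetical cloud
# management-log features. Feature names and thresholds are assumptions,
# not the real Azure detection logic.

def high_operation_rate(event):
    """Weak detector: unusually many management operations in one hour."""
    return event["ops_per_hour"] > 50

def rare_operation(event):
    """Weak detector: operation type rarely seen for this subscription."""
    return event["op_frequency"] < 0.01

def unusual_principal(event):
    """Weak detector: caller identity not seen in the training window."""
    return event["new_principal"]

DETECTORS = [high_operation_rate, rare_operation, unusual_principal]

def ensemble_score(event, threshold=2):
    """Flag the event if at least `threshold` weak detectors fire."""
    votes = sum(d(event) for d in DETECTORS)
    return votes >= threshold

benign = {"ops_per_hour": 12, "op_frequency": 0.4, "new_principal": False}
suspect = {"ops_per_hour": 80, "op_frequency": 0.001, "new_principal": True}

print(ensemble_score(benign))   # False
print(ensemble_score(suspect))  # True
```

The appeal of an ensemble in this setting is that no single weak signal (an odd caller, a burst of operations) is decisive on its own, which keeps false positives down on noisy management logs.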
Ram then focuses on detecting malicious executables, explaining why defenders cannot simply translate attackers’ tools, tactics, and procedures (TTPs) into machine learning models without accounting for infrastructure idiosyncrasies. He demonstrates this by showing that using LSTM deep learning architectures did not yield results. The rationale was that attackers perform a series of anomalous activities when they gain access to a box, like installing never-before-seen software, a common TTP. To Microsoft’s surprise, service engineers and security analysts also perform a series of “legitimate anomalous” activities when troubleshooting: they install weird debugging software! Ram explains why application whitelisting combined with domain knowledge-based Markov chain models can be a potent solution.
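The Markov-chain-plus-whitelist idea can be sketched as follows: learn transition probabilities between process launches from benign history, then flag sequences whose log-likelihood falls below a threshold, unless every executable is whitelisted (covering the “legitimate anomalous” troubleshooting case). The process names, threshold, and whitelist are hypothetical examples, not the talk’s actual model.

```python
# Minimal sketch: first-order Markov chain over process-launch sequences.
# All process names, thresholds, and the whitelist are invented examples.
from collections import defaultdict
import math

def train_markov(sequences, smoothing=1e-6):
    """Estimate transition probabilities from benign launch sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nexts in counts.items():
        total = sum(nexts.values())
        model[a] = {b: c / total for b, c in nexts.items()}
    return model, smoothing

def log_likelihood(model_pair, seq):
    """Sum of log transition probabilities; unseen transitions get smoothed."""
    model, smoothing = model_pair
    return sum(
        math.log(model.get(a, {}).get(b, smoothing))
        for a, b in zip(seq, seq[1:])
    )

WHITELIST = {"svchost.exe", "sqlservr.exe", "w3wp.exe", "procmon.exe"}

def is_anomalous(model_pair, seq, threshold=-10.0):
    if all(p in WHITELIST for p in seq):  # legitimate troubleshooting
        return False
    return log_likelihood(model_pair, seq) < threshold

benign_history = [
    ["svchost.exe", "w3wp.exe", "sqlservr.exe"],
    ["svchost.exe", "w3wp.exe", "sqlservr.exe"],
    ["svchost.exe", "sqlservr.exe", "w3wp.exe"],
]
model = train_markov(benign_history)
print(is_anomalous(model, ["svchost.exe", "mimikatz.exe"]))  # True
```

The whitelist check is what separates this approach from a pure anomaly detector: rare-but-sanctioned debugging tools pass, while rare and unsanctioned executables get flagged.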
Ram concludes by discussing why traditional bookkeeping during incident response doesn’t scale and provides little to no insight. He shares the results of a large-scale security incident that left Microsoft with hundreds of indicators of compromise after querying dozens of different log sources. When trying to piece together the IOCs and log sources, the company found that existing tools failed to scale given the large amount of evidence that was gathered. Ram discusses a solution using graph-based systems, which provided a distinctive advantage: Microsoft could now tell a coherent narrative. Using centrality measures and graph inference algorithms, the company gathered insights into how incident responders query the security logs and identified which fields were of most value to the security experts. This awareness helped to preemptively serve relevant context during subsequent security incidents, which in turn reduced mean time to respond.
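A toy version of the graph approach: link IOCs that co-occur in the same log source, then rank them by degree centrality to surface the pivot points of the incident narrative. The IOC values and log-source names below are made up for illustration; the talk’s actual system and algorithms are not public here.

```python
# Sketch: build a co-occurrence graph of IOCs across log sources and
# rank nodes by degree centrality. All IOCs and sources are fabricated.
from collections import defaultdict
from itertools import combinations

def build_ioc_graph(log_hits):
    """log_hits maps a log source to the set of IOCs observed in it."""
    adjacency = defaultdict(set)
    for source, iocs in log_hits.items():
        for a, b in combinations(sorted(iocs), 2):
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

def degree_centrality(adjacency):
    """Fraction of other nodes each node is directly connected to."""
    n = len(adjacency)
    return {node: len(neigh) / (n - 1) for node, neigh in adjacency.items()}

log_hits = {
    "dns_logs":      {"10.0.0.5", "evil.example.net"},
    "proxy_logs":    {"10.0.0.5", "evil.example.net"},
    "endpoint_logs": {"10.0.0.5", "badhash123"},
}
graph = build_ioc_graph(log_hits)
central = degree_centrality(graph)
print(max(central, key=central.get))  # the IOC tying the log sources together
```

The high-centrality node is the natural starting point for the incident narrative, which mirrors the abstract’s point: the graph view turns a pile of evidence into a story with identifiable hubs.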
Ram Shankar is a security data wrangler in Azure Security Data Science, where he works at the intersection of ML and security. Ram’s work at Microsoft includes a slew of patents in the large-scale intrusion detection space (called “fundamental and groundbreaking” by evaluators). In addition, he has given talks at internal conferences and received Microsoft’s Engineering Excellence award. Ram has previously spoken at data-analytics-focused conferences like Strata San Jose and the Practice of Machine Learning as well as at security-focused conferences like BlueHat, DerbyCon, FireEye Security Summit (MIRCon), and Infiltrate. Ram graduated from Carnegie Mellon University with master’s degrees in both ECE and innovation management.