What if we could predict when and where crimes will be committed? Crimes in Chicago, a publicly published dataset of reported incidents of crime in Chicago since 2001, contains roughly 6.4 million rows; each row records the crime type, geographical location, and the date and time the crime occurred. This extensive data source is valuable and can form the basis for a machine learning model. One direct and immediate application is predicting crime counts for specific crime types, which would help the police decide which areas and times need additional resources, with a concrete impact on citizens' safety. However, previous work on this dataset has been mostly descriptive: high-level explorations of the current state and counts (i.e., how many crimes have been committed up to a given point in time), rather than work focused on predictive models.
Or Herman-Saffar and Ran Taig offer an overview of Crimes in Chicago and explain how to use this data to explore committed crimes, find interesting trends, and make predictions for the future. Or and Ran conclude by exploring the development of a machine learning model that predicts crime counts for a specific crime type on a given day in a specific district within Chicago, and they cover lessons and insights learned.
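The talk does not publish the speakers' code, but the task they describe (daily crime counts per district and crime type) starts from a standard aggregation step. The sketch below is a hypothetical minimal version, assuming a pandas DataFrame with the dataset's actual column names (Date, Primary Type, District) and using a toy stand-in for the real 6.4-million-row CSV. The "model" here is only a naive day-of-week baseline, not the speakers' method.

```python
import pandas as pd

# Hypothetical toy rows standing in for the real Crimes in Chicago export;
# the actual CSV has columns including Date, Primary Type, and District.
raw = pd.DataFrame({
    "Date": pd.to_datetime([
        "2017-01-02 10:00", "2017-01-02 14:30", "2017-01-09 09:15",
        "2017-01-09 23:40", "2017-01-03 11:05", "2017-01-10 16:20",
    ]),
    "Primary Type": ["THEFT", "THEFT", "THEFT",
                     "BATTERY", "THEFT", "THEFT"],
    "District": [1, 1, 1, 1, 12, 12],
})

# Step 1: aggregate raw incidents into daily counts per (district, crime type).
raw["day"] = raw["Date"].dt.floor("D")
counts = (raw.groupby(["District", "Primary Type", "day"])
             .size()
             .reset_index(name="count"))

# Step 2: a naive baseline -- predict the historical mean count for each
# (district, crime type, day-of-week) cell; a real model would replace this,
# and would also need to fill in zero-count days, which this sketch omits.
counts["dow"] = counts["day"].dt.dayofweek
baseline = (counts.groupby(["District", "Primary Type", "dow"])["count"]
                  .mean())

# Predicted THEFT count in district 1 on a Monday (dow=0):
pred = baseline.loc[(1, "THEFT", 0)]
print(pred)  # mean of the two observed Mondays [2, 1] = 1.5
```

Note that grouping only over observed rows silently treats days with zero reported crimes as missing rather than as zeros; any serious count model would first reindex onto the full (district, type, day) grid.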
Or Herman-Saffar is a data scientist at Dell. She holds an MSc in biomedical engineering, where her research focused on breast cancer detection using breath signals and machine learning algorithms, and a BS in biomedical engineering specializing in signal processing, both from Ben-Gurion University, Israel.
Ran Taig is a senior data scientist at Dell EMC, where he leads data science projects, especially in the domain of hardware failure prediction, and plays a key role in designing the team's engagement models and work structure, serving as a consultant to EMC's business data lake team. Ran is also responsible for the team's academic relations and continues to teach theory courses for CS students. Previously, Ran taught the Design of Algorithms and other CS theory courses at Ben-Gurion University. He holds a PhD in computer science from Ben-Gurion University, Israel, where he specialized in artificial intelligence; his research mainly focused on automated planning.
Comments on this page are now closed.
(Reiterating the point I raised in person at the talk, for visitors of this page, because I think it's really important for people doing similar work to keep in mind.)
Crime prediction has very serious real-world impact, and it is imperative to keep in mind the real-world biases and consequences at play. One cannot treat this as simply an exercise in training and evaluating an ML model, especially if the intention is to give the model to policymakers or law enforcement.
For example, white people use drugs at the same frequency as non-white people, but people prosecuted for drug crimes are overwhelmingly non-white, because police intentionally look for such crimes (e.g., via random traffic stops) in non-white neighborhoods but rarely do the same in white neighborhoods. A model trained on this crime dataset would predict that police should hunt in non-white neighborhoods even more, and in white neighborhoods even less, thus further perpetuating the racial disparity in prosecution for drug crimes.
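The feedback loop described above can be made concrete with a deliberately simplified, hypothetical simulation (all numbers here are invented for illustration): two neighborhoods have the same true offense rate, but patrols are allocated in proportion to past *recorded* crimes, and each patrol discovers offenses at that same true rate. The initial disparity in the records never washes out, even though the underlying behavior is identical.

```python
# Hypothetical two-neighborhood toy model of the enforcement feedback loop.
TRUE_RATE = 0.3          # identical true offense probability in both places
recorded = [10.0, 1.0]   # biased starting point: neighborhood 0 over-recorded
PATROLS_PER_DAY = 100

for day in range(1000):
    total = recorded[0] + recorded[1]
    for hood in (0, 1):
        # patrols go where past records point
        patrols = PATROLS_PER_DAY * recorded[hood] / total
        # expected new records found by those patrols
        recorded[hood] += patrols * TRUE_RATE

share = recorded[0] / (recorded[0] + recorded[1])
print(round(share, 3))  # stays at 10/11 ~ 0.909: the disparity persists
```

Because allocation is proportional and discovery is linear, both records grow by the same factor each day, so the initial 10:1 disparity is locked in forever despite equal true rates; with any winner-take-more allocation rule it would actively widen instead.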
Addressing this is an active research area called "machine learning fairness." The paper at http://onlinelibrary.wiley.com/doi/10.1111/j.1740-9713.2016.00960.x/full in fact concerns exactly the topic of this talk. There is a class about it (https://fairmlclass.github.io/), a series of conferences (https://www.fatml.org/), and so on.