A deep learning model to detect coordinated fraud using patterns in user content
Who is this presentation for?

- Data scientists or analysts
Online fraud is flourishing as online services extend to more industries, including financial service providers, insurance companies, online retailers, social networks, and news aggregators. Fraudsters generate and control fake accounts that they use to spread false information, manipulate product reviews, sell nonexistent products, abuse promotions, spend money from stolen credit cards, or file fake insurance claims. To scale, attackers typically automate the generation of content for the accounts they control. This creates a pattern: the introductions, messages, names, or nicknames of coordinated malicious accounts are more similar to one another than would be expected for unrelated users.
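The "unusually similar content" signal described above can be illustrated with a simple baseline. The sketch below (not DataVisor's method; the nicknames and the 0.6 threshold are illustrative assumptions) compares character trigram sets of nicknames and flags pairs whose Jaccard similarity is higher than unrelated users would typically show:

```python
from itertools import combinations

def char_ngrams(s, n=3):
    """Set of character n-grams; ^ and $ mark string boundaries."""
    s = f"^{s.lower()}$"
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between the trigram sets of two strings."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    return len(ga & gb) / len(ga | gb)

# Hypothetical data: three templated (coordinated) nicknames, two organic ones.
names = ["lucky_winner01", "lucky_winner02", "lucky_winner03",
         "jane.doe", "mountain_hiker"]

# Flag pairs that are far more similar than unrelated nicknames usually are.
suspicious = [(a, b) for a, b in combinations(names, 2) if jaccard(a, b) > 0.6]
```

Handcrafted baselines like this catch only the patterns someone thought to encode, which is the gap the deep learning approach in this talk is meant to close.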
Nicola Corradi explains a novel deep learning architecture that was designed by DataVisor to train models to identify previously unknown suspicious patterns across multiple industries (e.g., ecommerce, financial, news aggregators), alphabets (e.g., Latin, Chinese), languages (e.g., English, Spanish, Turkish), and content types (e.g., short messages, but also full names and emails). The company designed its architecture so the models would understand and use the meaning of nonverbal characters like emoji and emoticons, including rare examples.
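The talk does not publish the architecture itself, but one ingredient of working across alphabets and emoji can be sketched: tokenize at the Unicode codepoint level so that Latin letters, Chinese characters, and emoji all become ordinary vocabulary entries. A minimal, assumed illustration (the `encode` helper and its growing vocabulary are hypothetical, not DataVisor's code):

```python
def encode(text, vocab):
    """Map each Unicode codepoint (letters, CJK, emoji alike) to an
    integer id, growing the vocabulary on the fly. A real pipeline
    would freeze the vocabulary after training and map unseen
    characters to a reserved id."""
    for ch in text:
        vocab.setdefault(ch, len(vocab) + 1)
    return [vocab[ch] for ch in text]

vocab = {}
a = encode("great👍", vocab)    # emoji is one codepoint, one token
b = encode("great👍👍", vocab)  # reuses the same ids for repeated text
```

Because rare emoji and emoticons get ids like any other character, a model trained on such sequences can learn their meaning from context rather than discarding them as noise.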
Crucially, DataVisor was able to train the model on labelled data gathered from several established use cases and then deploy it on new platforms and industries where labelled data isn't available. The model is already running with several production clients, where it has detected sophisticated suspicious patterns that handcrafted features and other rule-based methods had missed, before the malicious accounts could affect the platform or other users.
Prerequisite knowledge

- General knowledge of machine learning and deep learning
What you'll learn
- Learn what patterns can be observed in coordinated attacks
- Discover how a deep learning architecture can be used to identify novel suspicious patterns in user-generated content
Nicola Corradi is a research scientist at DataVisor, where he uses his extensive experience with neural networks to design and train deep learning models that recognize malicious patterns in user behaviour. He earned a PhD in cognitive science at the University of Padua and completed postdoctoral work at Cornell in computational neuroscience and computer vision, focusing on integrating computational models of neurons with neural networks.