A deep learning model to detect coordinated content abuse
Who is this presentation for?
- Data scientists, machine learning engineers and researchers, and people with experience in the prevention of online content abuse
The problem of online fraud is growing worse as more industries move their services online; this includes financial service providers, insurance companies, online retailers, social networks, and news aggregators. Fraudsters generate and control fake accounts that they use to spread false information, manipulate product reviews, sell nonexistent products, abuse promotions, spend money from stolen credit cards, or file fake insurance claims. Attackers typically automate the generation of content for the accounts they control in order to scale their operations. This creates a pattern: the introductions, messages, and names or nicknames of coordinated malicious accounts are more similar than what’s generally observed with unrelated users.
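The similarity pattern described above can be illustrated with a simple pairwise comparison of account names. This is only a sketch: the account names are hypothetical, and a production system like the one described in this talk learns representations rather than relying on edit-distance ratios.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similar_pairs(names, threshold=0.8):
    """Return pairs of account names whose character-level similarity
    exceeds the threshold, a naive proxy for coordinated generation."""
    flagged = []
    for a, b in combinations(names, 2):
        ratio = SequenceMatcher(None, a, b).ratio()
        if ratio >= threshold:
            flagged.append((a, b, round(ratio, 2)))
    return flagged

# Hypothetical data: the "kara_shop_00x" names follow a template,
# as automated account generators often produce.
names = ["kara_shop_001", "kara_shop_002", "kara_shop_003",
         "jtravels", "mia.rivera"]
print(similar_pairs(names))
```

Templated names cluster tightly under any reasonable similarity measure, while organic names do not; this is the correlation signal that coordinated-abuse detection exploits at scale.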
Nicola Corradi digs into a novel deep learning architecture that DataVisor created to train models to identify previously unknown suspicious patterns across multiple industries (e.g., ecommerce, financial, news aggregators), alphabets (e.g., Latin, Chinese), languages (e.g., English, Spanish, Turkish), and content types (e.g., messages, names, emails). DataVisor designed the architecture so that the models would understand and use the meaning of nonverbal characters like emoji and emoticons, including rare examples.
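Handling emoji and other nonverbal characters starts at the input layer: each Unicode character, not each word, becomes a token. The sketch below shows this preprocessing step only; the vocabulary, the `<unk>` convention, and the toy corpus are illustrative assumptions, not DataVisor's actual pipeline.

```python
def build_vocab(corpus):
    """Assign an integer id to every character seen in the corpus.
    Ids 0 and 1 are reserved for padding and unknown characters."""
    chars = sorted({ch for text in corpus for ch in text})
    return {ch: i + 2 for i, ch in enumerate(chars)}

def encode(text, vocab):
    # Map each Unicode codepoint (including emoji) to its id; unseen
    # characters share the <unk> id so rare symbols still map somewhere.
    # (Multi-codepoint emoji, e.g. ZWJ sequences, would need extra care.)
    UNK = 1
    return [vocab.get(ch, UNK) for ch in text]

corpus = ["great product 😍", "great product 😍😍", "muy bueno 👍"]
vocab = build_vocab(corpus)
print(encode("great 🤖", vocab))  # 🤖 is unseen, so it maps to the unk id
```

Operating on characters rather than words lets a model generalize across alphabets and languages, and lets it learn a meaning for emoji and emoticons from context instead of discarding them as noise.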
Crucially, DataVisor was able to train the model on labelled data gathered from several established use cases and deploy it on new platforms and industries where labelled data isn't available. The model is already deployed with several production clients, where it has detected sophisticated suspicious patterns that manually created features and other rule-based methods missed, flagging malicious accounts before they could harm the platform or its users.
Prerequisite knowledge
- Familiarity with deep learning and an interest in the prevention of online content abuse
What you'll learn
- Learn how large-scale fraud attacks are detected and prevented by revealing correlated patterns across accounts and actions that indicate coordinated malicious activity
- Discover how a deep learning architecture can expose the suspicious patterns behind content abuse and fraudulent user-generated content
Nicola Corradi is a research scientist at DataVisor, where he draws on his extensive experience with neural networks to design and train deep learning models that recognize malicious patterns in user behavior. He earned a PhD in cognitive science at the University of Padua and did postdoctoral research at Cornell in computational neuroscience and computer vision, focusing on integrating computational models of neurons with neural networks.