
Regularization of RNNs through Bayesian networks

Vishal Hawa (Vanguard)
11:05am-11:45am Thursday, April 18, 2019
Case Studies, Machine Learning
Location: Sutton South
Secondary topics: AI case studies, Financial Services, Models and Methods

Who is this presentation for?

  • Data scientists, marketing managers, and business analysis leads

Level

Advanced

Prerequisite knowledge

  • Familiarity with common deep learning architectures (e.g., RNN-based architectures) as well as Bayesian and probabilistic theory
  • A basic understanding of channel interactions and the problem of multitouch attribution in a client's or lead's journey (useful but not required)

What you'll learn

  • Explore a technique that combines Bayesian networks and RNNs to overcome the shortcomings of RNNs on small datasets and improve sequence modeling

Description

Deep learning has shown significant promise for model performance, but DL techniques typically require large volumes of data. Without enough data, DL models can quickly become untenable, particularly when the data size falls short of the problem space, a common challenge when training RNNs. RNNs can quickly memorize and overfit when the dataset is only small to medium in size.

Bayesian techniques (particularly Bayesian networks), on the other hand, are far more robust to missing data, noise, and small sample sizes, but they lack order or sequence information. By combining the two modeling techniques, you can harness the power of RNNs without demanding large volumes of data.
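
As a rough illustration of the Bayesian side, the sketch below builds a toy two-channel network with pgmpy and queries the posterior probability of conversion given the channels a lead touched. The channel names, network structure, and probabilities are illustrative assumptions, not taken from the talk; note that the network itself carries no ordering information.

    # A toy Bayesian network over two marketing channels (assumes the pgmpy library).
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Both channels influence conversion; there is no notion of touch order here.
    bn = BayesianNetwork([("Email", "Convert"), ("Display", "Convert")])

    # Illustrative priors for a lead being touched by each channel.
    cpd_email = TabularCPD("Email", 2, [[0.7], [0.3]])
    cpd_display = TabularCPD("Display", 2, [[0.6], [0.4]])

    # P(Convert | Email, Display); columns follow the evidence state combinations.
    cpd_convert = TabularCPD(
        "Convert", 2,
        [[0.95, 0.80, 0.70, 0.40],   # P(Convert = no  | Email, Display)
         [0.05, 0.20, 0.30, 0.60]],  # P(Convert = yes | Email, Display)
        evidence=["Email", "Display"], evidence_card=[2, 2],
    )
    bn.add_cpds(cpd_email, cpd_display, cpd_convert)

    # Posterior distribution of conversion for a lead touched by both channels.
    posterior = VariableElimination(bn).query(["Convert"], evidence={"Email": 1, "Display": 1})
    print(posterior)

A posterior like this holds up under missing touches and sparse data, but it says nothing about whether email came before or after display in the journey, which is exactly the gap the RNN fills.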

Drawing on a marketing channel attribution modeling use case, Vishal Hawa exposes the shortcomings of RNNs and demonstrates how combining RNNs with Bayesian networks (probabilistic graphical models) can not only overcome them but also improve the sequence modeling behavior of RNNs.

When attributing credit to a channel, it's important to account for channel interactions, the number of impressions a channel makes on a lead, and the order in which channels are touched along the lead's journey. First, each lead's journey (path) is processed through a Bayesian network, which produces a posterior distribution; this posterior is then trained alongside an RNN architecture of stacked LSTMs and GRUs to capture how the order in which channels are touched affects the marketing campaign. Because the posterior distribution is composed of both positive and negative cases, the solution uses a regularization hyperparameter to best separate the positive and negative distributions, and the length of each sequence (the channel touches) is trimmed so that the combined architecture generalizes the impact of order and sequence on attribution.

The combined, trained architecture can then score each lead's journey and produce the odds of that lead becoming a client. This technique not only assesses the effectiveness of a path but also identifies optimal points of interception, even with missing data or a limited data size.
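
Below is a minimal sketch of how such a combined architecture might be wired up, assuming TensorFlow/Keras: the trimmed, padded journey of channel IDs is one input and the Bayesian-network posterior is a second input. The layer sizes, the L2 penalty standing in for the regularization hyperparameter, and all names are illustrative assumptions, not the speaker's actual implementation.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    MAX_TOUCHES = 10   # journeys trimmed/padded to a fixed number of channel touches
    NUM_CHANNELS = 8   # distinct marketing channels; ID 0 is reserved for padding

    # Input 1: a lead's journey as an ordered sequence of channel IDs.
    journey_in = layers.Input(shape=(MAX_TOUCHES,), dtype="int32", name="journey")
    x = layers.Embedding(NUM_CHANNELS + 1, 16, mask_zero=True)(journey_in)

    # Stacked LSTM and GRU layers capture the order in which channels were touched.
    x = layers.LSTM(32, return_sequences=True)(x)
    x = layers.GRU(16)(x)

    # Input 2: the posterior probability produced by the Bayesian network for this journey.
    posterior_in = layers.Input(shape=(1,), name="bn_posterior")

    # Combine the sequence summary with the BN posterior; an L2 penalty stands in for the
    # regularization hyperparameter used to separate positive and negative distributions.
    h = layers.Concatenate()([x, posterior_in])
    h = layers.Dense(16, activation="relu",
                     kernel_regularizer=tf.keras.regularizers.l2(1e-3))(h)
    out = layers.Dense(1, activation="sigmoid", name="conversion_odds")(h)

    model = Model([journey_in, posterior_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])

    # Scoring: a (trimmed, padded) journey plus its BN posterior maps to the
    # probability of that lead becoming a client.
    journey = np.array([[3, 1, 4, 2, 0, 0, 0, 0, 0, 0]], dtype="int32")
    bn_posterior = np.array([[0.42]], dtype="float32")
    print(model.predict([journey, bn_posterior]))

In this sketch the posterior simply enters as a feature alongside the RNN's summary of the journey; how the posterior is actually trained alongside the RNN, and how the sequence length is chosen, is what the session covers.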


Vishal Hawa

Vanguard

Vishal “Vish” Hawa is a principal data scientist at Vanguard, where he works closely with marketing managers to design attribution, propensity, and attrition models. Vish has over 15 years of experience in the retail and financial services industries. He has training in executive management from the Wharton School and holds postgraduate degrees in information sciences, statistics, and computer engineering from the Indian Statistical Institute.