Put AI to Work
April 15-18, 2019
New York, NY

A reinforcement learning approach to optimizing preference on a social network

Matthew REYES (Technergetics)
4:55pm–5:35pm Thursday, April 18, 2019
Models and Methods
Location: Trianon Ballroom
Secondary topics: Media, Marketing, Advertising, Models and Methods, Reinforcement Learning
Average rating: 1.00 (1 rating)

Who is this presentation for?

  • Data scientists, marketers, and executives



Prerequisite knowledge

  • Familiarity with Monte Carlo simulation, the graphical representation of a social network, estimating parameters from data, and the notion of utility and choice models

What you'll learn

  • Gain perspective on how to view the problem of optimizing preference on a social network


The problem of influencing preference toward products on social networks has attracted considerable attention over the past couple of decades. Previous approaches have suffered from two subtle yet significant drawbacks. The first is that they model consumer decision making as best-response, deterministic maximization of some numerical utility. The second is that their decomposition of utility does not include influence by marketers for the respective companies.

Matthew Reyes casts consumer decision making within the framework of random utility. Random utility theory views so-called utility as a parametrization of observed frequencies of choice. The decomposition of utility corresponds to variables that are either observable through data collection or under the control of an external agent, in this case a company. The decomposition of utility that Matthew presents explicitly includes influence by marketers from two competing companies.
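The random-utility view can be made concrete with a small sketch. Here a consumer's choice between products A and B follows a logit rule whose utility decomposes into an intrinsic term, a social-influence term (the share of neighbors currently preferring A), and the marketing effort of each competing company. The particular decomposition, parameter names, and coefficient values below are illustrative assumptions, not the model presented in the talk.

```python
import math

def choice_probability(intrinsic, neighbor_share, marketing_a, marketing_b,
                       beta_social=1.0, beta_marketing=0.5):
    """Logit probability that a consumer chooses product A.

    Random utility parametrizes observed choice frequencies: the higher
    A's utility relative to B's, the more often A is chosen. All terms
    and coefficients here are illustrative assumptions.
    """
    # Utility of A: intrinsic preference, plus influence from neighbors
    # preferring A, plus company A's marketing effort toward this consumer.
    u_a = intrinsic + beta_social * neighbor_share + beta_marketing * marketing_a
    # Utility of B mirrors A, with the complementary neighbor share.
    u_b = -intrinsic + beta_social * (1 - neighbor_share) + beta_marketing * marketing_b
    # Two-alternative logit choice probability.
    return 1.0 / (1.0 + math.exp(u_b - u_a))
```

With a symmetric consumer (no intrinsic preference, evenly split neighbors, equal marketing) the probability is exactly 0.5; tilting any observable or controlled variable toward A raises it, which is what makes the parameters estimable from choice data.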

Incorporating the marketer into the model of consumer decision making allows a company to evaluate the effect of different marketing allocations on the evolution of preferences on the network. The combination of a random choice model and the inclusion of marketers into the model allow this important problem to be cast in the reinforcement learning paradigm. Matthew outlines a simplified scenario illustrating the steps in a company’s allocation decision, from learning parameters from data to evaluating the consequences of different marketing allocations.
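The evaluation step described above can be sketched as a Monte Carlo rollout: given a network and a candidate marketing allocation for each company, repeatedly let a random consumer revise their choice under a logit rule, then report the resulting share preferring A. The graph encoding, update rule, and all numbers below are illustrative assumptions for a toy scenario, not the talk's actual procedure.

```python
import math
import random

def simulate_share(adjacency, alloc_a, alloc_b, steps=50, seed=0,
                   beta_social=1.0, beta_marketing=0.5):
    """Monte Carlo rollout of preference dynamics on a small network.

    adjacency: list of neighbor-index lists (one per consumer).
    alloc_a, alloc_b: per-consumer marketing effort from each company.
    Returns the final fraction of consumers preferring product A.
    All parameters and the update rule are illustrative assumptions.
    """
    rng = random.Random(seed)
    n = len(adjacency)
    # Start from uniformly random preferences.
    prefers_a = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        nbrs = adjacency[i]
        share = sum(prefers_a[j] for j in nbrs) / len(nbrs) if nbrs else 0.5
        # Same decomposition of utility: social influence plus marketing.
        u_a = beta_social * share + beta_marketing * alloc_a[i]
        u_b = beta_social * (1 - share) + beta_marketing * alloc_b[i]
        p_a = 1.0 / (1.0 + math.exp(u_b - u_a))
        prefers_a[i] = rng.random() < p_a
    return sum(prefers_a) / n
```

Averaging such rollouts over many seeds estimates the expected market share under each candidate allocation, which is the evaluation signal a reinforcement learning agent would use when comparing allocation decisions.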


Matthew REYES


Matthew Reyes is a consultant and an independent researcher developing a reinforcement learning-based approach to influence maximization on social networks. Previously, he spent more than four years at MIT's Lincoln Laboratory. He holds a PhD and MS in electrical engineering systems from the University of Michigan, Ann Arbor, and an MS and BS in math from Wichita State University.

Matthew Reyes is a contractor at Technergetics working on deep learning and FPGAs. He earned his B.S. and M.S. in mathematics at Wichita State University in Wichita, KS. He then earned his M.S. and Ph.D. in EE:Systems at the University of Michigan in Ann Arbor, MI, with a thesis on compression of Markov random fields. He worked for four years at MIT Lincoln Laboratory on sensor calibration. From early 2015 to early 2019, Matt conducted independent research on compression, belief propagation, and interpolation of Markov fields, and he is currently developing a model of social decision-making based on random utility.