Don’t beat the market; beat the bots: Adversarial networks in finance
Who is this presentation for?
- Product managers, ML engineers, and anyone else who wants their production models to be resilient
One of the amazing things about AI is how it can simultaneously appear superior and inferior to human intelligence: a self-driving car can react instantly to an accident ahead of it yet be confused by a pedestrian walking a bicycle across the street. The ML community has undertaken a massive effort to peer into the black box and understand why and how AI models make the decisions they do. Unfortunately, the human brain hits a wall when it tries to comprehend a billion-parameter function approximation. The only real candidate for understanding how an AI works is another AI.
Garrett Lander and Al Kari use a sample of a financial market to build a simulation in which the players (a mix of erratic human traders and predictable automated traders) attempt to predict the market's activity by buying and selling their holdings. The automated traders are trained on historical data for the holdings, while the humans trade reactively, driven by their biases and by the successes and failures of their previous actions. Then, using TensorFlow 2.0's new reinforcement learning tools, they construct an adversarial network, and the fun begins.
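The setup above can be sketched in a few lines. This is a minimal, hypothetical toy (the names `bot_order`, `human_order`, and `step_market` and the linear price-impact rule are assumptions, not the authors' actual simulation): the bots follow recent momentum in the historical price series, the humans trade mostly at random with a nudge from their last profit or loss, and net order flow moves the price.

```python
import random

def bot_order(price_history, window=5):
    """Predictable automated trader (hypothetical): follows recent momentum."""
    if len(price_history) < window:
        return 0  # not enough history yet
    trend = price_history[-1] - price_history[-window]
    return 1 if trend > 0 else -1  # +1 = buy one unit, -1 = sell one unit

def human_order(last_pnl, bias=0.1):
    """Erratic human trader (hypothetical): mostly random, nudged by recent P&L."""
    p_buy = 0.5 + bias * (1 if last_pnl > 0 else -1)
    return 1 if random.random() < p_buy else -1

def step_market(price, orders, impact=0.01):
    """Crude linear price-impact model: net order flow moves the price."""
    return price * (1 + impact * sum(orders))

random.seed(0)
prices = [100.0]
pnl = 0.0
for _ in range(50):
    orders = [bot_order(prices)] * 3 + [human_order(pnl) for _ in range(5)]
    prices.append(step_market(prices[-1], orders))
```

An adversarial agent dropped into a loop like this would submit its own orders each step and learn, from the resulting price changes, how the other players react to its trades.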
The adversarial network, with only limited capital, learns how to exploit the patterns of the other players to manipulate the market either for gain (maximizing its own holdings) or anarchy (maximizing market volatility). Not only will you watch this unfold through a live visualization; you'll also gain firsthand experience with the newest imperative in machine learning: F1, accuracy, root-mean-square error (RMSE), and the like are meaningless if your model isn't robust to exploitation of its own pattern recognition.
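The two objectives named above translate directly into reward functions for a reinforcement-learning agent. A minimal sketch, assuming mark-to-market portfolio value for the "gain" objective and realized volatility of recent returns for the "anarchy" objective (the function names and the volatility window are illustrative, not from the talk):

```python
def gain_reward(portfolio_before, portfolio_after):
    """'Gain' objective: reward the change in mark-to-market portfolio value."""
    return portfolio_after - portfolio_before

def anarchy_reward(price_history, window=10):
    """'Anarchy' objective: reward the realized volatility of recent returns."""
    if len(price_history) < window + 1:
        return 0.0
    # Simple returns over the last `window` price steps.
    returns = [(b - a) / a
               for a, b in zip(price_history[-window - 1:-1],
                               price_history[-window:])]
    mean = sum(returns) / len(returns)
    # Population standard deviation of returns = realized volatility.
    return (sum((r - mean) ** 2 for r in returns) / len(returns)) ** 0.5
```

Everything else about the agent can stay the same; swapping one reward function for the other changes whether it learns to quietly accumulate wealth or to whipsaw the market.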
Prerequisites
- General knowledge of deep learning (neural networks and the basic tenets of how they're able to approximate learning)
What you'll learn
- Understand that AI is not only fallible but fallible in ways humans are not; that the pattern recognition that allows a model to function can be used against it; and that practitioners need to add resilience (how easily a model can be deliberately fooled) to their model evaluation metrics
Garrett Lander is a machine learning architect at Manceps, an ML consulting agency based out of Portland, Oregon. Garrett works with clients ranging from those taking their first steps into automation to seasoned ML practitioners looking to optimize their production models. Garrett is especially interested in the growing areas of AI pen-tests and ethicality, as well as the effort to build models that improve on human decision making without inheriting its shortcomings.
Al Kari is CEO and principal consultant at Manceps, where he leads the company’s mission to augment human capabilities with machine intelligence, with a focus on blending machine learning and artificial intelligence with cloud computing and big data technologies. Al is a Google Developer Expert (GDE) in machine learning, organizer of the TensorFlow-Northwest and OpenStack Northwest user groups, and a strong advocate for open source AI and cloud technologies. Previously, Al was a global cloud evangelist at Microsoft, where he helped top-tier ISV partners onboard onto the Microsoft Azure platform. Al started his career in the mid-’90s as a software architect by founding Softwarehouse overseas before moving to the United States. He later held product and services leadership roles at Dell, where he helped build the company’s virtualization and cloud computing services portfolio; cofounded DetaCloud, a boutique OpenStack engineering powerhouse; and was a principal cloud architect at Red Hat, where he was responsible for helping customers build enterprise-ready cloud infrastructure. A frequent speaker at major industry conventions, Al has been an outspoken advocate for building the future of open artificial intelligence and cloud technologies in support of academic, industrial, and scientific development. He is a standing member of the Cloud Advisory Council, the Linux Professional Institute, and the OpenStack Foundation.
Session summary and GitHub open-source code links are available at https://www.manceps.com/articles/experiments/beat-the-bots