11:05am–11:45am Wednesday, April 17, 2019
As BuzzFeed’s content production and social networks grow, curation becomes increasingly difficult. The company first built publishing tools that let people work more efficiently, then built artificial intelligence tools that let people work more intelligently. Join Lucy Wang and Swara Kantaria to learn more about this evolution.
11:05am–11:45am Wednesday, April 17, 2019
Pamela Vagata explains how Stripe has applied deep learning techniques to predict fraud from raw behavioral data. Join in to learn how the deep learning model outperforms a feature-engineered model both on predictive performance and in the effort spent on data engineering, model construction, tuning, and maintenance.
1:00pm–1:40pm Wednesday, April 17, 2019
Twitter is a company with massive amounts of data, so it's no wonder that the company applies machine learning in myriad ways. Cibele Montez Halasz and Satanjeev Banerjee describe one of those use cases: timeline ranking. They share some of the optimizations the team has made, from modeling to infrastructure, in order to build models that are both expressive and efficient.
1:50pm–2:30pm Wednesday, April 17, 2019
New AI solutions in question answering, chatbots, structured data extraction, text generation, and inference all require deep understanding of the nuances of human language. David Talby shares challenges, risks, and best practices for building NLU-based systems, drawing on examples and case studies from products and services built by Fortune 500 companies and startups over the past seven years.
2:40pm–3:20pm Wednesday, April 17, 2019
Andrew Chin and Celia Chen offer an overview of data science applications within the asset management industry, covering use cases that apply ML to derive better investment insights and improve client engagement.
4:05pm–4:45pm Wednesday, April 17, 2019
Using AI to combat financial crime requires more than strong fraud detection models monitoring transactions. Banks must comply with extensive anti-money laundering (AML) and "know your customer" (KYC) laws and procedures, where compliance costs grow with the business and automation must be auditable. Kyle Hoback walks you through a series of case studies in which AI-powered robotic process automation (RPA) addresses AML and KYC requirements.
4:05pm–4:45pm Wednesday, April 17, 2019
Pradip Bose details a next-generation AI research project focused on creating "self-aware" AI systems that have built-in autonomic detection and mitigation facilities to avoid faulty or undesirable behavior in the field—in particular, cognitive bias and inaccurate decisions that are perceived as being unethical.
4:55pm–5:35pm Wednesday, April 17, 2019
Companies are increasingly building modeling platforms to empower their researchers to efficiently scale the development and productionalization of their models. Scott Clark and Matt Greenwood share a case study from a leading algorithmic trading firm to illustrate best practices for building these types of platforms in any industry.
11:05am–11:45am Thursday, April 18, 2019
While deep learning has shown significant promise for model performance, it can quickly become untenable, particularly when data is limited: RNNs can quickly memorize the training data and overfit. Vishal Hawa explains how combining RNNs with Bayesian networks (probabilistic graphical models) can improve the sequence modeling behavior of RNNs.
1:00pm–1:40pm Thursday, April 18, 2019
Automated machine learning (AutoML) enables both data scientists and domain experts (with limited machine learning training) to be productive and efficient. AutoML is a fundamental shift in how organizations approach machine learning. Francesca Lazzeri and Wee Hyong Tok demonstrate how to use AutoML to automate the selection of machine learning models and automate tuning of hyperparameters.
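As a rough illustration of the hyperparameter-tuning half of what AutoML systems automate, the sketch below uses scikit-learn's RandomizedSearchCV to search over model configurations. This is a generic, minimal example, not the Azure AutoML workflow the session covers; the dataset, model, and search ranges are arbitrary choices for demonstration.

```python
# Minimal sketch of automated hyperparameter tuning: randomly sample
# configurations and score each with cross-validation, keeping the best.
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),   # number of trees
        "max_depth": randint(3, 20),        # tree depth
    },
    n_iter=10,   # evaluate 10 random configurations
    cv=3,        # 3-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)   # best configuration found
print(search.best_score_)    # its cross-validated accuracy
```

Full AutoML systems extend this idea to also search over model families, preprocessing steps, and feature engineering choices.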
1:50pm–2:30pm Thursday, April 18, 2019
There's significant interest in applying deep learning-based solutions to problems in medicine and healthcare. Eric Oermann and Katie Link identify actionable medical problems, recast them as tractable deep learning problems, and discuss techniques to solve them.
2:40pm–3:20pm Thursday, April 18, 2019
Machine learning models are often susceptible to adversarial manipulation of their inputs at test time, which degrades performance. Alina Matyukhina investigates the feasibility of deception in source code attribution techniques in real-world environments and explores attack scenarios on users' identities in open source projects, along with possible protection methods.
2:40pm–3:20pm Thursday, April 18, 2019
Clinical radiology currently faces several challenges: improving imaging efficiency, reducing risk, and achieving higher imaging quality. Enhao Gong and Greg Zaharchuk explain how Subtle Medical's deep learning/AI solution addresses these problems by enabling faster MRI, faster PET, and lower-dose scans, providing real clinical and financial benefit to hospitals.
4:05pm–4:45pm Thursday, April 18, 2019
Aric Whitewood details WilmotML's research on the application of AI to investment management and offers an overview of the company's prediction engine, GAIA (the Global AI Allocator), which has been running in production since January 2018.
4:05pm–4:45pm Thursday, April 18, 2019
Chakri Cherukuri demonstrates how to apply machine learning techniques in quantitative finance, covering use cases involving both structured and alternative datasets. The talk focuses on promoting reproducible research (through Jupyter notebooks and interactive plots) and interpretable models.
4:05pm–4:45pm Thursday, April 18, 2019
Tammy Bilitzky shares a case study that details lights-out automation and explains how DCL uses AI to transform massive volumes of disparate, confidential data into searchable, structured information. Along the way, she outlines considerations for architecting a solution that processes a continuous flow of 5M+ “pages” of complex work units.
4:55pm–5:35pm Thursday, April 18, 2019
Andrew Caosun discusses a framework that unifies hidden Markov models and deep learning algorithms (RNNs) with modeling components that capture long-term memory and the semantics of music (LSTMs and convolutions). The framework takes users' original creations as input, modifies the raw scores, and generates musically appropriate melodies.
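To give a flavor of the Markov-model side of such a melody framework, the toy sketch below estimates a first-order Markov chain over pitches from an input melody and samples a new one from it. This is only an illustrative simplification; the actual system combines HMMs with RNNs (LSTMs and convolutions), and the note sequence here is invented for the example.

```python
# Toy first-order Markov melody model: learn pitch-to-pitch transitions
# from an input melody, then sample a new melody by walking the chain.
import random
from collections import defaultdict

def train_markov(melody):
    """Record, for each pitch, the pitches observed to follow it."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample each next note from the observed successors of the last note."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = transitions.get(out[-1]) or [start]  # restart on dead ends
        out.append(rng.choice(successors))
    return out

melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C"]  # example input
model = train_markov(melody)
print(generate(model, "C", 8))
```

An HMM adds hidden states (e.g., harmonic context) behind these transitions, and the RNN components in the talk's framework replace the fixed one-step memory with learned long-term structure.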