Put AI to Work
April 15-18, 2019
New York, NY

Speaker slides & video

Presentation slides will be made available after the session has concluded and the speaker has given us the files. Check back if you don't see the file you're looking for—it might be available later! (However, please note some speakers choose not to share their presentations.)

Martial Hebert (Carnegie Mellon University)
Martial Hebert offers a brief overview of current challenges in AI for robotics and a glimpse of the exciting developments emerging in current research.
Adam Straw (Intel), Adam Procter (Intel AI), Robert Earhart (Intel)
The rapid growth of deep learning in demanding large-scale real-world applications has led to a rapid increase in demand for high-performance training and inference solutions. Adam Straw, Adam Procter, and Robert Earhart offer a comprehensive overview of Intel's nGraph deep learning compiler.
Matthew Reyes (Technergetics)
Matthew Reyes casts consumer decision making within the framework of random utility and outlines a simplified scenario of optimizing preference on a social network to illustrate the steps in a company’s allocation decision, from learning parameters from data to evaluating the consequences of different marketing allocations.
Nanda Vijaydev (BlueData)
Nanda Vijaydev shares practical examples of—and lessons learned from—ML/DL use cases in financial services, healthcare, and other industries. You'll learn how to quickly deploy containerized multinode environments for TensorFlow and other ML/DL tools in a multitenant architecture, whether on-premises, in the cloud, or in a hybrid environment.
Ben Lorica (O'Reilly), Roger Chen (Computable)
Keynote by Ben Lorica and Roger Chen
Humayun Irshad (Figure Eight)
Humayun Irshad offers an overview of an active learning framework that uses a crowdsourcing approach to solve parking sign recognition—a real-world problem in transportation and autonomous driving for which a large amount of unlabeled data is available. The solution generates an accurate model, quickly and cost-effectively, despite the unevenness of the data.
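For readers who want a concrete picture of the core loop behind such a framework, here is a minimal uncertainty-sampling sketch in Python with scikit-learn: train on a small labeled pool, score the unlabeled pool, and send only the least-confident examples out for (crowdsourced) labeling. It is illustrative only, not the speaker's pipeline; X_labeled, y_labeled, and X_unlabeled are hypothetical feature arrays.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def select_for_labeling(X_labeled, y_labeled, X_unlabeled, batch_size=100):
        # Fit a cheap model on whatever labels exist so far.
        model = LogisticRegression(max_iter=1000)
        model.fit(X_labeled, y_labeled)
        # Uncertainty = 1 - confidence of the most likely class.
        proba = model.predict_proba(X_unlabeled)
        uncertainty = 1.0 - proba.max(axis=1)
        # Return indices of the examples the model is least sure about;
        # these are the ones worth paying annotators to label next.
        return np.argsort(uncertainty)[-batch_size:]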
Tom Sabo (SAS)
Sources of international human trafficking data contain a wealth of textual information that is laborious to assess using manual methods. Tom Sabo demonstrates text-based machine learning, rule-based text extraction to generate training data for modeling efforts, and interactive visualization to improve international trafficking response.
Nick Curcuru (Mastercard)
Nick Curcuru, VP of data analytics and cybersecurity, discusses Mastercard’s commitment to AI and its recent investments and developments.
Jeff Thompson (Stevens Institute of Technology)
What's it like to be a mobile phone or to attach a wind sensor to a neural network? Jeff Thompson outlines several recent creative projects that push the tools of AI in new directions. Part technical discussion and part case study for embedding artists in technical institutions, this talk explores how artists and scientists can collaborate to expand the ways AI can be used.
Danielle Dean (iRobot)
Automated ML is at the forefront of Microsoft’s push to make Azure ML an end-to-end solution for anyone who wants to build and train models that make predictions from data and then deploy them anywhere. Join Danielle Dean for a surprising conversation about a data scientist’s dilemma, a researcher’s ingenuity, and how cloud, data, and AI came together to help build automated ML.
Ruchir Puri (IBM)
Ruchir Puri discusses the next revolution in automating AI: using AI itself to automate the tasks of building, deploying, and managing AI models, accelerating enterprises' journey to AI.
Ming-Wei Chang (Google)
Ming-Wei Chang offers an overview of a new language representation model called BERT (Bidirectional Encoder Representations from Transformers). Unlike recent language representation models, BERT is designed to pretrain deep bidirectional representations by jointly conditioning on both left and right context in all layers.
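As a point of reference, here is a minimal sketch of using a pretrained BERT encoder in Python via the Hugging Face transformers library; the library choice is ours, not the talk's, and the model name is the publicly released bert-base-uncased checkpoint.

    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("BERT conditions on both left and right context.",
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # One contextual vector per token, computed from the full sentence.
    print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)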
Scott Clark (SigOpt), Matt Greenwood (Two Sigma Investments)
Companies are increasingly building modeling platforms to empower their researchers to efficiently scale the development and productionalization of their models. Scott Clark and Matt Greenwood share a case study from a leading algorithmic trading firm to illustrate best practices for building these types of platforms in any industry.
If you’d like to make new professional connections and hear ideas for supporting diversity in the tech community, come to the diversity and inclusion networking lunch on Wednesday.
Maryam Jahanshahi (TapRecruit)
Word embeddings such as word2vec have revolutionized language modeling. Maryam Jahanshahi discusses exponential family embeddings, which apply probabilistic embedding models to other data types. Join in to learn how TapRecruit implemented a dynamic embedding model to understand how tech skill sets have changed over three years.
Kristian Hammond (Northwestern Computer Science)
Even as AI technologies move into common use, many enterprise decision makers remain baffled about what the different technologies actually do and how they can be integrated into their businesses. Rather than focusing on the technologies alone, Kristian Hammond provides a practical framework for understanding your role in problem solving and decision making.
Sarah Bird (Microsoft)
Sarah Bird offers an overview of ML Ops (DevOps for machine learning), sharing solutions and best practices for an end-to-end pipeline for data preparation, model training, and model deployment while maintaining a comprehensive audit trail. Join in to learn how to build a cohesive and friction-free ecosystem for data scientists and app developers to collaborate together and maximize impact.
Jeremy Lewi (Google), Hamel Husain (GitHub)
Turning ML into magical products often requires complex distributed systems that bring with them a unique ML-specific set of infrastructure problems. Using AI to label GitHub issues as an example, Jeremy Lewi and Hamel Husain demonstrate how to use Kubeflow and Kubernetes to build and deploy ML products.
Yu Dong (Facebook)
Yu Dong offers an overview of the why, what, and how of building a production-scale ML platform based on ongoing ML research trends and industry adoptions.
Anoop Katti (SAP)
Anoop Katti explores the shortcomings of the existing techniques for understanding 2D documents and offers an overview of the Character Grid (Chargrid), a new processing pipeline pioneered by data scientists at SAP.
Bill Roberts (Deloitte Consulting LLP)
Bill Roberts discusses artificial intelligence for strategic business insight and for the solution of new business problems using advanced cognitive algorithms. Along the way, he highlights the importance of using the right algorithm for a given business challenge, using real-world examples.
Marcel Kurovski (inovex)
Recommender systems support decision making with personalized suggestions and have proven useful in ecommerce, entertainment, and social networks. Sparse data and linear models have been limiting factors, but deep learning pushes these boundaries and delivers remarkable results. Join Marcel Kurovski to explore a use case for vehicle recommendations at Germany's biggest online vehicle market.
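As background for the session, a minimal embedding-based recommender can be sketched in a few lines of TensorFlow/Keras; this shows the generic matrix-factorization-with-learned-embeddings idea, not the speaker's vehicle recommender, and num_users and num_items are hypothetical.

    import tensorflow as tf

    num_users, num_items, dim = 10_000, 50_000, 32

    user_in = tf.keras.Input(shape=(1,), name="user_id")
    item_in = tf.keras.Input(shape=(1,), name="item_id")
    # Learn a dense vector per user and per item; their dot product is the score.
    user_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(num_users, dim)(user_in))
    item_vec = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(num_items, dim)(item_in))
    score = tf.keras.layers.Dot(axes=1)([user_vec, item_vec])

    model = tf.keras.Model([user_in, item_in], score)
    model.compile(optimizer="adam", loss="mse")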
Garrett Hoffman (StockTwits)
Garrett Hoffman walks you through deep learning methods for natural language processing and natural language understanding tasks, using a live example in Python and TensorFlow with StockTwits data. Methods include word2vec, recurrent neural networks and variants (LSTM, GRU), and convolutional neural networks.
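For orientation, the kind of model covered in the session can be sketched in Keras as an embedding layer followed by an LSTM for binary sentiment classification; this is a generic illustration rather than the speaker's code, and vocab_size is hypothetical.

    import tensorflow as tf

    vocab_size = 20_000  # hypothetical vocabulary size for tokenized messages

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 128),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g., bullish vs. bearish
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])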
Mathew Salvaris (Microsoft), Fidan Boylu Uz (Microsoft)
Interested in deep learning models and how to deploy them on Kubernetes at production scale? Not sure if you need to use GPUs or CPUs? Mathew Salvaris and Fidan Boylu Uz help you out by providing a step-by-step guide to creating a pretrained deep learning model, packaging it in a Docker container, and deploying it as a web service on a Kubernetes cluster.
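The serving piece of such a pipeline often boils down to a small HTTP scoring app that gets baked into the Docker image; a minimal Flask sketch is below. It is a generic pattern, not necessarily the speakers' exact code, and model.h5 is a hypothetical path to a saved Keras model.

    import numpy as np
    import tensorflow as tf
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    model = tf.keras.models.load_model("model.h5")  # hypothetical saved model

    @app.route("/score", methods=["POST"])
    def score():
        # Expect a JSON body like {"instances": [[...], [...]]}.
        features = np.array(request.get_json()["instances"])
        predictions = model.predict(features).tolist()
        return jsonify({"predictions": predictions})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)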
Nick Curcuru (Mastercard), Anthony Dina (Dell EMC)
There are many decisions to make when choosing the right solutions and infrastructure for AI. Drawing on real-world considerations, use cases, and solutions, Nick Curcuru discusses the decisions—and the associated considerations and best practices—Mastercard made to build and deploy a successful AI solution.
Chris Butler (IPsoft)
Purpose, a well-defined problem, and trust from people are important factors to any system, especially those that employ AI. Chris Butler leads you through exercises that borrow from the principles of design thinking to help you create more impactful solutions and better team alignment.
Vinay Seth Mohta (Manifold)
The significant hype bubble building up around AI has convinced many executives that if they’re not already tech savvy, they might not be ready for AI’s “transformative power.” However, the reality is that AI is just another tool that can help your business, and you’re probably not that far behind. Vinay Seth Mohta explains how to evaluate AI as you would any other strategic investment.
Jana Eggers (Nara Logics)
Jana Eggers explores explainability and transparency as both required and unachievable goals for AI, with a focus on helping teams structure discussions about levels of explainability possible and needed for both user trust and regulatory requirements.
Larry Carin (Infinia ML), Michael Eagan (Korn Ferry)
Larry Carin, one of the world’s most published machine learning researchers, discusses the state of the art in machine learning and how it translates to business impact. Along the way, Larry shares examples of how modern machine learning is transforming business in several sectors, including healthcare delivery, security, and back-office business processing.
Paco Nathan (derwen.ai)
Effective data governance is foundational for AI adoption in enterprise, but it's an almost overwhelming topic. Paco Nathan offers an overview of its history, themes, tools, process, standards, and more. Join in to learn what impact machine learning has on data governance and vice versa.
Thomas Marlow (Black Hills IP)
Three elements will control the AI market: technology, data, and IP rights. Leveraging rich patent data, Thomas Marlow uncovers the companies with the top patent holdings across the world in groundbreaking research and implementation technologies, surfacing insights into the sources and owners of AI technology as well as the hurdles and opportunities that those entering the field today face.
Gadi Singer (Intel)
Gadi Singer explores four real-world AI deployments at enterprise scale.
Pamela Vagata (Stripe)
Pamela Vagata explains how Stripe has applied deep learning techniques to predict fraud from raw behavioral data. Join in to learn how the deep learning model outperforms a feature-engineered model both on predictive performance and in the effort spent on data engineering, model construction, tuning, and maintenance.
Aric Whitewood (WilmotML)
Aric Whitewood details WilmotML's research on the application of AI to investment management and offers an overview of the company's prediction engine, GAIA (the Global AI Allocator), which has been running in production since January 2018.
Paris Buttfield-Addison (Secret Lab), Mars Geldard (University of Tasmania), Tim Nugent (Lonely Coffee)
Games are wonderful contained problem spaces, making them great places to explore AI—even if you're not a game developer. Paris Buttfield-Addison, Mars Geldard, and Tim Nugent teach you how to use Unity to train, explore, and manipulate intelligent agents that learn. You'll train a quadruped to walk, then train it to explore, fetch, and manipulate the world.
Will Nowak (Dataiku)
AI and machine learning are top priorities for nearly every company. Despite this, "productionalizing" machine learning processes is an underappreciated problem, and as a result, businesses often find themselves failing to maximize ROI from their data initiatives. Will Nowak identifies best practices and common pitfalls in bringing machine learning and AI models to production.
Yishay Carmiel (IntelligentWire)
In recent years, we've seen tremendous improvements in artificial intelligence, due to the advances of neural-based models. However, the more popular these algorithms and techniques become, the more serious the consequences for data and user privacy. Yishay Carmiel reviews these issues and explains how they impact the future of deep learning development.
Vijay Agneeswaran (Walmart Labs), Abhishek Kumar (Publicis Sapient)
Vijay Agneeswaran and Abhishek Kumar offer an overview of capsule networks and explain how they help in handling spatial relationships between objects in an image. They also show how to apply them to text analytics. Vijay and Abhishek then explore an implementation of a recurrent capsule network and benchmark the RCN against capsule networks with dynamic routing on text analytics tasks.
Aleksander Madry (MIT)
Aleksander Madry discusses major roadblocks that prevent current AI frameworks from having a broad impact and outlines approaches to addressing these issues and making AI frameworks truly human-ready.
Danny Lange (Unity Technologies)
Join Danny Lange to learn how to create artificially intelligent agents that act in the physical world (through sense perception and some mechanism to take physical actions, such as driving a car). You'll discover how observing emergent behaviors of multiple AI agents in a simulated virtual environment can lead to the most optimal designs and real-world practices.
Jack Dashwood (Intel), Anna Bethke (Intel)
The hardware, software, and algorithms that automatically tag our images or recommend the next book to read can also improve medical diagnosis and protect our natural resources. Jack Dashwood and Anna Bethke discuss a variety of technical projects at Intel that have enabled social good organizations and provide guidance on creating or engaging in these types of projects.
Alex Siegman (Dow Jones), Kabir Seth (Wall Street Journal)
Alex Siegman and Kabir Seth walk you through the steps necessary to appropriately leverage AI in a large organization. This includes ways to identify business opportunities that lend themselves to AI as well as best practices on everything from data intake and manipulation to model selection, output analysis, development, and deployment, all while navigating a complex organizational structure.
Tony Jebara (Columbia University | Netflix)
For many years, the main goal of the Netflix recommendation system has been to get the right titles in front of each member at the right time. Tony Jebara details the approaches Netflix uses to recommend titles to users and discusses how the company is working on integrating causality and fairness into many of its machine learning and personalization systems.
Joanna Bryson (University of Bath)
Although not a universally held goal, maintaining human-centric artificial intelligence is necessary for society’s long-term stability. Joanna Bryson discusses why this is so and explores both the technological and policy mechanisms by which it can be achieved.
Forough Poursabzi-Sangdeh (Microsoft Research NYC)
Forough Poursabzi-Sangdeh argues that to understand interpretability, we need to bring humans in the loop and run human-subject experiments. She describes a set of controlled user experiments in which researchers manipulated various design factors in models that are commonly thought to make them more or less interpretable and measured their influence on users’ behavior.
Cibele Halasz (Apple), Satanjeev Banerjee (Twitter)
Twitter is a company with massive amounts of data, so it's no wonder that the company applies machine learning in myriad ways. Cibele Montez Halasz and Satanjeev Banerjee describe one of those use cases: timeline ranking. They share some of the optimizations that the team has made—from modeling to infrastructure—in order to have models that are both expressive and efficient.
Nanda Vijaydev (BlueData)
Nanda Vijaydev explains how to spin up instant ML/DL environments using containers—all while ensuring enterprise-grade security and performance. Find out how to provide your data science teams with on-demand access to the tools and data they need, whether on-premises or in the cloud.
Dmitry Petrov (Iterative AI), Ivan Shcheklein (Iterative AI)
ML model and dataset versioning is an essential first step toward establishing a good ML process. Dmitry Petrov and Ivan Shcheklein explore open source tools for versioning ML models and datasets, from traditional Git to tools like Git LFS and git-annex to the ML-specific tool Data Version Control (DVC, dvc.org).
Yoav Einav (GigaSpaces)
Yoav Einav and Vin Costello explain how to achieve faster analytical processing by leveraging in-memory performance for the cost of flash with persistent memory (~300% faster than SSD); smarter insights at optimized TCO by scaling speed-layer capacity for smarter real-time analytics with a 7x lower footprint; and agile automation of your ML and DL model CI/CD pipeline for faster time to market.
Morten Dahl (Dropout Labs)
Morten Dahl reviews modern cryptographic techniques such as homomorphic encryption and multiparty computation, sharing concrete examples in TensorFlow using the open source library TF Encrypted. Join in to learn how to get started with privacy-preserving techniques today, without needing to master the cryptography.
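To make the underlying idea concrete, here is a minimal NumPy sketch of additive secret sharing, the multiparty-computation primitive such frameworks build on; this shows the concept only and is not TF Encrypted's API.

    import numpy as np

    MODULUS = 2**31 - 1  # arithmetic is done modulo a large prime
    rng = np.random.default_rng(0)

    def share(secret):
        # Split an integer into two random-looking shares that sum back to it.
        s0 = int(rng.integers(0, MODULUS))
        s1 = (secret - s0) % MODULUS
        return s0, s1

    def reconstruct(s0, s1):
        return (s0 + s1) % MODULUS

    a0, a1 = share(42)
    b0, b1 = share(100)
    # Each party adds its own shares locally; neither ever sees 42 or 100.
    print(reconstruct((a0 + b0) % MODULUS, (a1 + b1) % MODULUS))  # 142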
Bruno Goncalves (Data For Science)
Time series are everywhere around us. Understanding them requires taking into account the sequence of values seen in previous steps and even long-term temporal correlations. Join Bruno Gonçalves to learn how to use recurrent neural networks to model and forecast time series and discover the advantages and disadvantages of recurrent neural networks with respect to more traditional approaches.
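As a primer for the session, framing a univariate series for an RNN usually means slicing it into fixed-length windows and predicting the next value; a minimal Keras sketch follows. It is illustrative only, and series is a hypothetical 1-D NumPy array.

    import numpy as np
    import tensorflow as tf

    WINDOW = 24  # hypothetical number of past steps fed to the network

    def make_windows(series, window=WINDOW):
        # Each row of X holds `window` consecutive values; y is the value that follows.
        X = np.stack([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]
        return X[..., np.newaxis], y  # shape (samples, timesteps, 1)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    # X, y = make_windows(series); model.fit(X, y, epochs=10)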
Vishal Hawa (Vanguard)
While deep learning has shown significant promise for model performance, it can quickly become untenable, particularly when data is limited: RNNs tend to memorize and overfit. Vishal Hawa explains how a combination of RNNs and Bayesian networks (PGMs) can improve the sequence modeling behavior of RNNs.
Rajendra Prasad (Accenture)
After crossing the first AI implementation milestone, leaders often ask, "What’s next?" Based on experience implementing AI-led automation for more than 100 clients, Accenture has developed an easy-to-use methodology for scaling and sustaining reliable AI solutions. Rajendra Prasad (RP) explains how leaders and change makers in large enterprises can make AI adoption successful.
The whole of AI is greater than the sum of its parts, but achieving the best analytics edge often requires a mixture of technologies—chaining together AI technologies to build smart end-to-end processes. Katie Taylor explores use cases within key industries to uncover how companies are succeeding with AI through a layered technology stack.
Richard Tong (Squirrel AI Learning)
One of the most critical issues in traditional education is the shortage of high-quality teachers able to give individual students personalized attention. Richard Tong explains how AI, especially adaptive learning technology, can enable a new generation of teachers to teach students far more effectively and improve the efficiency of the education industry.
Simon Crosby (SWIM.AI)
Today’s approach to processing streaming data is based on legacy, big data-centric architectures, the cloud, and the assumption that organizations have access to data scientists to make sense of it all—leaving organizations increasingly overwhelmed. Simon Crosby shares a new architecture for edge intelligence that turns this thinking on its head.
Banu Nagasundaram (Intel)
Banu Nagasundaram offers an overview of Intel's Deep Learning Boost (Intel DL Boost) technology, featuring integer vector neural network instructions targeting future Intel Xeon Scalable processors. Banu walks you through the 8-bit integer convolution implementation in the Intel MKL-DNN library to demonstrate how this new instruction is used in optimized code.
Yi Zhuang (Twitter), Nicholas Leonard (Twitter)
Twitter is a large company with many ML use cases. Historically, there have been many ways to productionize ML at Twitter. Yi Zhuang and Nicholas Leonard describe the setup and benefits of a unified ML platform for production and explain how the Twitter Cortex team brings together users of various ML tools.
Kevin He (DeepMotion)
Digital character interaction is hard to fake, whether it’s between two characters, between users and characters, or between a character and its environment. Nevertheless, interaction is central to building immersive XR experiences, robotic simulation, and user-driven entertainment. Kevin He explains how to use physical simulation and machine learning to create interactive character technology.
Tammy Bilitzky (Data Conversion Laboratory)
Tammy Bilitzky shares a case study that details lights-out automation and explains how DCL uses AI to transform massive volumes of confidential, disparate data into searchable and structured information. Along the way, she outlines considerations for architecting a solution that processes a continuous flow of 5M+ “pages” of complex work units.
Vladimir Starostenkov (Altoros), Siarhei Sukhadolski (Altoros Development)
Vladimir Starostenkov and Siarhei Sukhadolski discuss two ML solutions from Altoros: one was developed to facilitate the process of assessing car damage right at the accident scene, while the second helps to automate recognition, extraction, and analysis. Join in to see how to integrate both solutions into the existing workflows of insurance, car rental, and maintenance services.
David Talby (Pacific AI)
New AI solutions in question answering, chatbots, structured data extraction, text generation, and inference all require deep understanding of the nuances of human language. David Talby shares challenges, risks, and best practices for building NLU-based systems, drawing on examples and case studies from products and services built by Fortune 500 companies and startups over the past seven years.