9:00am–12:30pm Monday, April 30, 2018
Amy Unruh walks you through training a machine learning system with the popular open source library TensorFlow, starting from conceptual overviews and building all the way up to complex classifiers. Along the way, you'll gain insight into deep learning and how it can be applied to complex problems in science and industry.
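To give a feel for the kind of code such a tutorial builds toward, here is a minimal sketch (not taken from the session itself) of training a simple classifier with TensorFlow's Keras API; the MNIST dataset and layer sizes are illustrative choices.

    import tensorflow as tf

    # Load a small benchmark dataset (illustrative choice).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A simple feed-forward classifier; the tutorial builds up to deeper models.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)
    print(model.evaluate(x_test, y_test))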
9:00am–12:30pm Monday, April 30, 2018
Ashwin Vijayakumar gives you a hands-on overview of Intel's Movidius Neural Compute Stick, a miniature deep learning hardware development platform that you can use to prototype, tune, and validate your AI programs (specifically deep neural networks).
1:40pm–5:10pm Monday, April 30, 2018
Location: Nassau East/West
Computer vision has led the artificial intelligence renaissance, and PyTorch, a flexible framework for training models, is pushing it further forward. Mo Patel and Neejole Patel offer an overview of computer vision fundamentals and walk you through PyTorch code explanations for notable object classification and object detection models.
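As a taste of the style of code the tutorial walks through (this sketch is not from the session), classifying an image with a pretrained model via PyTorch's torchvision might look like the following; the ResNet-18 backbone and the image path are illustrative.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    # Load a pretrained classification model (illustrative backbone choice).
    model = models.resnet18(pretrained=True)
    model.eval()

    # Standard ImageNet preprocessing.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("example.jpg")          # hypothetical input image
    batch = preprocess(image).unsqueeze(0)     # add a batch dimension

    with torch.no_grad():
        scores = model(batch)
    print(scores.argmax(dim=1))                # predicted ImageNet class index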
1:40pm–5:10pm Monday, April 30, 2018
Greg Werner walks you through using MXNet and TensorFlow to train deep learning models and deploy them using the leading serverless compute services on the market: AWS Lambda, Google Cloud Functions, and Azure Functions. You'll also learn how to monitor and iterate on trained models for continued success using standard development and operations tools.
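As a rough illustration of the deployment pattern (not code from the tutorial), an AWS Lambda handler typically loads the model once at module import time and reuses it across warm invocations; load_model, model_utils, and the request format below are hypothetical placeholders.

    import json

    # Hypothetical helper that deserializes a trained model bundled with the
    # deployment package; in practice this would use MXNet or TensorFlow APIs.
    from model_utils import load_model

    # Loading at module scope lets warm Lambda invocations reuse the model.
    MODEL = load_model("model.bin")

    def handler(event, context):
        # The event format is an assumption; adapt it to your API gateway setup.
        features = json.loads(event["body"])["features"]
        prediction = MODEL.predict(features)
        return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}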
1:40pm–5:10pm Monday, April 30, 2018
Ion Stoica, Robert Nishihara, and Philipp Moritz lead a deep dive into Ray, a new distributed execution framework for reinforcement learning applications, walking you through Ray's API and system architecture and sharing application examples, including several state-of-the-art RL algorithms.
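To give a flavor of Ray's API (a minimal sketch, not drawn from the tutorial), remote functions and actors are declared with a decorator and invoked asynchronously; the rollout function and parameter server below are toy stand-ins for RL workloads.

    import ray

    ray.init()

    @ray.remote
    def rollout(seed):
        # Placeholder for simulating an episode; returns a dummy reward.
        return seed * 0.1

    @ray.remote
    class ParameterServer:
        def __init__(self):
            self.weights = 0.0
        def update(self, delta):
            self.weights += delta
            return self.weights

    # Launch tasks in parallel; futures are resolved with ray.get().
    futures = [rollout.remote(i) for i in range(4)]
    print(ray.get(futures))

    ps = ParameterServer.remote()
    print(ray.get(ps.update.remote(1.0)))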
11:05am–11:45am Tuesday, May 1, 2018
Location: Grand Ballroom West
Srinivasa Karlapalem demonstrates an approach for high-throughput single-shot multibox object detection (SSD) on edge devices using FPGAs, specifically for surveillance.
11:05am–11:45am Tuesday, May 1, 2018
Drawing on Affectiva's experience building a multimodal emotion AI that can detect human emotions from face and voice, Taniya Mishra outlines various deep learning approaches for building multimodal emotion detection. Along the way, Taniya explains how to mitigate the challenges of data collection and annotation and how to avoid bias in model training.
11:55am–12:35pm Tuesday, May 1, 2018
Location: Grand Ballroom West
Forecasting the long-term values of time series data is crucial for planning. But how do you make use of a recurrent neural network when you want to compute an accurate long-term forecast? How can you capture short- and long-term seasonality or discover small patterns from the data that generate the big picture? Mustafa Kabul shares a scalable technique addressing these questions.
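One common way (not necessarily the technique Mustafa presents) to produce multi-step forecasts with a recurrent network is to train an LSTM that maps a window of past observations to a vector of future values; the window lengths and toy seasonal signal below are illustrative.

    import numpy as np
    import tensorflow as tf

    LOOKBACK, HORIZON = 48, 12  # illustrative window sizes

    def make_windows(series):
        X, y = [], []
        for i in range(len(series) - LOOKBACK - HORIZON):
            X.append(series[i:i + LOOKBACK])
            y.append(series[i + LOOKBACK:i + LOOKBACK + HORIZON])
        return np.array(X)[..., None], np.array(y)

    series = np.sin(np.arange(2000) * 0.1)  # toy seasonal signal
    X, y = make_windows(series)

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(LOOKBACK, 1)),
        tf.keras.layers.Dense(HORIZON),      # predict the whole horizon at once
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32)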
1:45pm–2:25pm Tuesday, May 1, 2018
Location: Grand Ballroom West
Episource is building a scalable NLP engine to help summarize medical charts and extract medical coding opportunities and their dependencies in order to recommend the best possible ICD-10 codes. Manas Ranjan Kar offers an overview of the wide variety of deep learning algorithms involved and the complex in-house training-data creation exercises that were required to make it work.
1:45pm–2:25pm Tuesday, May 1, 2018
Location: Sutton North/Center
Data scientists and machine learning professionals face a quandary of choices when trying to figure out how to scale their data science experiments. Arshak Navruzyan details the landscape of available options and explains how to make best use of the free and open source tools available.
1:45pm–2:25pm Tuesday, May 1, 2018
Location: Nassau East/West
The stock market is well known to be extremely random, making investment decisions difficult, but deep learning can help. Drawing on a concrete financial use case, Aurélien Géron explains how LSTM networks can be used for forecasting.
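As a purely illustrative sketch (not the presenter's model), a next-step return forecaster built on an LSTM in PyTorch might look like the following; the window length and layer sizes are arbitrary, and the random tensors stand in for real return series.

    import torch
    import torch.nn as nn

    class ReturnForecaster(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                 # x: (batch, window, 1) past returns
            out, _ = self.lstm(x)
            return self.head(out[:, -1])      # forecast the next return

    model = ReturnForecaster()
    window = torch.randn(16, 60, 1)           # toy batch of 60-step return windows
    loss = nn.MSELoss()(model(window), torch.randn(16, 1))
    loss.backward()                           # one illustrative training step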
2:35pm–3:15pm Tuesday, May 1, 2018
Location: Grand Ballroom East
While deep learning has enjoyed widespread empirical success, fundamental bottlenecks exist when attempting to develop deep learning applications at scale. Ameet Talwalkar shares research on addressing two core scalability bottlenecks: tuning the knobs of deep learning models (i.e., hyperparameter optimization) and training deep models in parallel environments.
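A core idea behind scalable hyperparameter search methods such as Hyperband is successive halving: allocate a small budget to many random configurations, then repeatedly keep the best fraction and give the survivors more budget. The sketch below is framework-agnostic and uses a hypothetical train_and_score function in place of real training.

    import random

    def train_and_score(config, budget):
        # Hypothetical: train for `budget` epochs and return validation accuracy.
        return random.random()

    def successive_halving(n_configs=27, min_budget=1, eta=3):
        configs = [{"lr": 10 ** random.uniform(-4, -1),
                    "batch_size": random.choice([32, 64, 128])}
                   for _ in range(n_configs)]
        budget = min_budget
        while len(configs) > 1:
            scores = [(train_and_score(c, budget), c) for c in configs]
            scores.sort(key=lambda s: s[0], reverse=True)
            configs = [c for _, c in scores[:max(1, len(configs) // eta)]]
            budget *= eta                     # survivors get a larger budget
        return configs[0]

    print(successive_halving())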
2:35pm–3:15pm Tuesday, May 1, 2018
Location: Nassau East/West
Financial econometric models are usually handcrafted using a combination of statistical methods, stochastic calculus, and dynamic programming techniques. Ambika Sukla explains how recent advancements in AI can help simplify financial model building by carefully replacing complex mathematics with a data-driven incremental learning approach.
2:35pm–3:15pm Tuesday, May 1, 2018
The adversarial nature of security makes applying machine learning complicated. If attackers can evade signatures and heuristics, what is stopping them from evading ML models? Yacin Nadji evaluates, breaks, and fixes a deployed network-based ML detector that uses graph clustering. While the attacks are specific to graph clustering, the lessons learned apply to all ML systems in security.
4:00pm–4:40pm Tuesday, May 1, 2018
Location: Nassau East/West
Pensieve is a natural language processing (NLP) project that classifies reviews by their sentiment, reason for sentiment, high-level content, and low-level content. Megan Yetman offers an overview of Pensieve as well as ways to improve model reporting and enable continuous model learning and improvement.
4:00pm–4:40pm Tuesday, May 1, 2018
Drawing on NVIDIA’s system for detecting anomalies on various NVIDIA platforms, Joshua Patterson and Aaron Sant-Miller explain how to bootstrap a deep learning framework to detect risk and threats in operational production systems, using best-of-breed GPU-accelerated open source tools.
4:50pm–5:30pm Tuesday, May 1, 2018
Location: Nassau East/West
Historically, the consumer loan industry has restricted itself to using relatively simple machine learning models and techniques to accept or deny loan applicants. However, more powerful (but also more complicated) methods can significantly improve business outcomes. Sean Kamkar shares a framework for evaluating, explaining, and managing these more complex methods.
11:05am–11:45am Wednesday, May 2, 2018
Location: Grand Ballroom East
11:05am–11:45am Wednesday, May 2, 2018
Location: Sutton North/Center
Superresolution is a process for obtaining one or more high-resolution images from one or more low-resolution observations. Xiaoyong Zhu shares the latest academic progress in superresolution using deep learning and explains how it can be applied in various industries, including healthcare. Along the way, Xiaoyong demonstrates how the training can be done in a distributed fashion in the cloud.
11:05am–11:45am Wednesday, May 2, 2018
Location: Nassau East/West
We're all familiar with the highly publicized stories of algorithms displaying overtly biased behavior toward certain groups, but what actually happens behind the scenes, and how can these situations be avoided? Lindsey Zuloaga shares experiences and lessons learned in the hiring space to help others prevent unfair modeling and explains how to establish best practices.
11:05am–11:45am Wednesday, May 2, 2018
Determining abnormal conditions depends on maintaining a useful definition of normal. John Hebeler offers an overview of two deep learning methods to determine normal behavior, which when combined further improve performance.
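One widely used deep learning approach to modeling "normal" behavior (not necessarily either of the methods John presents) is an autoencoder trained only on normal data, which flags inputs whose reconstruction error is unusually large. A minimal Keras sketch with illustrative sizes and toy data:

    import numpy as np
    import tensorflow as tf

    normal_data = np.random.normal(size=(10000, 20)).astype("float32")  # toy data

    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(20),            # reconstruct the input
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(normal_data, normal_data, epochs=5, batch_size=64)

    def is_anomaly(x, threshold):
        recon = autoencoder.predict(x)
        error = np.mean((x - recon) ** 2, axis=1)
        return error > threshold              # large reconstruction error = abnormal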
11:55am–12:35pm Wednesday, May 2, 2018
Location: Grand Ballroom West
Tim Kraska explains how fundamental data structures can be enhanced using machine learning with wide-reaching implications even beyond indexes, arguing that all existing index structures can be replaced with other types of models, including deep learning models (i.e., learned indexes).
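The core intuition behind learned indexes is that an index maps a key to its position in a sorted array, which is simply a function a model can approximate; a bound on the model's worst-case error then limits the local search needed around the predicted position. A simplified sketch, using linear regression as a stand-in for a deep model:

    import bisect
    import numpy as np

    keys = np.sort(np.random.randint(0, 10**6, size=100000))
    positions = np.arange(len(keys))

    # Fit a simple model mapping key -> position (a stand-in for a learned index).
    slope, intercept = np.polyfit(keys, positions, 1)
    max_err = int(np.max(np.abs(slope * keys + intercept - positions))) + 1

    def lookup(key):
        guess = int(slope * key + intercept)
        lo = max(0, guess - max_err)
        hi = min(len(keys), guess + max_err + 1)
        # Binary search only within the model's error bound.
        i = lo + bisect.bisect_left(keys[lo:hi].tolist(), key)
        return i if i < len(keys) and keys[i] == key else None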
11:55am–12:35pm Wednesday, May 2, 2018
Andrew Ilyas, Logan Engstrom, and Anish Athalye share an approach to generate 3D adversarial objects that reliably fool neural networks in the real world, no matter how the objects are looked at.
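Their work on robust 3D adversarial objects goes well beyond it, but the simplest way to see how adversarial examples arise is the fast gradient sign method: nudge the input in the direction that increases the model's loss. A minimal PyTorch sketch, with an illustrative model, epsilon, and input; the random tensor stands in for an image scaled to [0, 1].

    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet18(pretrained=True).eval()

    def fgsm(image, true_label, epsilon=0.01):
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to valid range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

    x = torch.rand(1, 3, 224, 224)            # stand-in for an input image in [0, 1]
    x_adv = fgsm(x, torch.tensor([0]))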
11:55am–12:35pm Wednesday, May 2, 2018
Harsh Kumar explains one way the energy industry is using AI and computer vision for security surveillance: a video analytics solution that can be optimized for the functional safety of workers in the loading and unloading zone of an oil and gas offshore rig.
1:45pm–2:25pm Wednesday, May 2, 2018
Location: Grand Ballroom East
Over the last year, Steve Rennie and his colleagues have significantly advanced the state of the art in performance on two flagship challenges in AI: the Switchboard Evaluation Benchmark for Automatic Speech Recognition and the MSCOCO Image Captioning Challenge. Steve shares the innovations in deep learning research that have most advanced performance on these and other benchmark AI tasks.
1:45pm–2:25pm Wednesday, May 2, 2018
Location: Nassau East/West
As machine learning algorithms and artificial intelligence continue to progress, we must take advantage of the best techniques from various disciplines. Funda Gunes demonstrates how combining well-proven methods from classical statistics can enhance modern deep learning methods in terms of both predictive performance and interpretability.
1:45pm–2:25pm Wednesday, May 2, 2018
TensorFlow Lite—TensorFlow’s lightweight solution for Android, iOS, and embedded devices—enables on-device machine learning inference with low latency and a small binary size. Kazunori Sato walks you through using TensorFlow Lite, helping you overcome the challenges to bring the latest AI technology to production mobile apps and embedded systems.
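A minimal sketch of the Python side of the TensorFlow Lite workflow (device-side Android and iOS code differs, and the converter API varies across TensorFlow versions): convert a trained model and run it with the interpreter. The one-layer Keras model and zero-valued input here are placeholders.

    import numpy as np
    import tensorflow as tf

    # Placeholder model; in practice you convert your trained network.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    interpreter.set_tensor(inp["index"], np.zeros((1, 4), dtype=np.float32))
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))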
2:35pm–3:15pm Wednesday, May 2, 2018
Location: Grand Ballroom West
Yulia Tell and Maurice Nsabimana walk you through getting started with BigDL and explain how to write a deep learning application that leverages Spark to train image recognition models at scale. Along the way, Yulia and Maurice detail a collaborative project to design and train large-scale deep learning models using crowdsourced images from around the world.
2:35pm–3:15pm Wednesday, May 2, 2018
Location: Nassau East/West
In the last few years, RNNs have achieved significant success in modeling time series and sequence data, in particular within the speech, language, and text domains. Recently, these techniques have begun to be applied to session-based recommendation tasks, with very promising results. Nick Pentreath explores the latest research advances in this domain, as well as practical applications.
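The basic architecture in this line of work treats a session as a sequence of item IDs and trains an RNN to predict the next item; the sketch below is an illustrative simplification rather than a faithful reimplementation of any published model, and the sizes and random sessions are placeholders.

    import torch
    import torch.nn as nn

    NUM_ITEMS, EMBED, HIDDEN = 50000, 64, 128   # illustrative sizes

    class SessionRecommender(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(NUM_ITEMS, EMBED)
            self.gru = nn.GRU(EMBED, HIDDEN, batch_first=True)
            self.out = nn.Linear(HIDDEN, NUM_ITEMS)

        def forward(self, item_ids):             # (batch, session_length)
            h, _ = self.gru(self.embed(item_ids))
            return self.out(h[:, -1])            # scores for the next item

    model = SessionRecommender()
    sessions = torch.randint(0, NUM_ITEMS, (8, 10))   # toy batch of sessions
    next_item_scores = model(sessions)                # (8, NUM_ITEMS)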
2:35pm–3:15pm Wednesday, May 2, 2018
Sid Reddy shares Conversica's artificial intelligence approach to creating, deploying, and continuously improving an automated sales assistant that engages in a genuinely human conversation at scale with every one of an organization’s sales leads.
4:00pm–4:40pm Wednesday, May 2, 2018
Location: Grand Ballroom East
Across the globe, people are voicing their opinion online. However, sentiment analysis is challenging for many of the world's languages, particularly with limited training data. Gerard de Melo demonstrates how to exploit large amounts of surrogate data to learn advanced word representations that are custom-tailored for sentiment and shares a special deep neural architecture to use them.
4:00pm–4:40pm Wednesday, May 2, 2018
Location: Grand Ballroom West
Mike Ranzinger shares his research on composition-aware search and explains how the research led to the launch of AI technology that allows Shutterstock’s users to more precisely find the image they need within the company's collection of more than 150 million images.
4:00pm–4:40pm Wednesday, May 2, 2018
Recent advances have made machines more autonomous, but much work remains for AI to collaborate with people. Emily Pavlini and Max Kleiman-Weiner share new insights inspired by the way humans accumulate knowledge and naturally work together that enable machines and people to work and learn as a team, discovering new knowledge in unstructured natural language content together.
4:50pm–5:30pm Wednesday, May 2, 2018
Location: Grand Ballroom East
DoorDash is a last-mile delivery platform, and its logistics engine powers fulfillment of every delivery on its three-sided marketplace of consumers, Dashers, and merchants. Raghav Ramesh highlights AI techniques used by DoorDash to enhance efficiency and quality in its marketplace and provides a framework for how AI can augment core operations research problems like the vehicle routing problem.
4:50pm–5:30pm Wednesday, May 2, 2018
Recommender systems suffer from concept drift and scarcity of informative ratings. Jorge Silva explains how SAS uses a Bayesian approach to tackle both problems by making the learning process online and active. Active learning prioritizes the most informative users and items by quantifying uncertainty in a principled, probabilistic framework.
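The active learning idea can be illustrated with a toy example (a simplification, not SAS's actual system): maintain a posterior over each item's quality and request ratings where predictive uncertainty is highest. Here each item gets an independent Beta posterior; a real system models users and items jointly.

    import numpy as np

    n_items = 5
    alpha = np.ones(n_items)   # Beta posterior parameters per item
    beta = np.ones(n_items)

    def most_informative_item():
        # Variance of a Beta(a, b) posterior; query where uncertainty is largest.
        var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
        return int(np.argmax(var))

    def update(item, liked):
        if liked:
            alpha[item] += 1
        else:
            beta[item] += 1

    item = most_informative_item()
    update(item, liked=True)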