October 28–31, 2019

Speakers

Hear from innovative programmers, talented managers, and senior developers who are doing amazing things with TensorFlow and machine learning. More speakers will be announced; please check back for updates.

Alasdair Allan is a director at Babilim Light Industries and a scientist, author, hacker, maker, and journalist. An expert on the internet of things and sensor systems, he’s famous for hacking hotel radios, deploying mesh networked sensors through the Moscone Center during Google I/O, and for being behind one of the first big mobile privacy scandals when, back in 2011, he revealed that Apple’s iPhone was tracking user location constantly. He’s written eight books and writes regularly for Hackster.io, Hackaday, and other outlets. A former astronomer, he also built a peer-to-peer autonomous telescope network that detected what was, at the time, the most distant object ever discovered.

Presentations

Measuring embedded machine learning Session

The future of machine learning is on the edge and on small, embedded devices that can run for a year or more on a single coin-cell battery. Alasdair Allan dives deep into how deep learning can be very energy efficient, allowing you to make sense of sensor data in real time.

Karmel Allison is an engineering manager at Google, where she leads a team of engineers working to make TensorFlow high-level APIs easy to use and effortless to scale. Karmel has over 10 years of experience in software development and machine learning. Previously, she led engineering teams building a DNA sequencer at Genia and serving real-time recommendations at Quora. She holds a PhD in bioinformatics from the University of California, San Diego.

Presentations

Town Hall: Contributors’ perspectives Contributor Summit

Join in to hear about the rapid growth of open source ML communities and a future road map for community building. Learn best practices, use cases, and how to develop metrics for your project by learning from other contributors to TensorFlow.

Raziel Alvarez is a senior staff engineer at Google, where he leads TensorFlow model optimization, aimed at making machine learning more efficient to deploy and execute. He’s a cofounder and engineering lead of TensorFlow Lite, and he developed the framework used to execute embedded ML models for Google’s speech recognition software (now in TensorFlow Lite) and led the development of the latest iteration of the “Hey, Google” hotword recognizer. Previously, Raziel codesigned and implemented the Self-Assembling Interface Layer that forms the core of Appian’s (APPN) low-code development platform. He graduated summa cum laude from both the BS and master’s programs in computer science and machine learning at Mexico’s ITESM.

Presentations

TensorFlow model optimization: Quantization and pruning Session

Raziel Alvarez walks you through current best practices and future directions in TensorFlow model optimization, including quantization and pruning.
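
For orientation, here's a minimal sketch of one technique in this space: post-training quantization with the TensorFlow Lite converter. The model path is a placeholder and this is not drawn from the session materials.

```python
import tensorflow as tf

# Load a trained Keras model (placeholder path; substitute your own model).
model = tf.keras.models.load_model("my_model")

# Post-training quantization with the TFLite converter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable default quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```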

Axel Antoniotti is a staff software engineer at Criteo. His work focuses on developing the platforms and tools used across Criteo to create machine learning models, train them, serve them online, and monitor their behavior. He holds an engineering master’s degree from EPITA, a French grande école specializing in computer science.

Presentations

How Criteo optimized and sped up its TensorFlow models by 10x and served them under 5 ms Session

Criteo's real-time bidding of ad spaces requires its TensorFlow (TF) models to make online predictions in less than 5 ms. Nicolas Kowalski and Axel Antoniotti explain why Criteo moved away from high-level APIs and rewrote its models from scratch, reimplementing cross-features and hashing functions using low-level TF operations in order to factorize the TF nodes in its model as much as possible.
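
As a rough illustration of the approach described (not Criteo's actual implementation), a hashed cross-feature can be built from low-level TF string ops along these lines; the feature names and bucket count are hypothetical.

```python
import tensorflow as tf

NUM_BUCKETS = 1_000_000  # illustrative hash space size

def hashed_cross(feature_a, feature_b, num_buckets=NUM_BUCKETS):
    """Hash a cross of two string features into a fixed number of buckets."""
    crossed = tf.strings.join([feature_a, feature_b], separator="_X_")
    return tf.strings.to_hash_bucket_fast(crossed, num_buckets)

# Example usage with two categorical features (hypothetical values).
ads = tf.constant(["ad_1", "ad_2"])
users = tf.constant(["user_9", "user_3"])
bucket_ids = hashed_cross(ads, users)  # int64 bucket ids usable for embedding lookup
```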

Leonardo Apolonio is a machine learning engineer at Clarabridge, where he solves natural language processing (NLP) tasks, like detecting emotion, call reason, and expressed effort in the customer experience domain. He has experience maintaining and improving NLP pipelines to extract entities and topics from over 30 million websites daily, using the latest NLP and deep learning techniques. Leonardo has also built scalable analytics techniques for anomaly detection using datasets with billions of events.

Presentations

Enterprise AF solution for text classification (using BERT) Tutorial

Leonardo Apolonio takes a deep dive into BERT and explains how you can use BERT to solve problems.

Josh Baer is the machine learning platform lead at Spotify, building out the tools, processing, and infrastructure for robust ML experiences; enabling teams to leverage ML and AI sustainably in their products, research, and services; and providing a cohesive experience. Previously, Josh led the Hadoop and stream processing teams.

Presentations

Personalizing the infinite jukebox: ML and the TensorFlow ecosystem at Spotify Session

Josh Baer and Keshi Dai discuss how Spotify has historically used ML and explore how the introduction of TensorFlow and TFX in particular has standardized its ML workflows and improved its ability to bring ML-powered products to its users.

Paige Bailey is a TensorFlow developer advocate at Google.

Presentations

Swift for TensorFlow Session

Paige Bailey and Brennan Saeta walk you through Swift for TensorFlow, a next-generation machine learning platform that leverages innovations like first-class differentiable programming to seamlessly integrate deep neural networks with traditional AI algorithms and general purpose software development.

va barbosa is a developer advocate at the Center for Open-Source Data & AI Technologies at IBM, where he helps developers discover and use data and machine learning technologies. This is fueled by his passion to help others and guided by his enthusiasm for open source technology. Always looking to embrace new challenges and fulfill his appetite for learning, he immerses himself in a wide range of technologies and activities. When not focusing on the developer experience, he enjoys dabbling in photography. If you can’t find him in front of a computer, try looking behind a camera.

Presentations

Node-RED and TensorFlow.js: Developing deep learning IoT apps in the browser Session

Va Barbosa and Paul Van Eck highlight the benefits of using TensorFlow.js and Node-RED together as an educational tool to engage developers and provide you with a powerful, creativity-inspiring platform for interacting and developing with machine learning models.

David Beck is a practice leader and Google Cloud Platform Partner at Cognizant and is the main liaison for the TensorFlow partnership. He was one of 600 leaders selected for Cognizant’s small business incubator program developing ML and natural language processing (NLP) for healthcare companies. Previously, he’s held executive roles at startups and Fortune 500 companies across a variety of industries. David has led large-scale, customer-facing teams in innovative front office transformations for both clients and employers, yielding breakthrough results.

Presentations

TensorFlow business case study showcase Session

Deepak Bhadauria, Saurabh Mishra, Upendra Sahu, Bhushan Jagyasi, David Beck, and Rahul Sarda share four real-world TensorFlow success stories from the banking, insurance, med tech, and nonprofit industries.

Joana Carrasqueira is a developer relations program manager for TensorFlow at Google Brain, where she focuses on bringing together subject-matter experts in order to build an open source community around TensorFlow. Previously, she worked on innovation consulting at the Silicon Valley Innovation Center and managed the education department at the International Pharmaceutical Federation for the United Nations, where she helped develop new healthcare policies in more than 12 countries and coauthored the WHO guidelines on antimicrobial resistance. Joana’s research has been published in various scientific journals, and she holds a master’s in pharmaceutical sciences and an executive MBA from IE Business School.

Presentations

Getting involved in the TensorFlow community Session

Large-scale open source projects can be daunting, and one of the goals of TensorFlow is to be accessible to many contributors. Joana Carrasqueira and Nicole Pang share some great ways to get involved in TensorFlow, explain how its design and development works, and show you how to get started if you're new to machine learning or new to TensorFlow.

Getting involved in the TensorFlow community Contributor Summit

Learn how you can be a part of the growing TensorFlow (TF) ecosystem and become a contributor through code, documentation, education, or community leadership. Edd Wilder-James and Joana Filipa Bernardo Carrasqueira give you an overview of GitHub practices, request for comment (RFC) processes, and how to join the TF Special Interest Groups (SIGs) and make an impact in the community.

Deepak Bhadauria is a technology manager at Google, where he’s working with Google’s partners to bring TensorFlow-enabled AI transformation to enterprises.

Presentations

TensorFlow business case study showcase Session

Deepak Bhadauria, Saurabh Mishra, Upendra Sahu, Bhushan Jagyasi, David Beck, and Rahul Sarda share four real-world TensorFlow success stories from the banking, insurance, med tech, and nonprofit industries.

Joe Bowser is a senior computer scientist at Adobe, where he’s the lead developer on the Sensei on-device team that’s deploying machine learning technologies into various products at Adobe. Previously, he was the creator of PhoneGap for Android and the longest-contributing committer to the PhoneGap and Apache Cordova projects. When he’s not contributing to open source at Adobe, he spends his spare time working on various hardware projects, most of which involve first-person-view miniquadcopters.

Presentations

Working with TensorFlow Lite on Android with C++ Session

Many mobile developers write lower-level C++ code for their Android applications using the Android NDK, OpenCV, and other technologies. Joe Bowser explores how to use TensorFlow Lite (TF Lite) with an existing C++ codebase on Android by using the Android NDK and the TF Lite build tree.

Paris Buttfield-Addison is a cofounder of Secret Lab, a game development studio based in beautiful Hobart, Australia. Secret Lab builds games and game development tools, including the multi-award-winning ABC Play School iPad games, the BAFTA- and IGF-winning Night in the Woods, the Qantas airlines Joey Playbox games, and the Yarn Spinner narrative game framework. Previously, Paris was a mobile product manager for Meebo (acquired by Google). Paris particularly enjoys game design, statistics, blockchain, machine learning, and human-centered technology. He researches and writes technical books on mobile and game development (more than 20 so far) for O’Reilly; he recently finished writing Practical AI with Swift and is currently working on Head First Swift. He holds a degree in medieval history and a PhD in computing. Paris loves bringing machine learning into the world of the practical and useful. You can find him on Twitter as @parisba.

Presentations

Swift for TensorFlow in 3 hours Tutorial

Mars Geldard, Tim Nugent, and Paris Buttfield-Addison are here to prove Swift isn't just for app developers. Swift for TensorFlow provides the power of TensorFlow with all the advantages of Python (and complete access to Python libraries) and Swift—the safe, fast, incredibly capable open source programming language; Swift for TensorFlow is the perfect way to learn deep learning and Swift.

Andy Chamberlin is a project manager in the Theoretical Ecology Lab at Stanford University. He specializes in GIS analysis, drone operations, and machine learning.

Presentations

Building deep learning applications using TensorFlow to combat schistosomiasis Session

Schistosomiasis is a debilitating parasitic disease that affects more than 250 million people worldwide. Zac Yung-Chun Liu, Andy Chamberlin, Susanne Sokolow, Giulio De Leo, and Ton Ngo detail how to build and deploy deep learning applications to detect disease transmission hotspots, make interventions more efficient and scalable, and help governments and stakeholders make data-driven decisions.

Charles Chen is a senior software engineer at Google on the TensorFlow Extended (TFX) team. He previously worked on Google Cloud Dataflow and Apache Beam. Prior to Google, he earned his bachelor’s and master’s degrees in computer science from Stanford University.

Presentations

TFX: Production ML pipelines with TensorFlow Session

ML development often focuses on metrics, leaving deployment and scaling issues for later, so Robert Crowe takes a deep dive into TensorFlow Extended (TFX), the platform for production ML pipelines built around TensorFlow.

Wen-Heng (Jack) Chung is a PMTS software development engineer at AMD, where he’s been working on the ROCm stack since its early inception. He has experience in compiler frontend, optimization passes, and run time for high-level languages. His focus has been TensorFlow XLA.

Presentations

Modular convolution considered beneficial Session

Jack Chung, Chao Liu, and Daniel Lowell explore breaking convolution algorithms into modular pieces to be better fused with graph compilers such as accelerated linear algebra (XLA).

Joseph Paul Cohen is a postdoctoral fellow with Yoshua Bengio at Mila and the University of Montreal. Joseph leads the medical research group at Mila, focusing on computer vision, genomics, and clinical data. He holds a PhD in computer science and machine learning from the University of Massachusetts Boston. His research interests include healthcare, bioinformatics, machine learning, computer vision, ad hoc networking, and cybersecurity. Joseph received a US National Science Foundation Graduate Fellowship as well as an IVADO Postdoctoral Fellowship. He’s the director of the Institute for Reproducible Research, which is dedicated to improving the process of scientific research using technology.

Presentations

TensorFlow.js: Bringing machine learning to JavaScript Keynote

JavaScript is the most widely used programming language in the world, and with TensorFlow.js, you can bring the power of TensorFlow and machine learning to your JavaScript application. Sandeep Gupta and Joseph Paul Cohen introduce the TensorFlow.js library and showcase the amazing possibilities of combining machine learning with JavaScript-based web, mobile, and server-side applications.

Unlocking the power of machine learning for your JavaScript applications with TensorFlow Session

Kangyi Zhang, Brijesh Krishnaswami, Joseph Paul Cohen, and Brendan Duke dive into the TensorFlow.js ecosystem: how to bring an existing machine learning model into your JavaScript (JS) app, retrain the model with your data, and go beyond the browser to other JS platforms with live demos of models and featured apps (WeChat virtual plugin from L’Oréal and a radiology diagnostic tool from Mila).

Robert Crowe is a data scientist and TensorFlow Developer Advocate at Google with a passion for helping developers quickly learn what they need to be productive. He’s used TensorFlow since the very early days and is excited about how it’s evolving quickly to become even better than it already is. Previously, Robert deployed production ML applications and led software engineering teams for large and small companies, always focusing on clean, elegant solutions to well-defined needs. In his spare time, Robert sails, surfs occasionally, and raises a family.

Presentations

ML in production: Getting started with TensorFlow Extended (TFX) Tutorial

Putting together an ML production pipeline for training, deploying, and maintaining ML and deep learning applications is much more than just training a model. Robert Crowe outlines what's involved in creating a production ML pipeline and walks you through working code.

TFX: Production ML pipelines with TensorFlow Session

ML development often focuses on metrics, leaving deployment and scaling issues for later, so Robert Crowe takes a deep dive into TensorFlow Extended (TFX), the platform for production ML pipelines built around TensorFlow.

Wisdom d’Almeida is a Visiting Researcher at Mila, working with Yoshua Bengio on System 2 reasoning with deep learning models, based on the Consciousness Prior. His other research interests include grounded language learning and AI explainability. In the past, Wisdom worked on natural language understanding for common-sense reasoning, with application to areas such as healthcare—his master’s dissertation was about medical report generation with natural language explanations. Wisdom’s works in AI won a Government of India National Award in 2018. Previously, he interned at Google in San Francisco and demoed at Google Cloud Next 2018. Wisdom holds a master’s degree from KIIT in India and a BS from Université de Lomé in Togo, where he grew up. In his spare time, you can see him struggling with his vocal cords and his guitar strings.

Presentations

Diagnose and explain: Neural X-ray diagnosis with visual and textual evidence Session

Wisdom d'Almeida walks you through how to design an encoder-decoder model that takes a chest X-ray image as input and generates a radiology report with visual and textual explanations for interpretability. The model was designed with TensorFlow, trained on cloud TPUs, and deployed in the browser with TensorFlow.js. Wisdom provides a live demo of the model in action.

Jason (Jinquan) Dai is a senior principal engineer and CTO of big data technologies at Intel, where he is responsible for leading the global engineering teams (located in both Silicon Valley and Shanghai) on the development of advanced big data analytics (including distributed machine and deep learning), as well as collaborations with leading research labs (e.g., UC Berkeley AMPLab and RISELab). Jason is an internationally recognized expert on big data, cloud, and distributed machine learning; he is the program cochair of the O’Reilly AI Conference in Beijing, a founding committer and PMC member of Apache Spark, and the creator of BigDL, a distributed deep learning framework on Apache Spark.

Presentations

Building AI to play the FIFA video game using distributed TensorFlow Session

Shengsheng Huang and Jason Dai detail their experience and insights about building AI to play the FIFA video game using distributed TensorFlow.

Keshi Dai is a machine learning engineer at Spotify, working to build out ML infrastructure that supports hundreds of engineers and the growth of ML in products at Spotify. Previously, Keshi worked on the other side of ML as one of the engineers building out recommendation products at Spotify. He knows firsthand the challenges presented when productionizing ML and the benefit in using standard infrastructure in many parts of the workflow.

Presentations

Personalizing the infinite jukebox: ML and the TensorFlow ecosystem at Spotify Session

Josh Baer and Keshi Dai discuss how Spotify has historically used ML and explore how the introduction of TensorFlow and TFX in particular has standardized its ML workflows and improved its ability to bring ML-powered products to its users.

Shajan Dasan is a staff machine learning engineer at Twitter, where he works on the company’s prediction service, enabling different services to perform high-scale inference. Previously, he built distributed systems for information retrieval (web crawler and indexer for Bing), data storage (video, photo, and large-object store at Twitter), and video transcoding (video backend at Twitter) and worked on the first version of C# language, where he implemented the type safety verifier.

Presentations

Reliable, high-scale TensorFlow inference pipelines at Twitter Session

Twitter relies heavily on Scala and the Java Virtual Machine (JVM) and has deep in-house expertise in both. Shajan Dasan and Briac Marcatté detail the problems Twitter had to overcome to provide reliable, high-scale TensorFlow inference to its customer teams.

Pooya Davoodi is a senior software engineer at NVIDIA working on accelerating TensorFlow on NVIDIA GPUs. Previously, Pooya worked on Caffe2, Caffe, cuDNN, and other CUDA libraries.

Presentations

Accelerating training, inference, and ML applications on NVIDIA GPUs Tutorial

Maggie Zhang, Nathan Luehr, Josh Romero, Pooya Davoodi, and Davide Onofrio give you a sneak peek at software components from NVIDIA’s software stack so you can get the best out of your end-to-end AI applications on modern NVIDIA GPUs. They also examine features, tips, and tricks for optimizing your workloads across data loading, processing, training, inference, and deployment.

Giulio De Leo is a theoretical ecologist by training. He’s interested in investigating factors and processes driving the dynamics of natural and harvested populations and in understanding how to use this knowledge to inform practical management. He’s the scientific director of the newly established Center for Disease Ecology, Health, and the Environment at Stanford.

Presentations

Building deep learning applications using TensorFlow to combat schistosomiasis Session

Schistosomiasis is a debilitating parasitic disease that affects more than 250 million people worldwide. Zac Yung-Chun Liu, Andy Chamberlin, Susanne Sokolow, Giulio De Leo, and Ton Ngo detail how to build and deploy deep learning applications to detect disease transmission hotspots, make interventions more efficient and scalable, and help governments and stakeholders make data-driven decisions.

Jeff Dean is a Google senior fellow in Google’s Research Group, where he cofounded and leads the Google Brain team, Google’s deep learning and artificial intelligence research team. He and his collaborators are working on systems for speech recognition, computer vision, language understanding, and various other machine learning tasks. During his time at Google, Jeff has codesigned and implemented many generations of Google’s crawling, indexing, and query serving systems, major pieces of Google’s initial advertising and AdSense for content systems, and Google’s distributed computing infrastructure, including the MapReduce, BigTable, and Spanner systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. Jeff is a fellow of the ACM and the AAAS, a member of the US National Academy of Engineering, and a recipient of the ACM-Infosys Foundation Award in the Computing Sciences. He holds a PhD in computer science from the University of Washington, where he worked with Craig Chambers on whole-program optimization techniques for object-oriented languages, and a BS in computer science and economics from the University of Minnesota.

Presentations

Opening keynote Keynote

Jeff Dean explains why Google originally open-sourced TensorFlow almost four years ago. Join in to learn about TensorFlow's progress and how it can solve the problems you care about.

Victor Dibia is a research engineer at Cloudera’s Fast Forward Labs, where his work focuses on prototyping state-of-the-art machine learning algorithms and advising clients. He’s passionate about community work and serves as a Google Developer Expert in machine learning. Previously, he was a research staff member at the IBM TJ Watson Research Center. His research interests are at the intersection of human-computer interaction, computational social science, and applied AI. He’s a senior member of IEEE and has published research papers at conferences such as AAAI Conference on Artificial Intelligence and ACM Conference on Human Factors in Computing Systems. His work has been featured in outlets such as the Wall Street Journal and VentureBeat. He holds an MS from Carnegie Mellon University and a PhD from City University of Hong Kong.

Presentations

Handtrack.js: Building gesture-based interactions in the browser using TensorFlow.js Session

Victor Dibia explores the state of the art for machine learning in the browser using TensorFlow.js and dives into its use in the design of Handtrack.js—a library for prototyping real-time hand-tracking interactions in the browser.

Tulsee Doshi is the product lead for Google’s ML fairness effort, where she leads the development of Google-wide resources and best practices for developing more inclusive and diverse products. Previously, Tulsee worked on the YouTube recommendations team. She earned her BS in symbolic systems and MS in computer science from Stanford University.

Presentations

Build more inclusive TensorFlow pipelines with fairness indicators Session

ML continues to drive monumental change across products and industries. But as we expand ML to even more sectors and users, it's ever more critical to ensure that these pipelines work well for all users. Tulsee Doshi and Christina Greer announce the launch of Fairness Indicators, built on top of TensorFlow Model Analysis, which allows you to measure and mitigate algorithmic bias.

Brendan Duke is a machine learning researcher at ModiFace, where he worked on Nail Polish Try-On, acne and skin analysis, and on optimized conversion and deployment of research models to production hardware and software backends. He earned a master’s degree under the supervision of Graham Taylor at the University of Guelph, where he worked on machine learning for human activity recognition focused on multimodal interactions.

Presentations

Unlocking the power of machine learning for your JavaScript applications with TensorFlow Session

Kangyi Zhang, Brijesh Krishnaswami, Joseph Paul Cohen, and Brendan Duke dive into the TensorFlow.js ecosystem: how to bring an existing machine learning model into your JavaScript (JS) app, retrain the model with your data, and go beyond the browser to other JS platforms with live demos of models and featured apps (WeChat virtual plugin from L’Oréal and a radiology diagnostic tool from Mila).

Jared Duke is a software engineer on the Google Brain team leading performance efforts for TensorFlow Lite. Previously, he worked to improve mobile VR for Daydream and mobile web browsing for Chrome at Google.

Presentations

TensorFlow Lite: ML for mobile and IoT devices Keynote

TensorFlow Lite makes it really easy to execute machine learning on mobile phones and microcontrollers. Jared Duke and Sarah Sirajuddin explore on-device ML and the latest updates to TensorFlow Lite, including model conversion, optimization, hardware acceleration, and a ready-to-use model gallery. They also showcase demos and production use cases for TensorFlow Lite on phones and microcontrollers.
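
For context, a minimal model-conversion sketch (paths and shapes are placeholders): convert a SavedModel to TensorFlow Lite and sanity-check it with the Python interpreter before shipping it to a device.

```python
import numpy as np
import tensorflow as tf

# Convert a SavedModel to TensorFlow Lite (path is a placeholder).
converter = tf.lite.TFLiteConverter.from_saved_model("export/saved_model")
tflite_model = converter.convert()

# Run inference on the host with the TFLite interpreter to sanity-check the model.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```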

Yann Dupis is a machine learning engineer and privacy researcher at Dropout Labs. Previously, he was an actuary at the largest insurance company in Canada in reinsurance and then in research and development, and he managed a data science team at Deloitte in San Francisco, working with several Fortune 500 enterprises in the consumer and product industry. He holds an MASc in electrical and computer engineering from Institut Superieur d’Electronique de Paris. In his free time, you can find him surfing at Ocean Beach or indoor rock climbing in San Francisco.

Presentations

Privacy-preserving machine learning with TensorFlow Tutorial

Jason Mancuso and Yann Dupis demonstrate how to build and deploy privacy-preserving machine learning models using TF Encrypted, PySyft-TensorFlow, and the TensorFlow ecosystem.

Matthew Du Puy has been a software engineer at Arm for 8 years and is currently working on AI at the edge and IoT technology. Previously, he worked on Android, open source math libraries, and the Linux kernel. He’s also the second American to have climbed Annapurna, K2, and Everest.

Presentations

TensorFlow Lite: Solution for running ML on-device Session

Pete Warden, Nupur Garg, and Matthew Dupuy take you through TensorFlow Lite, TensorFlow’s lightweight cross-platform solution for mobile and embedded devices, which enables on-device machine learning inference with low latency, high performance, and a small binary size.

Kemal El Moujahid is the product director for TensorFlow at Google. He’s passionate about solving big problems with AI and building vibrant developer communities. Previously, Kemal led M, Facebook’s virtual assistant, the Messenger Platform, and Wit.ai. Kemal holds degrees from the École Polytechnique and Telecom Paris and an MBA from the Stanford Graduate School of Business.

Presentations

TensorFlow community announcements Keynote

Kemal El Moujahid divulges exciting developments for the TensorFlow community. Join in to learn how the TensorFlow team provides new and improved resources for developers and enterprises to succeed.

Úlfar Erlingsson is a research scientist on the Brain team at Google, working primarily on privacy and security of deep learning systems. Previously, Úlfar led computer security research at Google and was a researcher at Microsoft Research, associate professor at Reykjavik University, cofounder and CTO of the internet security startup GreenBorder Technologies, and director of privacy protection at deCODE genetics. Úlfar holds a PhD in computer science from Cornell University.

Presentations

TensorFlow Privacy: Learning with differential privacy for training data Session

When evaluating ML models, it can be difficult to tell the difference between what the models have generalized from their training data and what they have simply memorized. And that difference can be crucial in some ML tasks, such as when ML models are trained using sensitive data. Úlfar Erlingsson explains how to offer strong privacy guarantees for ML training data by using TensorFlow Privacy.
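
As a rough sketch of the idea (exact import paths vary across tensorflow_privacy versions), DP-SGD swaps in a differentially private optimizer and requires per-example losses; the hyperparameters below are illustrative only.

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasSGDOptimizer

# Differentially private SGD: gradients are clipped per example and noised.
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,        # per-example gradient clipping norm (illustrative)
    noise_multiplier=1.1,    # Gaussian noise added to clipped gradients (illustrative)
    num_microbatches=32,     # must evenly divide the batch size
    learning_rate=0.1,
)

# Unreduced (per-example) losses are required so each microbatch can be clipped.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```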

Pengfei Fan is a senior heterogeneous computing engineer at Alibaba Cloud. Previously, he worked on GPU compute architecture at NVIDIA. Pengfei is focused on designing and implementing virtualization and scheduling systems for heterogeneous infrastructure to accelerate AI applications and improve hardware use.

Presentations

HARP: An efficient and elastic GPU-sharing system Session

Pengfei Fan and Lingling Jin offer an overview of an efficient and elastic GPU-sharing system for users who do research and development with TensorFlow.

Yifei Feng is a software engineer on the TensorFlow team, where she focuses on open source tooling, issue management, and building and distributing TensorFlow. Previously, she worked on HoloLens and Xbox at Microsoft. Yifei holds a master’s degree from Stanford University and a BS from Franklin W. Olin College of Engineering.

Presentations

Building TensorFlow: Libraries and custom op Contributor Summit

TensorFlow is a huge project with many parts, both integrated and increasingly separate. Building all these components so they work together requires care. Jason Zaman and Yifei Feng demystify the main components and dependencies within TensorFlow and explore how to add custom functionality easily using custom ops.

Will Fletcher is a machine learning (ML) researcher at Datatonic, where he concentrates on the technological progress of the company. He contributes an understanding of the most advanced methods in ML, along with experience in research and an eye for innovation. His academic career began in chemistry at Oxford; he later moved to UCL for an MSc in computational statistics and ML. Project and research work aside, Will delivers training days for companies to help them get started with ML. He believes in continuous education and learning as an essential part of technical excellence. This passion extends into his personal life, where he plays with math, programming, and puzzles.

Presentations

Effective sampling methods within TensorFlow input functions Session

Many real-world machine learning applications require generative or reductive sampling of data. Laxmi Prajapat and William Fletcher demonstrate sampling techniques applied to training and testing data directly inside the input function using the tf.data API.
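
A minimal sketch of the kind of in-pipeline sampling the session covers, assuming a binary classification task with TFRecord files split by class (file lists and weights are hypothetical):

```python
import tensorflow as tf

def make_input_fn(pos_files, neg_files, batch_size=256):
    """Oversample the positive class directly inside the input pipeline."""
    pos = tf.data.TFRecordDataset(pos_files).repeat()
    neg = tf.data.TFRecordDataset(neg_files).repeat()

    # Draw ~50% of examples from each class regardless of the raw class ratio.
    balanced = tf.data.experimental.sample_from_datasets([pos, neg], weights=[0.5, 0.5])
    return balanced.shuffle(10_000).batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)
```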

Patricia Florissi is the global CTO for sales and a distinguished engineer at Dell EMC. Patricia holds more than 20 patents, has another 20 pending, and has published in Computer Networks and IEEE Proceedings. Patricia delivered a keynote at the O’Reilly Strata Data Conference in New York and a TEDx talk. She’s the creator, author, and narrator of the educational video series Dell EMC Big Ideas on emerging technologies and trends, which have received over 750,000 views (with some videos localized in 10 languages). She holds a PhD in computer science from Columbia University and graduated as valedictorian with an MBA from New York University’s Stern Business School. She earned an MS and BS in computer science from UFPE/Brazil.

Presentations

Cloud-nativing AI (sponsored by VMware) Session

Patricia Florissi identifies some intrinsic patterns in the anatomy of emerging digital fabrics, including those demanding agility in adapting to change, in dynamically creating connectivity meshes, and in scaling in size and complexity to unprecedented rates.

Nupur Garg is a software engineer on the TensorFlow Lite team at Google Brain. She holds an MS in computer science from Cal Poly in San Luis Obispo.

Presentations

TensorFlow Lite: Solution for running ML on-device Session

Pete Warden, Nupur Garg, and Matthew Dupuy take you through TensorFlow Lite, TensorFlow’s lightweight cross-platform solution for mobile and embedded devices, which enables on-device machine learning inference with low latency, high performance, and a small binary size.

Marina Rose Geldard (Mars) is a technologist from Down Under in Tasmania. Entering the world of technology relatively late as a mature-age student, she has found her place in the world: an industry where she can apply her lifelong love of mathematics and optimization. She compulsively volunteers at industry events, dabbles in research, and serves on the executive committee for her state’s branch of the Australian Computer Society (ACS) as well as the AUC. She’s writing Practical Artificial Intelligence with Swift for O’Reilly and working on machine learning projects to improve public safety through public CCTV cameras in her hometown of Hobart.

Presentations

Swift for TensorFlow in 3 hours Tutorial

Mars Geldard, Tim Nugent, and Paris Buttfield-Addison are here to prove Swift isn't just for app developers. Swift for TensorFlow provides the power of TensorFlow with all the advantages of Python (and complete access to Python libraries) and Swift—the safe, fast, incredibly capable open source programming language; Swift for TensorFlow is the perfect way to learn deep learning and Swift.

Aurélien Géron is a machine learning consultant at Kiwisoft and author of the best-selling O’Reilly book Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow. Previously, he led YouTube’s video classification team, was a founder and CTO of Wifirst, and was a consultant in a variety of domains: finance (JPMorgan and Société Générale), defense (Canada’s DOD), and healthcare (blood transfusion). He also published a few technical books (on C++, WiFi, and internet architectures), and he’s a lecturer at the Dauphine University in Paris. He lives in Singapore with his wife and three children.

Presentations

Natural language processing using transformer architectures Session

Transformer architectures have taken the field of natural language processing (NLP) by storm and pushed recurrent neural networks to the sidelines. Aurélien Géron examines transformers and the amazing language models based on them (e.g., BERT and GPT-2) and shows how you can use them in your projects.

Production ML pipelines with TensorFlow Extended (TFX) 2-Day Training

Aurélien Géron dives into creating production ML pipelines with TensorFlow Extended (TFX) and using TFX to move from ML coding to ML engineering. You'll walk through the basics and put your first pipeline together, then learn how to customize TFX components and perform deep analysis of model performance.

Josh Gordon is a developer advocate at Google AI and teaches applied deep learning at Columbia University and machine learning at Pace University. He has over a decade of machine learning experience to share. You can find him on Twitter as @random_forests.

Presentations

Introduction to TensorFlow 2.0: Easier for beginners and more powerful for experts Session

TensorFlow 2.0 is all about ease of use, and there has never been a better time to get started. Joshua Gordon walks you through three styles of model-building APIs, complete with code examples.
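
For reference, the three model-building styles look roughly like this in TensorFlow 2.0 (layer sizes are arbitrary):

```python
import tensorflow as tf

# 1) Sequential: a linear stack of layers.
sequential = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])

# 2) Functional: an explicit graph of layers, allowing multiple inputs/outputs.
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10)(x)
functional = tf.keras.Model(inputs, outputs)

# 3) Subclassing: full control of the forward pass in plain Python.
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        self.out = tf.keras.layers.Dense(10)

    def call(self, x):
        return self.out(self.hidden(x))

subclassed = MyModel()
```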

Martin Gorner is a developer advocate at Google, where he focuses on parallel processing and machine learning. Martin is passionate about science, technology, coding, algorithms, and everything in between. He spent his first engineering years in the Computer Architecture Group of STMicroelectronics, then spent the next 11 years shaping the nascent ebook market at Mobipocket, which later became the software part of the Amazon Kindle and its mobile variants. He’s the author of the successful TensorFlow Without a PhD series. He graduated from Mines ParisTech.

Presentations

Fast and lean data science with TPUs Session

Neural networks are now shipping in consumer-facing projects. Enterprises need to train and ship them fast, and data scientists want to waste less time on endless training. Martin Gorner explains how Google's tensor processing units (TPUs) are here to help.

Recurrent neural networks without a PhD Tutorial

Many problems deemed "impossible" only five years ago have now been solved by deep learning—from playing Go to recognizing what’s in an image to translating languages. Martin Gorner leads a hands-on introduction to recurrent neural networks and TensorFlow. Join in to discover what makes RNNs so powerful for time series analysis.
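
As a small, hedged example of the kind of model such a tutorial builds (not the tutorial's actual code), a stacked recurrent network for a univariate time series might look like this; the window length and layer sizes are placeholders:

```python
import tensorflow as tf

# A small recurrent model for sequence data, e.g. predicting the next value
# of a univariate time series from windows of 30 time steps.
model = tf.keras.Sequential([
    tf.keras.layers.GRU(32, return_sequences=True, input_shape=(30, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(windowed_dataset, epochs=10)  # windowed_dataset: a tf.data.Dataset of (window, target) pairs
```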

Christina Greer is a software engineer on the Google Brain team. She focuses specifically on machine learning fairness in the context of model evaluation and understanding, and scaling up solutions for ML fairness to support many teams across Google. Previously, Christina worked on building infrastructure to support diverse Google products: Google Assistant, Cloud Dataflow, and ads. Working in this area of ML fairness allows her to combine building infrastructure at Google scale with advancing efforts to avoid creating or reinforcing existing biases. Christina earned her BS in computer science from the University of Kansas.

Presentations

Build more inclusive TensorFlow pipelines with fairness indicators Session

ML continues to drive monumental change across products and industries. But as we expand ML to even more sectors and users, it's ever more critical to ensure that these pipelines work well for all users. Tulsee Doshi and Christina Greer announce the launch of Fairness Indicators, built on top of TensorFlow Model Analysis, which allows you to measure and mitigate algorithmic bias.

Gunhan Gulsoy is a software engineer at Google Brain, where he works on TensorFlow. He holds a PhD from the University of Florida.

Presentations

Modular TensorFlow Contributor Summit

TensorFlow has become very successful, and a rich community has grown around it. However, as the community grew, collaboration became more and more difficult. Gunhan Gulsoy provides an overview of Google's efforts to create a framework that empowers the TensorFlow community to build and distribute on top of TensorFlow.

Priya Gupta is a software engineer on the TensorFlow team at Google, where she works on making it easier to run TensorFlow in a distributed environment. She’s passionate about technology and education and wants machine learning to be accessible to everyone. Previously, she worked at Coursera and on the mobile ads team at Google.

Presentations

Performant, scalable models in TensorFlow 2.0 with tf.data, tf.function, and tf.distribute Session

Join Taylor Robie and Priya Gupta to learn how you can use tf.distribute to scale your machine learning model on a variety of hardware platforms ranging from commercial cloud platforms to dedicated hardware. You'll learn tools and tips to get the best scaling for your training in TensorFlow.
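
A minimal sketch of the tf.distribute pattern discussed here, assuming a single machine with one or more GPUs (the model and dataset are placeholders):

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all local GPUs and
# aggregates gradients with an all-reduce at each step.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(train_dataset, epochs=5)  # train_dataset: a batched tf.data.Dataset
```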

Sandeep Gupta is a product manager at Google, where he helps develop and drive the road map for TensorFlow—Google’s open source library and framework for machine learning—for supporting machine learning applications and research. His focus is on improving TensorFlow’s usability and driving adoption in the community and enterprise. Sandeep is excited about how machine learning and AI are transforming lives in a variety of ways, and he works with the Google team and external partners to help create powerful, scalable solutions for all. Previously, Sandeep was the technology leader for advanced imaging and analytics research and development at GE Global Research with specific emphasis on medical imaging and healthcare analytics.

Presentations

Introduction to machine learning in JavaScript using TensorFlow.js Tutorial

Join Sandeep Gupta and Brijesh Krishnaswami to learn how to build and deploy machine learning models using JavaScript, with official documentation, examples, and code labs from the TensorFlow team.

TensorFlow.js: Bringing machine learning to JavaScript Keynote

JavaScript is the most widely used programming language in the world, and with TensorFlow.js, you can bring the power of TensorFlow and machine learning to your JavaScript application. Sandeep Gupta and Joseph Paul Cohen introduce the TensorFlow.js library and showcase the amazing possibilities of combining machine learning with JavaScript-based web, mobile, and server-side applications.

Adam Hammond is a solution architect at Quantiphi, a deep learning and artificial intelligence solutions company, where he’s actively involved in developing and delivering solutions in the healthcare and insurance industries (both of which often call for interpretable models). Adam holds an MBA from Bentley University and an undergraduate degree in economics.

Presentations

Tagging cancer recurrence through machine learning Session

Asif Hasan and Adam Hammond dive into how TensorFlow and the Cloud Machine Learning Engine (CMLE) helped a healthcare provider develop a solution designed to predict the patient encounters associated with recurrence of cancer.

Hannes Hapke is a senior data scientist at SAP ConcurLabs. He’s been a machine learning enthusiast for many years and is a Google Developer Expert for machine learning. Hannes has applied deep learning to a variety of computer vision and natural language problems, but his main interest is in machine learning engineering and automating model workflows. Hannes is a coauthor of the deep learning book Natural Language Processing in Action, and he’s writing a book on building machine learning pipelines with TensorFlow Extended for O’Reilly. When he isn’t working on a deep learning project, you’ll find him outdoors running, hiking, or enjoying a good cup of coffee with a great book.

Presentations

Advanced model deployments with TensorFlow Serving Session

Hannes Hapke leads a deep dive into deploying TensorFlow models within minutes with TensorFlow Serving and optimizing your serving infrastructure for maximum throughput.
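
Once a model is exported and loaded by TensorFlow Serving, clients can hit its REST endpoint; a hedged sketch, assuming a model served under the name my_model on the default REST port 8501 with a matching input shape:

```python
import json
import requests  # third-party HTTP client

# Query a model served by TensorFlow Serving's REST API (default port 8501).
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 5.0]]}  # shape must match the model's signature

response = requests.post(url, data=json.dumps(payload))
print(response.json()["predictions"])
```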

Asif Hasan is the cofounder of Quantiphi, a category-defining applied AI and big data software and services provider. He has over 15 years of experience in the technology services, healthcare, and financial services industries, working on a variety of initiatives such as building applied AI and advanced analytics capabilities at a global scale, postmerger integration, supply-chain operations, business transformation, and professional services.

Previously, Asif led a global team of analytics and data science professionals focused on developing leading-edge analytical algorithms and solutions for business decision support for a multi-billion-dollar global healthcare services business including customer experience, service delivery, supply chain, and professional services. He holds an MBA from the McCallum Graduate School of Business at Bentley University and participated in executive education programs at Harvard Business School.

Presentations

Tagging cancer recurrence through machine learning Session

Asif Hasan and Adam Hammond dive into how TensorFlow and the Cloud Machine Learning Engine (CMLE) helped a healthcare provider develop a solution designed to predict the patient encounters associated with recurrence of cancer.

Christian Hidber is a software engineer at bSquare, where he applies machine learning to industrial hydraulics simulation, part of a product with 7,000 installations in 42 countries. He holds a PhD in computer algebra from ETH Zurich, which he followed with a postdoc at UC Berkeley, where he researched online data mining algorithms.

Presentations

EasyAgents: Reinforcement Learning for people who want to solve real-world problems

Reinforcement learning can be a game changer when you don't have training data but are instead able to simulate an environment. Unfortunately, the theory of reinforcement learning is complex, and the vast number of algorithms in the area adds to the burden of getting started. EasyAgents eases that burden by making it a one-liner to run a reinforcement learning algorithm on your problem.

Khoa Ho is a solutions architect at NVIDIA, working on natural language processing (NLP) applications and general deep learning at scale. He regularly runs and troubleshoots multinode DL workloads on both cloud and on-premises GPU clusters.

Presentations

Running TensorFlow at scale on GPUs (sponsored by NVIDIA) Session

Neil Truong, Kari Briski, and Khoa Ho walk you through their experience running TensorFlow at scale on GPU clusters like the DGX SuperPod and the Summit supercomputer. They explore the design of these large-scale GPU systems and detail how to run TensorFlow at scale using BERT and AI plus high-performance computing (HPC) applications as examples.

Andy Hock is the director of product for Cerebras Systems, an AI hardware startup out to accelerate deep learning and change compute forever. He has 10 years of experience in product management, technical program management, and enterprise business development; over 15 years of experience in research, algorithm development, and data analysis for image processing; and 5 years’ experience in applied machine learning and AI. Previously, Andy was the product manager of data and analytics for Terra Bella at Google, where he led the development of machine learning-powered data products from satellite imagery; was senior director for advanced technology programs at Skybox Imaging (which became Terra Bella following acquisition by Google in 2014); and was a senior program manager and senior scientist at Areté. He has a PhD in geophysics and space physics from the University of California, Los Angeles, and a BA in astronomy-physics from Colgate University.

Presentations

TensorFlow on the Cerebras Wafer-Scale Engine Session

Manjunath Kudlur and Andy Hock describe the software that compiles TensorFlow to the recently announced Cerebras Wafer-Scale Engine (WSE) for deep learning.

Keqiu Hu is a staff software engineer at LinkedIn, where he’s working on LinkedIn’s big data platforms, primarily focusing on TensorFlow and Hadoop.

Presentations

Scaling TensorFlow at LinkedIn Session

Keqiu Hu, Jonathan Hung, and Abin Shahab explore the challenges LinkedIn encountered and resolved to scale TensorFlow.

Shengsheng (Shane) Huang is a software architect at Intel and an Apache Spark committer and PMC member, leading the development of large-scale analytical applications and infrastructure on Spark at Intel. Her area of focus is big data and distributed machine learning, especially deep (convolutional) neural networks. Previously, at the National University of Singapore (NUS), her research interests were large-scale vision data analysis and statistical machine learning.

Presentations

Building AI to play the FIFA video game using distributed TensorFlow Session

Shengsheng Huang and Jason Dai detail their experience and insights about building AI to play the FIFA video game using distributed TensorFlow.

Jonathan Hung is a senior software engineer on the Hadoop development team at LinkedIn.

Presentations

Scaling TensorFlow at LinkedIn Session

Keqiu Hu, Jonathan Hung, and Abin Shahab explore the challenges LinkedIn encountered and resolved to scale TensorFlow.

Hamel Husain is a data scientist at GitHub who is focused on creating the next generation of developer tools powered by machine learning. His work involves extensive use of natural language and deep learning techniques to extract features from code and text. Previously, Hamel was a data scientist at Airbnb, where he worked on growth marketing, and at DataRobot, where he helped build automated machine learning tools for data scientists. Hamel can be reached on Twitter.

Presentations

Automating your developer workflow on GitHub with TensorFlow Session

Software development is central to machine learning, regardless of whether you're prototyping in a Jupyter notebook or building a service for millions of users. Hamel Husain, Omoju Miller, Michał Jastrzębski, and Jeremy Lewi show you how to use a freely available, natural language dataset to build practical applications for anyone who writes software using TensorFlow.

Bhushan Jagyasi is an AI research scientist at Accenture, leading the speech intelligence area within Accenture’s Artificial Intelligence Group in India. He also heads the AI solutions for the chemicals, natural resources, energy, and utilities industries. Previously, Bhushan was with TCS Research Labs, working on agritech initiatives. He’s keenly interested in contributing to AI for Good and innovating for societal benefit. Bhushan holds a PhD from IIT Bombay, and he was a recipient of MIT’s India TR35 Young Innovator award in 2010.

Presentations

TensorFlow business case study showcase Session

Deepak Bhadauria, Saurabh Mishra, Upendra Sahu, Bhushan Jagyasi, David Beck, and Rahul Sarda share four real-world TensorFlow success stories from the banking, insurance, med tech, and nonprofit industries.

Ankit Jain is a senior research scientist at Uber AI Labs, the machine learning research arm of Uber. His work primarily involves the application of deep learning methods to a variety of Uber’s problems, ranging from forecasting and food delivery to self-driving cars. Previously, he worked in a variety of data science roles at Bank of America, Facebook, and several startups. He coauthored a book on machine learning titled TensorFlow Machine Learning Projects. Additionally, he’s been a featured speaker at many of the top AI conferences and universities across the US, including UC Berkeley and the O’Reilly AI Conference, among others. He earned his MS from UC Berkeley and BS from IIT Bombay (India).

Presentations

Enhance recommendations in Uber Eats with graph convolutional networks Session

Ankit Jain and Piero Molino detail how to generate better restaurant and dish recommendations in Uber Eats by learning entity embeddings using graph convolutional networks implemented in TensorFlow.
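
This is not the Uber Eats production model, but as a conceptual sketch, a single graph convolution layer in TensorFlow can be written roughly as follows (shapes and activation are illustrative):

```python
import tensorflow as tf

class GraphConv(tf.keras.layers.Layer):
    """A minimal graph convolution: H' = act(A_norm @ H @ W).

    `a_norm` is a (normalized) adjacency matrix and `h` holds node features;
    each node aggregates its neighbors' features before a shared dense transform.
    """

    def __init__(self, units, activation="relu"):
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, use_bias=False)
        self.activation = tf.keras.activations.get(activation)

    def call(self, inputs):
        a_norm, h = inputs  # [N, N] adjacency, [N, F] node features
        return self.activation(tf.matmul(a_norm, self.dense(h)))

# Stacking two such layers yields embeddings that mix information from 2-hop neighborhoods.
```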

Michał Jastrzębski is a staff data engineer at GitHub, where he builds machine learning infrastructure for internal use. Previously, he was an architect at Intel’s Open Source Technology Center. Michał has extensive experience with cloud technologies like OpenStack and Kubernetes, both as an operator and a contributor. As a former leader of OpenStack Kolla, he managed a community of more than 200 people and almost 40 companies. Michał has been involved with machine-learning-on-Kubernetes communities like Kubeflow.

Presentations

Automating your developer workflow on GitHub with TensorFlow Session

Software development is central to machine learning, regardless of whether you're prototyping in a Jupyter notebook or building a service for millions of users. Hamel Husain, Omoju Miller, Michał Jastrzębski, and Jeremy Lewi show you how to use a freely available, natural language dataset to build practical applications for anyone who writes software using TensorFlow.

Tony Jebara is the head of machine learning and vice president of engineering at Spotify. Previously, Tony was the director of machine learning at Netflix, where he launched improvements to its personalization algorithms. He’s also a professor (on leave) at Columbia University and holds a PhD from MIT.

Presentations

Personalization of Spotify Home and TensorFlow Keynote

Tony Jebara explains how Spotify improved user satisfaction with Home by building various components of the TFX ecosystem into its core ML infrastructure.

Lingling Jin is a senior manager at Alibaba, where she focuses on heterogeneous infrastructures to accelerate AI applications and improve hardware use. Previously, she was part of NVIDIA’s Compute Architecture Group. She earned her PhD at the University of California, Riverside.

Presentations

HARP: An efficient and elastic GPU-sharing system Session

Pengfei Fan and Lingling Jin offer an overview of an efficient and elastic GPU-sharing system for users who do research and development with TensorFlow.

Pengchong Jin is a senior software engineer on the TensorFlow-E2E team at Google Brain, focusing on computer vision model development. He works closely with various autonomous driving companies on delivering end-to-end object detection solutions with TPU training and TensorRT inference. Previously, he worked on developing the internal object detector that serves various Google products, including Photos, Lens, and Image Search.

Presentations

Train and serve object detectors for autonomous driving Session

Pengchong Jin walks you through a typical development workflow on GCP for training and deploying an object detector to a self-driving car. He demonstrates how to train the state-of-the-art RetinaNet model fast using Cloud TPUs and scale up the model effectively on Cloud TPU pods. Pengchong also explains how to export a TensorRT-optimized model for GPU inference.

Da-Cheng Juan is a senior software engineer at Google Research, exploring graph-based machine learning, deep learning, and their real-world applications. Da-Cheng was the recipient of the 2012 Intel PhD Fellowship. His current research interests span across semi-supervised learning, convex optimization, and large-scale deep learning. He received his PhD from the Department of Electrical and Computer Engineering and his master’s degree from the Machine Learning Department, both at Carnegie Mellon University. Da-Cheng has published more than 30 research papers in the related fields; in addition to research, he also enjoys algorithmic programming and has won several awards in major programming contests.

Presentations

Neural structured learning in TensorFlow Session

Da-Cheng Juan and Sujith Ravi explain neural structured learning (NSL), an easy-to-use TensorFlow framework that both novice and advanced developers can use for training neural networks with structured signals.

Megan Kacholia is a vice president of engineering within Google’s Research organization. Her team’s work spans machine learning in research as well as production, including products such as TensorFlow. Her passion is building effective teams and addressing barriers to help Googlers do their best work. Previously, Megan had a long tenure in Google’s Ads organization, where she ran the serving system for Google’s DisplayAds business.

Presentations

The latest from TensorFlow Keynote

Megan Kacholia outlines the latest TensorFlow product announcements and updates. You'll learn more about how Google's latest innovations provide a comprehensive ecosystem of tools for developers, enterprises, and researchers who want to push state-of-the-art machine learning and build scalable ML-powered applications.

Vikrant Kahlir is a solution architect at Amazon Web Services.

Presentations

TensorFlow on AWS 2-Day Training

Amazon Web Services (AWS) offers a breadth and depth of services to easily build, train, and deploy TensorFlow models. Shashank Prasanna, Vikrant Kahlir, and Rama Thamman give you hands-on experience working with these services.

Ujval Kapasi is the vice president of deep learning software at NVIDIA, where he works on software, algorithms, and tools for deep learning, machine learning, and HPC.

Presentations

Accelerating TensorFlow for research and deployment (sponsored by NVIDIA) Keynote

Machine learning on NVIDIA GPUs and systems allows developers to solve problems that seemed impossible just a few years ago. Ujval Kapasi explains how software and hardware advances on GPUs impact development efforts across the community, both today and in the future.

Al Kari is CEO and principal consultant at Manceps, where he leads the company’s mission to augment human capabilities with machine intelligence, with a focus on blending machine learning and artificial intelligence with cloud computing and big data technologies. Al is a Google Developer Expert (GDE) in machine learning, organizer of the TensorFlow-Northwest and OpenStack Northwest user groups, and a strong advocate for open source AI and cloud technologies. Previously, Al was a global cloud evangelist at Microsoft, where he helped top-tier ISV partners onboard on the Microsoft Azure Platform. Al started his career in the mid-’90s as a software architect by founding Softwarehouse overseas before moving to the United States. He later held product and services leadership roles at Dell, where he helped build the company’s virtualization and cloud computing services portfolio; cofounded DetaCloud, a boutique OpenStack engineering powerhouse; and was a principal cloud architect at Red Hat, where he was responsible for helping customers build enterprise-ready cloud infrastructure. A frequent speaker at major industry conventions, Al has been an outspoken advocate for building the future of open artificial intelligence and cloud technologies in support of academic, industrial, and scientific development. He is a standing member of the Cloud Advisory Council, the Linux Professional Institute, and the OpenStack Foundation.

Presentations

Don’t beat the market; beat the bots: Adversarial networks in finance Session

Automated investing has brought an immense amount of stability to the market, but it has also brought predictability. Garrett Lander and Al Kari examine if an adversarial network can game the behavior of automated investors by learning the patterns in market activity to which they are most vulnerable.

Konstantinos (Gus) Katsiapis is the über tech lead of TensorFlow Extended (TFX), an end-to-end machine learning platform based on TensorFlow. He’s worked on Sibyl, a massive-scale machine learning system (precursor to TensorFlow) widely used at Google, and was an avid user of machine learning infrastructure while leading the mobile display ads quality machine learning team at Google. Previously, Gus gathered knowledge and experience at Amazon, Calian, the Ontario Ministry of Finance, Independent Electricity System Operator, and Computron. He holds a master’s degree in computer science with a specialization in artificial intelligence from Stanford University and a bachelor’s degree in mathematics, majoring in computer science and minoring in economics, from the University of Waterloo.

Presentations

TFX: An end-to-end ML platform for everyone Keynote

Konstantinos Katsiapis and Anusha Ramesh offer an overview of TensorFlow Extended (TFX), which has evolved as the ML platform solution within Alphabet over the past decade.

Meenakshi Kaushik is a product manager for Cisco Container Platform, an enterprise-grade Kubernetes offering that supports GPU and Kubeflow for hybrid AI and ML workloads. Meenakshi is interested in the AI and ML space and is excited to see how the technology can enhance human well-being and productivity.

Presentations

Hyperparameter tuning for TensorFlow using Katib and Kubeflow Tutorial

Neelima Mukiri and Meenakshi Kaushik demonstrate how to automate hyperparameter tuning for a given dataset using Katib and Kubeflow. Katib can be easily run on a laptop or in a distributed production deployment, and Katib jobs and configuration can be easily ported to any Kubernetes cluster.

Nicolas Kowalski is a senior software engineer at Criteo. His work focuses on developing the platforms and tools that are used by all of Criteo to create any kind of machine learning model, train them, serve them online, and monitor their behavior. Previously, Nicolas earned a PhD in applied mathematics from Paris University Pierre and Marie Curie and spent some time in academia, where he published eight papers in international journals and conferences, including the best paper at the 2012 International Meshing Roundtable.

Presentations

How Criteo optimized and sped up its TensorFlow models by 10x and served them under 5 ms Session

Criteo's real-time bidding of ad spaces requires its TensorFlow (TF) models to make online predictions in less than 5 ms. Nicolas Kowalski and Axel Antoniotti explain why Criteo moved away from high-level APIs and rewrote its models from scratch, reimplementing cross-features and hashing functions using low-level TF operations in order to factorize as much as possible all TF nodes in its model.
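This is not Criteo’s actual implementation, just a hedged sketch of the general idea: building a hashed cross-feature from low-level string ops instead of the high-level feature-column APIs. The feature names and bucket count below are made up.

```python
import tensorflow as tf

NUM_BUCKETS = 100_000  # illustrative bucket count, not Criteo's

def hashed_cross(user_id, partner_id, num_buckets=NUM_BUCKETS):
    # Build the cross-feature by joining the raw strings, then hash it
    # into a fixed number of buckets with a low-level op.
    crossed = tf.strings.join([user_id, partner_id], separator="_X_")
    return tf.strings.to_hash_bucket_fast(crossed, num_buckets)

ids = hashed_cross(tf.constant(["u1", "u2"]), tf.constant(["p9", "p9"]))
embedding_table = tf.Variable(tf.random.normal([NUM_BUCKETS, 16]))
vectors = tf.nn.embedding_lookup(embedding_table, ids)  # shape (2, 16)
```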

Brijesh Krishnaswami is a technical program manager on the TensorFlow team at Google. He has a master’s degree in computer science and two decades of experience in software development at various technology companies. You can find him on LinkedIn.

Presentations

Introduction to machine learning in JavaScript using TensorFlow.js Tutorial

Join Sandeep Gupta and Brijesh Krishnaswami to learn how to build and deploy machine learning models using JavaScript, with official documentation, examples, and code labs from the TensorFlow team.

Unlocking the power of machine learning for your JavaScript applications with TensorFlow Session

Kangyi Zhang, Brijesh Krishnaswami, Joseph Paul Cohen, and Brendan Duke dive into the TensorFlow.js ecosystem: how to bring an existing machine learning model into your JavaScript (JS) app, retrain the model with your data, and go beyond the browser to other JS platforms with live demos of models and featured apps (WeChat virtual plugin from L’Oréal and a radiology diagnostic tool from Mila).

Manjunath Kudlur is the technical lead for Cerebras Systems’s compiler software project, mapping neural networks to a revolutionary new deep learning accelerator with a wafer-scale processor. He’s an engineer with expertise in compilers, machine learning, and parallel computing. Previously, he worked at Google in Brain on TensorFlow and at NVIDIA on compilers and programming languages research. Manjunath has a PhD in computer science and engineering from the University of Michigan.

Presentations

TensorFlow on the Cerebras Wafer-Scale Engine Session

Manjunath Kudlur and Andy Hock describe the software that compiles TensorFlow to the recently announced Cerebras Wafer-Scale Engine (WSE) for deep learning.

Valliappa Lakshmanan is tech lead at Google Cloud focusing on data and machine learning. He’s the author of Data Science on GCP (O’Reilly), coauthor of BigQuery: The Definitive Guide (O’Reilly), and an instructor for multiple Coursera courses.

Presentations

End-to-end machine learning with TensorFlow 2.0 on Google Cloud Platform 2-Day Training

Valliappa Lakshmanan shows you how to use Google Cloud Platform to design and build machine learning (ML) models and how to deploy them into production. You'll walk through the process of building a complete machine learning pipeline from ingest and exploration to training, evaluation, deployment, and prediction.

Garrett Lander is a machine learning architect at Manceps, an ML consulting agency based out of Portland, Oregon. Garrett works with clients ranging from those taking their first steps into automation to seasoned ML practitioners looking to optimize their production models. Garrett is especially interested in the growing areas of AI penetration testing and ethics, as well as the effort to build models that improve on human decision making without inheriting its shortcomings.

Presentations

Don’t beat the market; beat the bots: Adversarial networks in finance Session

Automated investing has brought an immense amount of stability to the market, but it has also brought predictability. Garrett Lander and Al Kari examine if an adversarial network can game the behavior of automated investors by learning the patterns in market activity to which they are most vulnerable.

Chris Lattner is a distinguished engineer at Google leading the TensorFlow infrastructure and Swift for TensorFlow teams. His work cross-cuts a wide range of compiler, runtime, and other system infrastructure projects for high-performance machine learning accelerators, including CPUs, GPUs, TPUs, and mobile accelerators. Chris is the founder and chief architect of the LLVM and Clang projects and creator of the Swift programming language, and he drives the MLIR project at Google. He also serves on the LLVM Foundation’s board of directors and the Swift core team.

Presentations

MLIR: Accelerating AI Keynote

MLIR is TensorFlow's open source machine learning compiler infrastructure that addresses the complexity caused by growing software and hardware fragmentation and makes it easier to build AI applications. Chris Lattner and Tatiana Shpeisman explain how MLIR is solving this growing hardware and software divide and how it impacts you in the future.

Vitaly Lavrukhin is a senior applied research scientist at NVIDIA, working on deep learning algorithms for speech and language technologies. Previously, he conducted research to solve computer vision problems with deep learning methods at Samsung R&D Institute Russia.

Presentations

Speech recognition with OpenSeq2Seq Session

OpenSeq2Seq provides a large set of state-of-the-art models and building blocks for automatic speech recognition (Jasper, wav2letter, DeepSpeech2), speech synthesis (Centaur, Tacotron2), and natural language processing. Jason Li and Vitaly Lavrukhin explore large vocabulary speech recognition and speech command recognition tasks to solve these problems with OpenSeq2Seq.

Joohoon Lee is a principal product manager for AI inference software at NVIDIA. Previously, he led the automotive deep learning software solutions team focusing on the production deployment of neural networks in DRIVE AGX platform using TensorRT. His expertise includes quantization, sparsity optimization, compilers, GPU, and AI accelerator architecture design. Joohoon received his BS and MS in electrical and computer engineering from Carnegie Mellon University.

Presentations

Faster inference in TensorFlow 2.0 with TensorRT Session

TensorFlow 2.0 offers high performance for deep learning inference through a simple API. Siddharth Sharma and Joohoon Lee explain how to optimize an app using TensorRT with the new Keras APIs in TensorFlow 2.0. You'll learn tips and tricks to get the highest performance possible on GPUs and see examples of debugging and profiling tools by NVIDIA and TensorFlow.
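For context, converting a SavedModel with TensorFlow’s built-in TensorRT integration looks roughly like the sketch below; the directory paths are placeholders, and a TensorRT-enabled build of TensorFlow is assumed.

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Input and output directories are placeholders for a real SavedModel.
converter = trt.TrtGraphConverterV2(input_saved_model_dir="my_saved_model")
converter.convert()                   # rewrite supported subgraphs as TensorRT engines
converter.save("my_saved_model_trt")  # export the optimized SavedModel
```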

Jeremy Lewi is a cofounder and lead engineer for the Kubeflow project at Google, an effort to help developers and enterprises deploy and use ML cloud natively everywhere. He’s been building on Kubernetes since its inception, starting with Dataflow, then moving on to Cloud ML Engine, and now Kubeflow.

Presentations

Automating your developer workflow on GitHub with TensorFlow Session

Software development is central to machine learning, regardless of whether you’re prototyping in a Jupyter notebook or building a service for millions of users. Hamel Husain, Omoju Miller, Michał Jastrzębski, and Jeremy Lewi show you how to use a freely available natural language dataset to build practical applications for anyone who writes software using TensorFlow.

Jason (Jing Yao) Li is a deep learning software engineer on the AI applications team at NVIDIA. He earned his BASc and MScAC at the University of Toronto working with Roger Grosse and Jimmy Ba. His research focus is on sequence-to-sequence models and speech, specifically in the domains of speech synthesis and speech recognition.

Presentations

Speech recognition with OpenSeq2Seq Session

OpenSeq2Seq provides a large set of state-of-the-art models and building blocks for automatic speech recognition (Jasper, wav2letter, DeepSpeech2), speech synthesis (Centaur, Tacotron2), and natural language processing. Jason Li and Vitaly Lavrukhin explore large vocabulary speech recognition and speech command recognition tasks to solve these problems with OpenSeq2Seq.

Tommy Li is a software developer at IBM focusing on cloud, container, and infrastructure technology. He’s worked on various developer journeys that provide use cases on cloud-computing solutions, such as Kubernetes, microservices, and hybrid cloud deployments. He’s passionate about machine learning and big data.

Presentations

Running TFX end to end in hybrid clouds leveraging Kubeflow Pipelines Session

TensorFlow Extended (TFX) is an end-to-end platform for deploying production ML pipelines. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system. Animesh Singh, Pete MacKinnon, and Tommy Li demonstrate how to run TFX in hybrid cloud environments.

Mike Liang is a senior product manager for TensorFlow at Google Research. He has over a decade of experience in machine learning and digital advertising from leading Google’s Asia Pacific display ads product strategy to building big data startups in China. Mike holds a PhD from Stanford University and a BS from the University of California, San Diego.

Presentations

TensorFlow Hub: The platform to share and discover pretrained models for TensorFlow Keynote

Machine learning is a difficult skill to master. Many developers use TensorFlow today, yet the majority of software developers have yet to learn machine learning. Mike Liang takes you through TensorFlow Hub, designed to help developers make better and faster use of machine learning in their products.
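To give a flavor of what using a Hub module looks like, here is a minimal transfer-learning sketch with hub.KerasLayer; the module handle is a placeholder (browse tfhub.dev for real ones), and the five-class classification head is arbitrary.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Placeholder handle; substitute a real image feature vector module from tfhub.dev.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/<publisher>/<model>/<version>",
    trainable=False, input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    feature_extractor,                                  # reused, pretrained features
    tf.keras.layers.Dense(5, activation="softmax")])    # small task-specific head
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```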

Chao Liu is a software developer at AMD, where he works on the open source high-performance deep learning library MIOpen. His interests include the development of parallel algorithms and numerical methods for a variety of applications, including deep learning and physics-based simulation. Previously, he developed techniques for computational fluid dynamics, finite element analysis, iterative solvers, and mesh generation on shared and distributed-memory machines.

Presentations

Modular convolution considered beneficial Session

Jack Chung, Chao Liu, and Daniel Lowell explore breaking convolution algorithms into modular pieces to be better fused with graph compilers such as accelerated linear algebra (XLA).

Zac Yung-Chun Liu is a machine learning engineer and data scientist. He specializes in machine learning, artificial intelligence applications, remote sensing data processing, biological data processing, and geospatial analysis. His deep learning work and open source projects involve computer vision, image classification, segmentation, object detection, and natural language processing related to disease ecology and shark conservation.

Presentations

Building deep learning applications using TensorFlow to combat schistosomiasis Session

Schistosomiasis is a debilitating parasitic disease that affects more than 250 million people worldwide. Zac Yung-Chun Liu, Andy Chamberlin, Susanne Sokolow, Giulio De Leo, and Ton Ngo detail how to build and deploy deep learning applications to detect disease transmission hotspots, make interventions more efficient and scalable, and help governments and stakeholders make data-driven decisions.

Ben Lorica is the chief data scientist at O’Reilly. Ben has applied business intelligence, data mining, machine learning, and statistical analysis in a variety of settings, including direct marketing, consumer and market research, targeted advertising, text mining, and financial engineering. His background includes stints with an investment management company, internet startups, and financial services.

Presentations

Thursday keynote welcome Keynote

TensorFlow World program chairs Ben Lorica and Edd Wilder-James welcome you to the second day of keynotes.

Thursday opening welcome Keynote

Program chairs Ben Lorica and Edd Wilder-James open the second day of keynotes.

Wednesday keynote welcome Keynote

TensorFlow World program chairs Ben Lorica and Edd Wilder-James welcome you to the first day of keynotes.

Wednesday opening welcome Keynote

Program chairs Edd Wilder-James and Ben Lorica open the first day of keynotes.

Daniel Lowell is the team lead and software architect for MIOpen, AMD’s deep learning GPU kernels library. Previously, he worked at AMD Research in the high-performance computing (HPC) arena, on compiler technology and reliability. His interests include deep learning, brain-machine interfaces, autocode generation, and HPC.

Presentations

Modular convolution considered beneficial Session

Jack Chung, Chao Liu, and Daniel Lowell explore breaking convolution algorithms into modular pieces to be better fused with graph compilers such as accelerated linear algebra (XLA).

Nathan Luehr is a senior developer technology engineer at NVIDIA, where he works to accelerate deep learning frameworks. His background is in theoretical chemistry. He holds a doctoral degree from Stanford University, where he worked to accelerate electronic structure calculations on GPUs.

Presentations

Accelerating training, inference, and ML applications on NVIDIA GPUs Tutorial

Maggie Zhang, Nathan Luehr, Josh Romero, Pooya Davoodi, and Davide Onofrio give you a sneak peek at software components from NVIDIA’s software stack so you can get the best out of your end-to-end AI applications on modern NVIDIA GPUs. They also examine features and tips and tricks to optimize your workloads right from data loading, processing, training, inference, and deployment.
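The NVIDIA-specific tooling is best seen in the tutorial itself, but as a generic TensorFlow-side example of the data-loading end of such a pipeline, a tf.data input function usually parallelizes parsing and overlaps host work with device work; the record file and parser below are hypothetical.

```python
import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE

def parse(record):
    # Hypothetical parser; a real pipeline would decode images, labels, etc.
    return tf.io.parse_tensor(record, out_type=tf.float32)

dataset = (tf.data.TFRecordDataset(["train-00000.tfrecord"])  # placeholder file
           .map(parse, num_parallel_calls=AUTOTUNE)           # parallel CPU parsing
           .batch(256)
           .prefetch(AUTOTUNE))                               # overlap with accelerator work
```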

Maxim Lukiyanov is a principal program manager on the Azure Machine Learning team at Microsoft. He works on large-scale deep learning training.

Presentations

Hands-on deep learning with TensorFlow 2.0 and Azure 2-Day Training

Maxim Lukiyanov, Vaidyaraman Sambasivam, Mehrnoosh Sameki, and Santhosh Pillai demonstrate how AzureML helps data scientists be more productive when developing TensorFlow models for production. You’ll see the whole model development lifecycle, from training and deployment to ML ops and model interpretability.

Pete MacKinnon is a principal software engineer in the AI Center of Excellence at Red Hat. He’s actively involved in the open source Kubeflow project to bring TensorFlow machine learning workloads to container environments (Kubernetes and OpenShift).

Presentations

Running TFX end to end in hybrid clouds leveraging Kubeflow Pipelines Session

TensorFlow Extended (TFX) is an end-to-end platform for deploying production ML pipelines. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system. Animesh Singh, Pete MacKinnon, and Tommy Li demonstrate how to run TFX in hybrid cloud environments.

Jason Mancuso is a research scientist at Dropout Labs, the founder of Cleveland AI, and an active member of the AI Village at DEF CON and the OpenMined community. He works on novel methods of making machine learning more performant for privacy-preserving techniques, most notably by contributing to the TF Encrypted project. He’s worked on a variety of safety and security problems, including safe reinforcement learning, secure and verifiable agent auditing, and neural network robustness. His work with the Cleveland Clinic established a state-of-the-art blood test classification and demonstrated that machine learning can virtually eliminate the problem of medical malpractice due to contaminated blood samples.

Presentations

Privacy-preserving machine learning with TensorFlow Tutorial

Jason Mancuso and Yann Dupis demonstrate how to build and deploy privacy-preserving machine learning models using TF Encrypted, PySyft-TensorFlow, and the TensorFlow ecosystem.

Briac Marcatté is a staff machine learning engineer at Twitter.

Presentations

Reliable, high-scale TensorFlow inference pipelines at Twitter Session

Twitter relies heavily on Scala and the Java Virtual Machine (JVM) and has deep in-house expertise in both. Shajan Dasan and Briac Marcatté detail the problems Twitter had to overcome to provide reliable, high-scale TensorFlow inference to its customer teams.

Margaret Maynard-Reid is a machine learning engineer and Google Developer Expert (GDE) at Tiny Peppers, and she’s a contributor to the open source ML framework TensorFlow. She writes blog posts and speaks at conferences about on-device ML, deep learning, computer vision, TensorFlow, and Android. Margaret is passionate about community building and helping others get started with AI and ML. She’s a community leader of GDG Seattle and Seattle Data/Analytics/Machine Learning Meetup.

Presentations

Deep learning for Android with TensorFlow Session

Margaret Maynard-Reid walks you through the end-to-end path from tf.Keras to TensorFlow Lite to Android, with or without ML Kit.

Town Hall: Contributors’ perspectives Contributor Summit

Join in to hear about the rapid growth of open source ML communities and a future road map for community building. Learn best practices, use cases, and how to develop metrics for your project by learning from other contributors to TensorFlow.

Clemens Mewald is the director of product management, machine learning and data science at Databricks, where he leads the product team. Previously, he spent four years on the Google Brain team building ML infrastructure for Google, Google Cloud, and open source users, including TensorFlow and TensorFlow Extended (TFX). Clemens holds an MSc in computer science from UAS Wiener Neustadt, Austria, and an MBA from MIT Sloan.

Presentations

Managing the full deployment lifecycle of TensorFlow models with the MLflow Model Registry (sponsored by Databricks) Session

Clemens Mewald offers an overview of the latest component of MLflow, a model registry that provides a collaborative hub where teams can share ML models, work together from experimentation to online testing and production, integrate with approval and governance workflows, and monitor ML deployments and their performance.
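As a hedged sketch of what registering an already-logged model might look like with the MLflow API (the run ID and model name are placeholders, and the registry calls may differ slightly by MLflow release):

```python
import mlflow
from mlflow.tracking import MlflowClient

# Assume a TensorFlow model was already logged during a tracked run.
model_uri = "runs:/<run_id>/model"               # placeholder run ID
result = mlflow.register_model(model_uri, "demo_tf_model")

# Promote the new version through the registry's lifecycle stages.
client = MlflowClient()
client.transition_model_version_stage(
    name="demo_tf_model", version=result.version, stage="Staging")
```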

Omoju Miller is a machine learning engineer with GitHub. Previously, she co-led the nonprofit investment in computer science education for Google and served as a volunteer advisor to the Obama administration’s White House Presidential Innovation Fellows. She’s a member of the World Economic Forum Expert Network in AI.

Presentations

Automating your developer workflow on GitHub with TensorFlow Session

Software development is central to machine learning, regardless of whether you’re prototyping in a Jupyter notebook or building a service for millions of users. Hamel Husain, Omoju Miller, Michał Jastrzębski, and Jeremy Lewi show you how to use a freely available natural language dataset to build practical applications for anyone who writes software using TensorFlow.

Why is machine learning seeing exponential growth in its communities? Contributor Summit

Omoju Miller provides a data-driven historical trace of the rise of applied machine learning (ML) and dives into the product features being built at GitHub to make the lives of maintainers and ML professionals better.

Saurabh Mishra is the applied AI practice lead at Quantiphi, with more than eight years of experience delivering machine learning and applied AI solutions. He’s responsible for managing Quantiphi’s partnership with Google Cloud and leading business development and delivery for the US East Coast.

Presentations

TensorFlow business case study showcase Session

Deepak Bhadauria, Saurabh Mishra, Upendra Sahu, Bhushan Jagyasi, David Beck, and Rahul Sarda share four real-world TensorFlow success stories from the banking, insurance, med tech, and nonprofit industries.

Piero Molino is a cofounder and senior research scientist at Uber AI Labs, where he works on natural language understanding and conversational AI. He’s the author of the open source platform Ludwig, a code-free deep learning toolbox.

Presentations

Enhance recommendations in Uber Eats with graph convolutional networks Session

Ankit Jain and Piero Molino detail how to generate better restaurant and dish recommendations in Uber Eats by learning entity embeddings using graph convolutional networks implemented in TensorFlow.

Sean Morgan is a lead research engineer at Two Six Labs. He’s an OSS maintainer and software enthusiast who blends an engineering background with expertise in machine learning. He’s researched and deployed production models for the semiconductor and defense industries. In his free time, he enjoys working as the SIG lead for TensorFlow Addons and contributing to various other OSS libraries. He earned a master’s in electrical engineering from the University of Virginia and a bachelor’s in chemical engineering from the University of Maryland.

Presentations

Town Hall: Contributors’ perspectives Contributor Summit

Join in to hear about the rapid growth of open source ML communities and a future road map for community building. Learn best practices, use cases, and how to develop metrics for your project by learning from other contributors to TensorFlow.

Laurence Moroney is a developer advocate on the Google Brain team at Google, working on TensorFlow and machine learning. He’s the author of dozens of programming books, including several best sellers, and a regular speaker on the Google circuit. When not Googling, he’s also a published novelist, comic book writer, and screenwriter.

Presentations

Zero to ML hero with TensorFlow 2.0 Tutorial

Get a programmer's perspective on machine learning with Laurence Moroney, from the basics all the way up to building complex computer vision scenarios using convolutional neural networks and natural language processing with recurrent neural networks.

Neelima Mukiri is a principal engineer in the Cloud Platform Solutions Group at Cisco, working on the architecture and development of Cisco’s Container Platform. Previously, she worked on the core virtualization layer at VMware and systems software in Samsung Electronics.

Presentations

Hyperparameter tuning for TensorFlow using Katib and Kubeflow Tutorial

Neelima Mukiri and Meenakshi Kaushik demonstrate how to automate hyperparameter tuning for a given dataset using Katib and Kubeflow. Katib can be easily run on a laptop or in a distributed production deployment, and Katib jobs and configuration can be easily ported to any Kubernetes cluster.

Ankur Narang is the vice president of AI and data technologies at Hike, a homegrown AI- and ML-led internet startup behind some of India’s most innovative platforms, such as Hike Messenger and, more recently, Hike Sticker Chat. He leads state-of-the-art research and development projects on natural language processing (NLP), chatbots, computer vision, speech recognition, and related AI and ML areas. Ankur has over 25 years of experience in senior technology leadership positions across multinational corporations (MNCs), including IBM Research India and Sun Research Labs (Oracle) in California. He was among the top 10 data scientists in India in 2017 (Analytics India Magazine) in recognition of solid scientific and industry contributions to the field of data science and artificial intelligence. In 2018, he was given the Top 50 Analytics Award at the Machine Conference in recognition of exemplary leadership and contributions to ML and AI (Analytics India Magazine). He was also conferred the Top 100 Innovative CIO Award in 2019 for distinguished leadership in digital transformation based on innovative technologies (CIO Axis). In 2002, he was awarded Sun Microsystems’ prestigious Innovation Leadership Award for significant contributions to the Phaser Hardware Acceleration Project. He holds a BTech and a PhD, both in CS&E, from IIT Delhi and has 40+ publications in top international computer science and machine learning conferences and journals, along with 15 granted US patents. He has held multiple industrial track and workshop chair positions and has given invited talks at multiple international conferences. His areas of interest and expertise include artificial intelligence and machine learning, big data analytics, high-performance computing, distributed systems, parallelizing compilers, and IT for healthcare and oil and gas.

Presentations

Sticker recommendation and AI-driven innovations on the Hike messaging platform Keynote

Ankur Narang offers an overview of the cutting-edge AI-driven innovations on the Hike messaging platform, such as sticker recommendation with multilingual support—a key innovation driven by sophisticated natural language processing (NLP) algorithms.

Robby Neale is a senior software engineer at Google. He leads the tf.text effort on the NLX infrastructure team, focusing on expanding the capabilities of the TensorFlow platform to make creation of text-based models easier for developers.

Presentations

Building models with tf.text Session

Most resources for building models assume numeric data, which has meant that text processing had to happen outside the model. Robby Neale walks you through ragged tensors and tf.text.
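A small taste of what that looks like in practice, assuming the tensorflow_text package is installed: tokenizing a batch of strings yields a tf.RaggedTensor, so each sentence can keep its own length inside the model.

```python
import tensorflow_text as tf_text

tokenizer = tf_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(["TensorFlow makes text easy", "ragged tensors"])
# tokens is a tf.RaggedTensor: rows have different numbers of tokens.
print(tokens.to_list())
# [[b'TensorFlow', b'makes', b'text', b'easy'], [b'ragged', b'tensors']]
```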

Ton Ngo is a senior software developer in the IBM Cognitive OpenTech Group at the IBM Silicon Valley Lab. Previously, he was with the IBM Research Lab at Yorktown and Almaden. He’s been active in the open source community for four years and is working on TensorFlow and deep learning. He was a core contributor in OpenStack for Magnum and Heat-Translator, focusing on networking and storage support for container orchestrators such as Kubernetes. Ton frequently gives talks and programming tutorials on TensorFlow in San Francisco, Seattle, and New York and at OpenStack Summits worldwide. He has published papers on a wide range of subjects.

Presentations

Building deep learning applications using TensorFlow to combat schistosomiasis Session

Schistosomiasis is a debilitating parasitic disease that affects more than 250 million people worldwide. Zac Yung-Chun Liu, Andy Chamberlin, Susanne Sokolow, Giulio De Leo, and Ton Ngo detail how to build and deploy deep learning applications to detect disease transmission hotspots, make interventions more efficient and scalable, and help governments and stakeholders make data-driven decisions.

Dave Norman is the director of machine learning frameworks at Graphcore, where he heads the frameworks team and is the creator of the intelligence processing unit (IPU) and Poplar software. He’s been in software engineering for over 25 years, specializing in real-time, high-performance, and embedded systems. Previously, he was at Hewlett Packard, writing control software for experimental wireless and broadband modems; worked for various companies on drivers for novel hardware, 3G/4G base stations, and the toolchain for an FPGA-like architecture; and worked abroad in New Zealand developing real-time weather graphics.

Presentations

Targeting high-performance ML accelerators using XLA Session

Victoria Rege and David Norman dive into the software optimization for new accelerators using TensorFlow and accelerated linear algebra (XLA).

Tim Nugent pretends to be a mobile app developer, game designer, tools builder, researcher, and tech author. When he isn’t busy avoiding being found out as a fraud, Tim spends most of his time designing and creating little apps and games he won’t let anyone see. He also spent a disproportionately long time writing his tiny little bio, most of which was taken up trying to stick a witty sci-fi reference in…before he simply gave up. He’s writing Practical Artificial Intelligence with Swift for O’Reilly and building a game for a power transmission company about a naughty quoll. (A quoll is an Australian animal.)

Presentations

Swift for TensorFlow in 3 hours Tutorial

Mars Geldard, Tim Nugent, and Paris Buttfield-Addison are here to prove Swift isn't just for app developers. Swift for TensorFlow provides the power of TensorFlow with all the advantages of Python (and complete access to Python libraries) and Swift—the safe, fast, incredibly capable open source programming language; Swift for TensorFlow is the perfect way to learn deep learning and Swift.

Babusi Nyoni is a Zimbabwean innovator focused on the uses of artificial intelligence on the African continent. In 2016, he created what Forbes magazine described as, “the world’s first AI football commentator” for the UEFA Champions League final. In the same year, he created a prototype for the prediction of human displacement in Africa using AI, and thereafter worked with UNHCR Innovation to actualize a pilot project in the same field. He founded the Ulwazi Accelerator in 2018 to equip young Zimbabweans with the skills needed to contribute to the global digital economy. In 2019, he created an app for the early diagnosis of Parkinson’s disease and presented his findings at Oxford University on the Skoll World Forum stage. Babusi has a strong passion for fresh new ideas that will change the lives of those around him and is a firm believer that AI is shaping the technological zeitgeist worldwide.

Presentations

From dance to diagnosis: How TensorFlow.js is shaping AI in Africa Session

In 2018, Triple Black created a dance app that used TensorFlow.js-powered pose estimation on mobile phones to rate a popular South African dance known as "iVosho." Babusi Nyoni unpacks the possibilities for AI in disadvantaged African communities and explains how and why the company turned this dance app into a tool to diagnose Parkinson's disease.

Shin-ichiro Okamoto is the vice president of the data science division at Actapio, where he works on behalf of Yahoo! JAPAN. He develops AutoML with TensorFlow Extended and leads AI research and development for Yahoo! JAPAN. Previously, he was the divisional chief technology officer in the Data and Science Solution Management Group at Yahoo! JAPAN.

Presentations

Introduction to Hilbert AutoML with TensorFlow Extended (TFX) at Yahoo! JAPAN Session

Hilbert is an AI framework that works with TensorFlow Extended (TFX) at Yahoo! JAPAN, which provides AutoML to create production-level deep learning models automatically. Hilbert is currently used by over 20 services of Yahoo! JAPAN. Shin-Ichiro Okamoto details how to achieve production-level AutoML and explores service use cases at Yahoo! JAPAN.

Davide Onofrio is a senior deep learning software technical marketing engineer at NVIDIA. He’s focused on development and presentation of deep learning technical developer-oriented content at NVIDIA. Davide has several years of experience working as a computer vision and machine learning engineer in biometrics, VR, and the automotive industry. He earned a PhD in signal processing at the Politecnico di Milano.

Presentations

Accelerating training, inference, and ML applications on NVIDIA GPUs Tutorial

Maggie Zhang, Nathan Luehr, Josh Romero, Pooya Davoodi, and Davide Onofrio give you a sneak peek at software components from NVIDIA’s software stack so you can get the best out of your end-to-end AI applications on modern NVIDIA GPUs. They also examine features and tips and tricks to optimize your workloads right from data loading, processing, training, inference, and deployment.

Krzys Ostrowski is a research scientist at Google AI, focusing on developing programming abstractions for machine learning in large-scale distributed environments. He holds a PhD in computer science from Cornell University, where he focused on distributed systems and programming languages.

Presentations

A journey into the world of federated learning with TensorFlow Federated Session

Krzysztof Ostrowski dives into federated learning (FL)—an approach to machine learning where a shared model is trained across many clients that keep their training data local—and goes hands-on with FL using TensorFlow Federated (TFF). He demonstrates step-by-step how to train your TensorFlow model in a federated environment, implement custom federated computations, and set up large simulations.
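For orientation, the federated averaging loop in TensorFlow Federated looks roughly like the sketch below; the simulated client data is made up, and the exact wrapper names and constructor arguments have shifted across TFF releases, so treat them as assumptions rather than the speaker's code.

```python
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

# Simulated client data: three clients, each with a small local dataset.
rng = np.random.default_rng(0)

def make_client_dataset():
    x = rng.normal(size=(32, 784)).astype("float32")
    y = rng.integers(0, 10, size=(32,)).astype("int32")
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(8)

federated_train_data = [make_client_dataset() for _ in range(3)]

def model_fn():
    # A fresh Keras model must be built inside model_fn for each call.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,))])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=federated_train_data[0].element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Federated averaging: clients train locally; the server averages the updates.
process = tff.learning.build_federated_averaging_process(
    model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.1))
state = process.initialize()
for _ in range(5):
    state, metrics = process.next(state, federated_train_data)
```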

Nicole Pang is a product manager for TensorFlow at Google Brain, where she leads TensorFlow’s worldwide user growth and education initiatives. Previously, she was the product lead for user safety at Tumblr, a social networking company, and later the product lead for growth at an AI startup in San Francisco. Nicole received her BS in computer science and engineering from MIT.

Presentations

Getting involved in the TensorFlow community Session

Large-scale open source projects can be daunting, and one of the goals of TensorFlow is to be accessible to many contributors. Joana Carrasqueira and Nicole Pang share some great ways to get involved in TensorFlow, explain how its design and development works, and show you how to get started if you're new to machine learning or new to TensorFlow.

Sean Park is a senior malware scientist in the Machine Learning Group at Trend Micro, as part of an elite team of researchers solving highly difficult problems in the battle against cybercrime. His main research focus is deep learning-based threat detection, including generative adversarial malware clustering, metamorphic malware detection using semantic hashing and Fourier transform, malicious URL detection with attention mechanism, macOS malware outbreak detection, semantic malicious script autoencoder, and heterogeneous neural networks for Android APK detection. Previously, he worked for Kaspersky, FireEye, Symantec, and Sophos. He also created a critical security system for banking malware at a top Australian bank.

Presentations

Generative malware outbreak detection Session

Practical defense systems require precise detection during malware outbreaks with only a handful of available samples. Sean Park demonstrates how to detect in-the-wild malware samples with a single training sample of a kind, with the help of TensorFlow's flexible architecture in implementing a novel variable-length generative adversarial autoencoder.

Aalok Patwa (he/him) is a sophomore at Archbishop Mitty High School, California, interested in machine learning and healthcare. He’s done several research projects in the past that have won awards at the regional, state, and national level. He’s also committed to outreach, imparting his knowledge about computer science and medicine to the broader public. Aalok is the president of the computer programming club at his high school and an avid participant in speech and debate. He won the first place category award in the Synopsys science fair in 2018 and 2019, was a national finalist at the Broadcom MASTERS Science Fair in 2016, earned a Raytheon achievement award at the California State science fair in 2016, and was a speech and debate national qualifier.

Presentations

TensorFlow and medicine: Using deep learning for real-time segmentation of colon polyps Session

The public health sector is growing rapidly, and with new methods of data collection comes a need for new analysis methods. Aalok Patwa explains how to use TensorFlow to create a deep learning model that detects, localizes, and segments colon polyps from colonoscopy images and video. You’ll gain technical knowledge of TensorFlow and Keras, along with ideas for applying TensorFlow in medicine.

Santhosh Pillai is a principal program manager with the Azure machine learning team at Microsoft. Santhosh is responsible for data scientists’ experimentation experience with Azure machine learning service, specifically its highly optimized ML workflow orchestration engine, AzureML Pipelines, that can stitch together multistep workflows across heterogeneous computes. He’s been working on the machine learning platform (infrastructure, SDK, and graph authoring UX) for Microsoft and its customers over the last several years.

Presentations

Hands-on deep learning with TensorFlow 2.0 and Azure 2-Day Training

Maxim Lukiyanov, Vaidyaraman Sambasivam, Mehrnoosh Sameki, and Santhosh Pillai demonstrate how AzureML helps data scientists be more productive when developing TensorFlow models for production. You’ll see the whole model development lifecycle, from training and deployment to ML ops and model interpretability.

Laxmi Prajapat is a senior data scientist at Datatonic, involved in end-to-end project delivery, including stakeholder management, data exploration, machine learning, algorithm design, automation, and productionization of solutions on Google Cloud. After earning a master’s in astrophysics from UCL, Laxmi has held several technical roles in industry. She’s at her happiest when learning new things and challenging herself, and she’s always looking to expand her knowledge and apply it practically, especially in the fields of machine learning and engineering. Outside of work, she enjoys exploring new cuisines or finding a book to get lost in.

Presentations

Effective sampling methods within TensorFlow input functions Session

Many real-world machine learning applications require generative or reductive sampling of data. Laxmi Prajapat and William Fletcher demonstrate sampling techniques applied to training and testing data directly inside the input function using the tf.data API.
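One such technique, sketched here with toy data, is rebalancing a skewed dataset inside the input function by sampling from per-class datasets with chosen weights; the dataset contents and weights below are illustrative only.

```python
import tensorflow as tf

# Toy per-class datasets; in practice these would be filtered from real data.
positives = tf.data.Dataset.from_tensor_slices(([1.0, 2.0, 3.0], [1, 1, 1]))
negatives = tf.data.Dataset.from_tensor_slices(([4.0, 5.0, 6.0], [0, 0, 0]))

# Draw from each source with equal probability to rebalance the classes.
balanced = tf.data.experimental.sample_from_datasets(
    [positives.repeat(), negatives.repeat()], weights=[0.5, 0.5])
balanced = balanced.batch(32).prefetch(tf.data.experimental.AUTOTUNE)
```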

Shashank Prasanna is a senior AI and machine learning evangelist at Amazon Web Services, where he focuses on helping engineers, developers, and data scientists solve challenging problems with machine learning. Previously, he worked at NVIDIA, MathWorks (makers of MATLAB), and Oracle in product marketing and software development roles focused on machine learning products. Shashank holds an MS in electrical engineering from Arizona State University.

Presentations

TensorFlow on AWS 2-Day Training

Amazon Web Services (AWS) offers a breadth and depth of services to easily build, train, and deploy TensorFlow models. Shashank Prasanna, Vikrant Kahlir, and Rama Thamman give you hands-on experience working with these services.

Karthik Ramachandran is a product manager on the Cloud AI team at Google. He focuses on AI platform notebooks, deep learning virtual machines (VMs), and deep learning containers. Previously, he was a software engineer, engineering manager, and product manager at a number of different organizations including In-Q-Tel’s Lab41, Premise, and Primer.ai. Karthik has a master’s degree from Georgetown University and a bachelor’s degree from Carnegie Mellon University.

Presentations

Maximizing the performance and longevity of your TensorFlow applications on Google Cloud Platform (sponsored by Google Cloud) Session

Karthik Ramachandran and Kaz Sato take a look at how you can use AI platform notebooks, deep learning virtual machines, and deep learning containers to build TensorFlow applications. You’ll learn how to maximize TensorFlow performance on Google Cloud by eliminating I/O bottlenecks, along with tips and tricks for ensuring the longevity and reliability of your AI-powered enterprise applications.

Anusha Ramesh is a product manager for TensorFlow at Google Brain. She works on TensorFlow Extended, a production-scale machine learning platform. Previously, Anusha was a product lead at a fashion tech startup that builds personalized recommendations for women’s fashion. She has a master’s degree in information networking from Carnegie Mellon.

Presentations

TFX: An end-to-end ML platform for everyone Keynote

Konstantinos Katsiapis and Anusha Ramesh offer an overview of TensorFlow Extended (TFX), which has evolved as the ML platform solution within Alphabet over the past decade.

Sujith Ravi is a senior staff research scientist and senior manager at Google AI, where he leads the company’s large-scale graph-based machine learning platform and on-device machine learning efforts for products used by millions of people every day in Search, Gmail, Photos, Android, and YouTube. These technologies power features like Smart Reply, image search, on-device predictions in Android, and platforms like Neural Structured Learning and Learn2Compress. Sujith has authored over 90 scientific publications and patents in top-tier machine learning and natural language processing conferences, and his work won the SIGDIAL Best Paper Award in 2019 and the ACM SIGKDD Best Research Paper Award in 2014. His work has been featured in Wired, Forbes, Forrester, the New York Times, TechCrunch, VentureBeat, Engadget, and New Scientist, among others, and he’s a mentor for Google Launchpad startups. Sujith was the cochair (AI and deep learning) for the 2019 National Academy of Engineering (NAE) Frontiers of Engineering symposium. He was the cochair for ICML 2019, NAACL 2019, and NeurIPS 2018 ML workshops and regularly serves as senior/area chair and PC member of top-tier machine learning and natural language processing conferences.

Presentations

Neural structured learning in TensorFlow Session

Da-Cheng Juan and Sujith Ravi explain neural structured learning (NSL), an easy-to-use TensorFlow framework that both novice and advanced developers can use for training neural networks with structured signals.

Victoria Rege is the head of strategic partnerships at Graphcore, where she works with key customers and leads AI engagements with research institutions and universities. She has over a decade of experience in the semiconductor space. Previously, she held several leadership positions at NVIDIA, spanning global alliances, product marketing, and campaigns, as well as the founding of the GPU Technology Conference, and she has worked in the hedge fund space as executive director for the Hedge Fund Business Operations Association. Victoria is a frequent contributor to ACM SIGGRAPH and is the AR, MR, and VR chair for the SIGGRAPH 2019 conference. She’s also an active member of the Consumer Technology Association’s AI Working Group.

Presentations

Targeting high-performance ML accelerators using XLA Session

Victoria Rege and David Norman dive into the software optimization for new accelerators using TensorFlow and accelerated linear algebra (XLA).

Fred Reiss is the chief architect at the IBM Spark Technology Center in San Francisco and is one of the founding employees of the center. Previously, he was at IBM Research – Almaden, where he worked on the SystemML and SystemT projects, as well as on the research prototype of DB2 with BLU Acceleration. Fred has over 25 peer-reviewed publications and six patents. He earned his PhD from UC Berkeley.

Presentations

TensorFlow, open source, and IBM (sponsored by IBM) Keynote

IBM has a long history of contributing to the open source projects that make the most difference to its clients, and the company has been working to build responsible solutions to enterprise data science problems for many years. Join Frederick Reiss to hear about IBM's role in open source software, TensorFlow, building AI solutions, and what IBM is excited about with this latest (2.0) release.

Taylor Robie is a software engineer at Google, where he’s a member of the TensorFlow high-level APIs team, focusing on performance with a particular emphasis on out-of-the-box performance of Keras. Previously, he was a maintainer of the TensorFlow official models repository and optimized several of Google’s MLPerf submissions.

Presentations

Performant, scalable models in TensorFlow 2.0 with tf.data, tf.function, and tf.distribute Session

Join Taylor Robie and Priya Gupta to learn how you can use tf.distribute to scale your machine learning model on a variety of hardware platforms ranging from commercial cloud platforms to dedicated hardware. You'll learn tools and tips to get the best scaling for your training in TensorFlow.
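As a minimal sketch (with synthetic data) of the pattern the session builds on, wrapping model construction in a strategy scope is usually all that's needed to replicate training across local GPUs:

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on all local GPUs (or falls back to CPU)
# and all-reduces gradients every step.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")

# Synthetic data for illustration; a tf.data.Dataset works the same way.
x, y = np.random.rand(1024, 20), np.random.rand(1024, 1)
model.fit(x, y, batch_size=256, epochs=2)
```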

Josh Romero is a developer technology engineer at NVIDIA. He has extensive experience in GPU computing from porting and optimizing high-performance computing (HPC) applications to more recent work with deep learning. Josh earned his PhD from Stanford University, where his research focused on developing new computational fluid dynamics methods to better exploit GPU hardware.

Presentations

Accelerating training, inference, and ML applications on NVIDIA GPUs Tutorial

Maggie Zhang, Nathan Luehr, Josh Romero, Pooya Davoodi, and Davide Onofrio give you a sneak peek at software components from NVIDIA’s software stack so you can get the best out of your end-to-end AI applications on modern NVIDIA GPUs. They also examine features and tips and tricks to optimize your workloads right from data loading, processing, training, inference, and deployment.

Anna S. Roth is a PM for the computer vision cloud team at Microsoft. Previously, she worked at Microsoft Technology & Research on the team that launched Microsoft Cognitive Services. Say hello on Twitter at @AnnaSRoth.

Presentations

"Human error": How can we help people build models that do what they expect Keynote

It’s never been easier to train machine learning models. With excellent open source tooling, techniques that require less compute, and incredible educational material online, just about anybody can start to train their own models today. Yet, Anna Roth explains, when domain experts try to transfer their expertise to an ML model, the results can be unpredictable.

Brennan Saeta is a software engineer on the Google Brain team leading the Swift for TensorFlow project. Previously, he was the TensorFlow tech lead for Cloud TPUs.

Presentations

Swift for TensorFlow Session

Paige Bailey and Brennan Saeta walk you through Swift for TensorFlow, a next-generation machine learning platform that leverages innovations like first-class differentiable programming to seamlessly integrate deep neural networks with traditional AI algorithms and general purpose software development.

Upendra Sahu is a machine learning engineer at Quantiphi who has a knack for solving complex problems and experience in designing and building robust end-to-end ML solutions. He brings a mixture of developer and entrepreneurial spirit. He’s developed several AI-driven projects in the fields of computer vision and natural language processing (NLP). In his free time, he enjoys solving puzzles.

Presentations

TensorFlow business case study showcase Session

Deepak Bhadauria, Saurabh Mishra, Upendra Sahu, Bhushan Jagyasi, David Beck, and Rahul Sarda share four real-world TensorFlow success stories from the banking, insurance, med tech, and nonprofit industries.

Vaidyaraman Sambasivam is a principal program manager on the Azure AI Machine Learning team at Microsoft. He’s responsible for building a highly secure, scalable, and performant ML inference platform that enables users to run inference on their ML models in different ways while effectively managing the performance/cost tradeoff without adding complexity.

Presentations

Hands-on deep learning with TensorFlow 2.0 and Azure 2-Day Training

Maxim Lukiyanov, Vaidyaraman Sambasivam, Mehrnoosh Sameki, and Santhosh Pillai demonstrate how AzureML helps data scientists be more productive when developing TensorFlow models for production. You’ll see the whole model development lifecycle, from training and deployment to ML ops and model interpretability.

Mehrnoosh Sameki is a technical program manager at Microsoft, responsible for leading the product efforts on machine learning interpretability within the Azure Machine Learning platform. Previously, she was a data scientist at Rue Gilt Groupe, incorporating data science and machine learning in the retail space to drive revenue and enhance customers’ personalized shopping experiences. She earned her PhD degree in computer science at Boston University.

Presentations

Hands-on deep learning with TensorFlow 2.0 and Azure 2-Day Training

Maxim Lukiyanov, Vaidyaraman Sambasivam, Mehrnoosh Sameki, and Santhosh Pillai demonstrate how AzureML helps data scientists be more productive when developing TensorFlow models for production. You’ll see the whole model development lifecycle, from training and deployment to ML ops and model interpretability.

Rahul Sarda is the global practice head for big data and a distinguished member at Wipro, where he has supported the world’s largest consumer device firm in the US and successfully delivered a secure, big data-enabled, real-time complex-event processing platform that handles millions of messages in real time and lets data scientists plug and play domain-specific machine learning models and workflows for fraud detection, recommendations, and more. The platform also provides a secure semantic data layer on Hadoop so data scientists can explore and discover patterns in petabytes of historical event data. Rahul is a recognized thought leader in information management, covering big data, enterprise data warehousing, predictive analytics, and enterprise application integration, and he helps organizations formulate strategies to leverage information as a strategic corporate asset for competitive differentiation. He has played enterprise architect roles and worked with multiple customers, including Apple, NV Energy, Capital One, ZNA, UBS, and Pfizer, on building and supporting their data strategies. He has supported big data platform strategy for various industries, including manufacturing, energy, retail, supply chain, media, telecom, banking, and insurance.

Presentations

TensorFlow business case study showcase Session

Deepak Bhadauria, Saurabh Mishra, Upendra Sahu, Bhushan Jagyasi, David Beck, and Rahul Sarda share four real-world TensorFlow success stories from the banking, insurance, med tech, and nonprofit industries.

Kaz Sato is a staff developer advocate on the cloud platform team at Google, where he leads the developer advocacy team for machine learning and data analytics products such as TensorFlow, the Vision API, and BigQuery. Kaz has been leading and supporting developer communities for Google Cloud for over seven years. He’s a frequent speaker at conferences, including Google I/O 2016, Hadoop Summit 2016 San Jose, Strata + Hadoop World 2016, and Google Next 2015 NYC and Tel Aviv, and he has hosted FPGA meetups since 2013.

Presentations

AutoML Vision and Edge TPU: Bringing TensorFlow Lite models to edge devices Session

Kaz Sato walks you through AutoML Vision, which allows you to upload labeled images, press a "train" button, and wait about a day to get an image recognition model with state-of-the-art accuracy. Without any ML expertise, you can easily train the model in the cloud, export the TensorFlow Lite model, and use it on mobile devices, Raspberry Pi, and Edge TPU with very low latency and power consumption.
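Once a TensorFlow Lite model has been exported, running it on-device (or on a Raspberry Pi) follows the interpreter pattern sketched below; the model path is a placeholder and the zero-filled input stands in for a real image.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy tensor shaped like the model's input and read the scores back.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
```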

Maximizing the performance and longevity of your TensorFlow applications on Google Cloud Platform (sponsored by Google Cloud) Session

Karthik Ramachandran and Kaz Sato take a look at how you can use AI platform notebooks, deep learning virtual machines, and deep learning containers to build TensorFlow applications. You’ll learn how to maximize TensorFlow performance on Google Cloud by eliminating I/O bottlenecks, along with tips and tricks for ensuring the longevity and reliability of your AI-powered enterprise applications.

Robert Schroll is a data scientist in residence at the Data Incubator. Previously, he held postdocs in Amherst, Massachusetts, and Santiago, Chile, where he realized that his favorite parts of his job were teaching and analyzing data. He made the switch to data science and has been at the Data Incubator since. Robert holds a PhD in physics from the University of Chicago.

Presentations

Introduction to TensorFlow 2-Day Training

The TensorFlow library provides computational graphs with automatic parallelization across resources, an ideal architecture for implementing neural networks. Robert Schroll introduces TensorFlow's capabilities in Python, moving from building machine learning algorithms piece by piece to using the Keras API provided by TensorFlow, with several hands-on applications.
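The flavor of the hands-on material is roughly the following minimal Keras classifier on MNIST (a standard example, not necessarily the course's exact code):

```python
import tensorflow as tf

# Load and normalize MNIST.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier built with the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```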

Andrew Selle is a senior staff software engineer for TensorFlow Lite at Google and is one of its initial architects. He’s also worked on improvements to the core and API of TensorFlow. Previously, he worked extensively in research and development of highly parallel numerical physical simulation techniques for physical phenomena for film and physically based rendering. He worked on several Walt Disney Animation Films including Frozen and Zootopia. He holds a PhD in computer science from Stanford University.

Presentations

TensorFlow Lite: Beginner to expert Tutorial

Andrew Selle offers an introduction to TensorFlow Lite and takes you through the conversion, performance, and optimization path while using Android and iOS applications.
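For reference, the conversion step covered in the tutorial typically looks like the sketch below; the toy model is a placeholder, and post-training quantization via the optimization flag is optional.

```python
import tensorflow as tf

# Placeholder Keras model; a SavedModel can be converted the same way with
# tf.lite.TFLiteConverter.from_saved_model(path).
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```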

Sudipta Sengupta is a senior principal technologist and director at AWS, where he leads new initiatives in artificial intelligence and deep learning. Previously, he headed an end-to-end innovation agenda at Microsoft Research, spanning cloud networking, storage, and data management; was at Bell Labs working on internet routing, optical switching, network security, wireless networks, and network coding. He has shipped his research in many industry-leading, award-winning products and services. Sudipta is an ACM fellow and an IEEE fellow. He was awarded the IEEE William R. Bennett Prize and the IEEE Leonard G. Abraham Prize for his work on computer networking. Sudipta holds a PhD and an MS in EECS from MIT and a BTech in computer science and engineering from the Indian Institute of Technology, Kanpur, India. He was awarded the President of India Gold Medal at IIT-Kanpur for graduating at the top of his class across all disciplines.

Presentations

Integrating deep learning accelerators with TensorFlow Session

Sudipta Sengupta dives into his experience with Amazon Elastic Inference and AWS Inferentia with TensorFlow in the AWS cloud.

Presentations

How to create a perfect Pull Request and what to expect when you submit it Contributor Summit

Do you know what happens when you submit a PR? Do you want to create PRs that sail through smoothly and get merged quickly? Whether you're a novice or a seasoned contributor, come learn all about pull requests: the PR life cycle, what happens behind the scenes, common mistakes, best practices, and more, in this mini session designed to help contributors write better pull requests.

Abin Shahab is a staff software engineer at LinkedIn. Since 2014, he has been working on containers and containerizing big data workloads. He's a contributor to Docker, runc, LXC, cAdvisor (part of the Kubelet), YARN's container runtime, and Kubeflow. Currently, he leads LinkedIn's deep learning infrastructure team. His other passion is software architecture, which was his focus during his graduate studies at Carnegie Mellon University. In his free time (usually after both his daughters are in bed), he reads sci-fi.

Presentations

Scaling TensorFlow at LinkedIn Session

Keqiu Hu, Jonathan Hung, and Abin Shahab explore the challenges LinkedIn encountered and resolved to scale TensorFlow.

Siddharth Sharma is a senior technical marketing manager for accelerated computing at NVIDIA. Previously, Siddharth was a product marketing manager for Simulink and Stateflow at MathWorks, working closely with automotive and aerospace companies to adopt model-based designs for creating control software.

Presentations

Faster inference in TensorFlow 2.0 with TensorRT Session

TensorFlow 2.0 offers high performance for deep learning inference through a simple API. Siddharth Sharma and Joohoon Lee explain how to optimize an app using TensorRT with the new Keras APIs in TensorFlow 2.0. You'll learn tips and tricks to get the highest performance possible on GPUs and see examples of debugging and profiling tools from NVIDIA and TensorFlow.
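As a rough illustration of the TF-TRT workflow this session covers, here's a hedged sketch of converting a TensorFlow 2.0 SavedModel with the TrtGraphConverterV2 API; the SavedModel paths are placeholders, and the exact options vary by TensorFlow and TensorRT version.

```python
# A hedged sketch of converting a TensorFlow 2.0 SavedModel with TF-TRT.
# The SavedModel directories are hypothetical placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="resnet_saved_model")
converter.convert()                       # build a TensorRT-optimized graph
converter.save("resnet_saved_model_trt")  # write out the optimized SavedModel
```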

Tatiana Shpeisman is an engineering manager in Google Brain, where she leads the team working on the TensorFlow graph compiler, MLIR, and TensorFlow infrastructure for GPUs and CPUs. Previously, she led Intel Labs's efforts to deliver programmability and performance to modern parallel and heterogeneous computing platforms. Tatiana is passionate about using compiler technology to build better machine learning systems. She holds a PhD in computer science from the University of Maryland, College Park.

Presentations

MLIR: Accelerating AI Keynote

MLIR is TensorFlow's open source machine learning compiler infrastructure that addresses the complexity caused by growing software and hardware fragmentation and makes it easier to build AI applications. Chris Lattner and Tatiana Shpeisman explain how MLIR is solving this growing hardware and software divide and how it impacts you in the future.

Animesh Singh is a senior technical staff member and program director at IBM, leading IBM's AI open source strategy for the IBM Watson and Cloud Platform. He leads machine learning and deep learning initiatives and works with communities and customers to design and implement deep learning, machine learning, and cloud computing frameworks. He has a proven track record of driving the design and implementation of private and public cloud solutions from concept to production. In his decade-plus at IBM, Animesh has worked on cutting-edge projects for IBM enterprise customers in the telco, banking, and healthcare industries, particularly focusing on cloud and virtualization technologies, and led the design and development of the first IBM public cloud offering.

Presentations

Running TFX end to end in hybrid clouds leveraging Kubeflow Pipelines Session

TensorFlow Extended (TFX) is an end-to-end platform for deploying production ML pipelines. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system. Animesh Singh, Pete MacKinnon, and Tommy Li demonstrate how to run TFX in hybrid cloud environments.

Animesh Singh is a senior technical staff member (STSM) and program director for the IBM Watson and Cloud Platform, where he leads machine learning and deep learning initiatives on IBM Cloud and works with communities and customers to design and implement deep learning, machine learning, and cloud computing frameworks. He has a proven track record of driving the design and implementation of private and public cloud solutions from concept to production. Animesh has worked on cutting-edge projects for IBM enterprise customers in the telco, banking, and healthcare industries, particularly focusing on cloud and virtualization technologies, and led the design and development of the first IBM public cloud offering.

Presentations

Trusted AI: Bringing trust back into AI through open source (sponsored by IBM) Session

Join Animesh Singh to learn how IBM leverages the power of open source to bring trust back into AI, using popular open source projects for adversarial AI defense and attacks, bias detection and mitigation, and dataset and model explainability.

Sarah Sirajuddin is an engineering director working on TensorFlow at Google. She leads the teams working on on-device machine learning, TensorFlow Extended, and efforts around training models for the best accuracy and performance with Google’s cutting-edge infrastructure, including TensorFlow and tensor processing units (TPUs).

Presentations

TensorFlow Lite: ML for mobile and IoT devices Keynote

TensorFlow Lite makes it really easy to execute machine learning on mobile phones and microcontrollers. Jared Duke and Sarah Sirajuddin explore on-device ML and the latest updates to TensorFlow Lite, including model conversion, optimization, hardware acceleration, and a ready-to-use model gallery. They also showcase demos and production use cases for TensorFlow Lite on phones and microcontrollers.

Susanne Sokolow is a senior research associate at Stanford University and UC Santa Barbara. She's also the executive director of the newly founded Center for Disease Ecology, Health, and Development at Stanford University and a cofounder and executive board member of the Upstream Alliance, an initiative joining partners across the globe in research for schistosomiasis reduction. She conducts basic and applied research at the interface of disease ecology, health, and development, and her research program seeks natural solutions to modern health and environmental problems plaguing the developing world.

Presentations

Building deep learning applications using TensorFlow to combat schistosomiasis Session

Schistosomiasis is a debilitating parasitic disease that affects more than 250 million people worldwide. Zac Yung-Chun Liu, Andy Chamberlin, Susanne Sokolow, Giulio De Leo, and Ton Ngo detail how to build and deploy deep learning applications to detect disease transmission hotspots, make interventions more efficient and scalable, and help governments and stakeholders make data-driven decisions.

Zak Stone is the product manager for Cloud TPUs on the Google Brain team and the founder of the TensorFlow Research Cloud (TFRC) at Google. He’s interested in making hardware acceleration for machine learning universally accessible and useful. Previously, Zak founded a mobile-focused deep learning startup that was acquired by Apple, and while at Apple, Zak contributed to the privacy-preserving on-device face identification technology in iOS 10 and macOS Sierra that was announced at the Apple Worldwide Developers Conference (WWDC) 2016. Zak holds a PhD in computer vision.

Presentations

Great TensorFlow Research Cloud projects from around the world (and how to start your own) Session

Join Zak Stone to see how researchers all over the world are expanding the frontiers of ML using free Cloud TPU capacity from the TensorFlow Research Cloud.

Presentations

TensorFlow on AWS 2-Day Training

Amazon Web Services (AWS) offers a breadth and depth of services to easily build, train, and deploy TensorFlow models. Shashank Prasanna, Vikrant Kahlir, and Rama Thamman give you hands-on experience working with these services.

Theodore Summe is the head of product for Cortex, Twitter’s central ML organization. His team of product managers works across applied research, ML services, and ML platform, working with all Twitter product teams to apply and advance ML applications to meet Twitter’s customers’ needs.

Presentations

Accelerating ML at Twitter Keynote

Twitter employs ML throughout its product to deliver value for its customers. Theodore Summe gives you a glimpse into ML at Twitter and explains how Cortex works to accelerate ML to better serve customer needs by partnering with TensorFlow.

Mikhail Szugalew is a machine learning developer at the Knowledge Society. A year ago, he knew nothing about machine learning, object detection, or the physical challenges the visually impaired face. With a strong will, he set out to learn about AI and make an impact in the world. Over the course of just eight months, he researched and developed a prototype device to assist the visually impaired with their navigational challenges. His endeavors show how machine learning technologies can impact the future. His experiences at just the age of 16 are a great example of how we live in a world where new powerful technologies can be leveraged by anyone, and even teenagers can make a difference.

Presentations

How machine learning can empower a 16-year-old to make crossing the street safer Session

When Mikhail Szugalew discovered that the visually impaired face huge navigational challenges with tasks as simple as crossing the street, he decided to do something about it at just the age of 16, using his experience with TensorFlow to develop object-detection models. He highlights his insights, struggles, process, takeaways, and vision for a better future.

Yong Tang is the director of engineering at MobileIron. He contributes to different container and machine learning projects in the open source community, with a recent focus on data processing in machine learning. He's a committer and the SIG I/O lead on the TensorFlow project and received the Open Source Peer Bonus Award from Google for his contributions to TensorFlow. Beyond TensorFlow, Yong contributes to many other open source projects and is a committer on the Docker and CoreDNS projects.

Presentations

Machine learning over real-time streaming data with TensorFlow Session

In many applications where data is generated continuously, combining machine learning with streaming data is imperative to discover useful information in real time. Yong Tang explores TensorFlow I/O, which can be used to easily build a data pipeline with TensorFlow and stream frameworks such as Apache Kafka, AWS Kinesis, or Google Cloud PubSub.
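As an illustration of the kind of pipeline this session describes, here's a hedged sketch of streaming Kafka records into a tf.data pipeline with TensorFlow I/O; the topic, broker address, and parsing logic are hypothetical, and KafkaDataset's constructor arguments may differ between tensorflow-io releases.

```python
# A hedged sketch of streaming Kafka records into a tf.data pipeline with
# TensorFlow I/O. Topic, broker, and parsing are hypothetical placeholders.
import tensorflow as tf
import tensorflow_io.kafka as kafka_io

# "topic:partition" spec and argument names may vary by tensorflow-io release.
dataset = kafka_io.KafkaDataset(["sensor-readings:0"],
                                servers="localhost:9092",
                                group="demo", eof=True)

# Parse each raw message into a float feature before batching for a model.
dataset = (dataset
           .map(lambda message: tf.strings.to_number(message, tf.float32))
           .batch(32))

for batch in dataset.take(1):
    print(batch)
```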

Rama Thamman is an R&D manager at Amazon Web Services.

Presentations

TensorFlow on AWS 2-Day Training

Amazon Web Services (AWS) offers a breadth and depth of services to easily build, train, and deploy TensorFlow models. Shashank Prasanna, Vikrant Kahlir, and Rama Thamman give you hands-on experience working with these services.

Neil Truong is a senior field application engineer at NVIDIA with expertise in system management and hardware architecture focused on GPU deep learning and machine learning applications. He supports the Google platform team in deploying next-generation GPU hardware and software. He has experience with system-on-a-chip (SoC) and system-level testing processes and has managed GPU system design from concept to production.

Presentations

Running TensorFlow at scale on GPUs (sponsored by NVIDIA) Session

Neil Truong, Kari Briski, and Khoa Ho walk you through their experience running TensorFlow at scale on GPU clusters like the DGX SuperPod and the Summit supercomputer. They explore the design of these large-scale GPU systems and detail how to run TensorFlow at scale using BERT and AI plus high-performance computing (HPC) applications as examples.

KC Tung is an AI architect at Microsoft. Previously, he was a cloud architect, ML engineer, and data scientist with hands-on experience and success in developing and serving AI, deep learning, computer vision, and natural language processing (NLP) models in many enterprise use case-driven architectures, using open source machine learning libraries such as TensorFlow, Keras, PyTorch, and H2O. His specialties are end-to-end model and data structure design, testing, and serving in the cloud or on-premises, as well as the design of experiments, hypothesis development, and reference architectures for AI and ML in cloud-centric implementations. KC holds a PhD in molecular biophysics from the University of Texas Southwestern Medical Center in Dallas, Texas.

Presentations

A novel solution for a data augmentation and bias problem in NLP using TensorFlow Session

Join KC Tung to discover how to use TensorFlow to solve a natural language processing (NLP) model bias problem with data augmentation for an enterprise customer (one of the largest airlines in the world). KC leveraged hidden gems in tf.data and the new API to find a novel use for text generation, which surprisingly improved his NLP model.
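To make the idea of augmentation inside an input pipeline concrete, here's a minimal sketch (not KC's actual approach) of a toy text augmentation, randomly dropping one token per sentence, applied via tf.data.Dataset.map; the sentences and augmentation rule are hypothetical.

```python
# A toy example of text augmentation inside a tf.data pipeline: each sentence
# has one randomly chosen token removed before batching.
import tensorflow as tf

sentences = tf.constant(["the flight was delayed by two hours",
                         "great service on my last trip"])

def drop_random_token(sentence):
    tokens = tf.strings.split(sentence)                      # 1-D string tensor
    n = tf.shape(tokens)[0]
    drop = tf.random.uniform([], maxval=n, dtype=tf.int32)   # index to remove
    kept = tf.boolean_mask(tokens, tf.range(n) != drop)
    return tf.strings.reduce_join(kept, separator=" ")

augmented = (tf.data.Dataset.from_tensor_slices(sentences)
             .map(drop_random_token)
             .batch(2))

for batch in augmented:
    print(batch)
```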

Kari Briski has been in the hardware and software solutions industry for almost 20 years, spending the last three years at NVIDIA in the data center and deep learning software group, creating computing products that help people achieve their life's work.

Presentations

Running TensorFlow at scale on GPUs (sponsored by NVIDIA) Session

Neil Truong, Kari Briski, and Khoa Ho walk you through their experience running TensorFlow at scale on GPU clusters like the DGX SuperPod and the Summit supercomputer. They explore the design of these large-scale GPU systems and detail how to run TensorFlow at scale using BERT and AI plus high-performance computing (HPC) applications as examples.

Paul Van Eck is a software engineer in the Cognitive OpenTech Group at IBM. Over the past few years, he's been actively involved in open source AI technologies such as PyTorch and TensorFlow. With several years of web development experience, Paul has a keen interest in browser-based machine learning. He's worked on projects leveraging TensorFlow.js and continues to explore other possibilities at this intersection of technologies.

Presentations

Node-RED and TensorFlow.js: Developing deep learning IoT apps in the browser Session

Va Barbosa and Paul Van Eck highlight the benefits of using TensorFlow.js and Node-RED together as an educational tool to engage developers and provide you with a powerful, creativity-inspiring platform for interacting and developing with machine learning models.

Pete Warden is the technical lead of the mobile and embedded TensorFlow Group on Google’s Brain team.

Presentations

TensorFlow Lite: Solution for running ML on-device Session

Pete Warden, Nupur Garg, and Matthew Dupuy take you through TensorFlow Lite, TensorFlow’s lightweight cross-platform solution for mobile and embedded devices, which enables on-device machine learning inference with low latency, high performance, and a small binary size.

Martin Wicke is a software engineer at Google working on making sure that TensorFlow is a thriving open source project. Previously, Martin worked in a number of startups and did research on computer graphics at Berkeley and Stanford.

Presentations

TensorFlow: To 2.0 and beyond Contributor Summit

Martin Wicke delivers the opening session, reflecting on TensorFlow's journey to 2.0 and where the project is headed next.

Town Hall: Contributors’ perspectives Contributor Summit

Join in to hear about the rapid growth of open source ML communities and a future road map for community building. Learn best practices, use cases, and how to develop metrics for your project by learning from other contributors to TensorFlow.

Edd Wilder-James is a strategist at Google, where he is helping build a strong and vital open source community around TensorFlow. A technology analyst, writer, and entrepreneur based in California, Edd previously helped transform businesses with data as vice president of strategy for Silicon Valley Data Science. Formerly Edd Dumbill, Edd was the founding program chair for the O’Reilly Strata Data Conference and chaired the Open Source Software Conference for six years. He was also the founding editor of the peer-reviewed journal Big Data. A startup veteran, Edd was the founder and creator of the Expectnation conference management system and a cofounder of the Pharmalicensing online intellectual property exchange. An advocate and contributor to open source software, Edd has contributed to various projects such as Debian and GNOME and created the DOAP vocabulary for describing software projects. Edd has written four books, including Learning Rails (O’Reilly).

Presentations

Getting involved in the TensorFlow community Contributor Summit

Learn how you can be a part of the growing TensorFlow (TF) ecosystem and become a contributor through code, documentation, education, or community leadership. Edd Wilder-James and Joana Filipa Bernardo Carrasqueira give you an overview of GitHub practices, request for comment (RFC) processes, and how to join the TF Special Interest Groups (SIGs) and make an impact in the community.

Thursday keynote welcome Keynote

TensorFlow World program chairs Ben Lorica and Edd Wilder-James welcome you to the second day of keynotes.

Thursday opening welcome Keynote

Program chairs Ben Lorica and Edd Wilder-James open the second day of keynotes.

Town Hall: Contributors’ perspectives Contributor Summit

Join in to hear about the rapid growth of open source ML communities and a future road map for community building. Learn best practices, use cases, and how to develop metrics for your project by learning from other contributors to TensorFlow.

Wednesday keynote welcome Keynote

TensorFlow World program chairs Ben Lorica and Edd Wilder-James welcome you to the first day of keynotes.

Wednesday opening welcome Keynote

Program chairs Edd Wilder-James and Ben Lorica open the first day of keynotes.

Welcome & Opening Remarks Contributor Summit

Edd Wilder-James welcomes you to the TensorFlow World 2019 Contributor Summit.

Craig Wiley is the director of product for Google Cloud’s AI Platform. Previously, Craig spent nine years at Amazon, most recently as the general manager of Amazon SageMaker, AWS’s machine learning platform, and he led pricing and analytics in Amazon’s third-party seller business. Craig has a deep belief in democratizing the power of data; he pushes to improve the tooling for experienced users while seeking to simplify it for the growing set of less-experienced users. Outside of work, he enjoys spending time with his family, eating delicious meals, and enthusiastically struggling through small home improvement projects.

Presentations

Enterprise-ready TensorFlow in the cloud (sponsored by Google Cloud Platform) Keynote

Enterprise adoption of AI has placed new expectations on TensorFlow. Craig Wiley details how to maximize your TensorFlow performance and experience in the cloud. You'll learn how to speed up your software development and ensure the longevity and reliability of your AI-powered enterprise applications.

Sam Witteveen is a developer expert for machine learning at Google. He has extensive experience in startups and mobile applications and helps developers and companies create smarter applications with machine learning. He’s especially passionate about deep learning and AI in the fields of natural language and conversational agents. Sam regularly shares his knowledge at events and trainings across Asia and is co-organizer of the Singapore TensorFlow and Deep Learning group.

Presentations

TensorFlow and TPUs in the real world: Converting deep learning projects to train faster Session

Sam Witteveen shares tips and tricks for taking advantage of tensor processing units (TPUs) in TensorFlow 2.0 and for converting a current deep learning project into one that runs smoothly and quickly on Cloud TPUs.
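For reference, here's a hedged sketch of the basic TPU setup in TensorFlow 2.0 using TPUStrategy; the TPU name is a placeholder from your own Cloud TPU configuration, the model is a toy stand-in, and the exact connection calls vary slightly across early 2.x releases.

```python
# A hedged sketch of TF 2.0 TPU setup with TPUStrategy. "my-tpu" and the model
# are placeholders; exact setup calls differ slightly between early 2.x versions.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# Build and compile the model inside the strategy scope so variables are
# placed on the TPU; training then proceeds with the usual Keras fit() call.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```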

Li Xu is a software engineer on the health machine learning team at Twitter, working on machine learning technologies for health, security, and privacy. Previously, he was a software engineer on the security machine learning platform team at Uber, working on the architecture of a machine learning platform for security, and a researcher at Yahoo Labs, where he conducted state-of-the-art research on security, privacy, and machine learning. Li has shipped many inventions and technologies in Yahoo, Uber, and Twitter products, which are now used by more than a billion users. His research interests lie in security and machine learning. He's authored or coauthored papers in top-ranked journals and conferences, as well as book chapters and US patents, and he has served as a program committee member for top conferences in security, AI, and big data.

Presentations

Improving the health of public conversations on Twitter with TensorFlow Session

When people join conversations on Twitter, the company wants to ensure they can have respectful discussions with genuine people. Twitter relies on machine learning to improve the health of public conversations and information integrity. Li Xu and Yi Zhuang examine how Twitter uses TensorFlow to detect abusive, toxic, and spammy content and promote healthy conversations on the platform.

Jason Zaman is a senior staff software engineer at Light Labs, as well as a Gentoo Linux developer, the TensorFlow SIG Build lead, and a Google Developer Expert in machine learning (ML GDE).

Presentations

Building TensorFlow: Libraries and custom op Contributor Summit

TensorFlow is a huge project with many parts, both integrated and increasingly separate. Building all these components so they work together requires care. Jason Zaman and Yifei Feng demystify the main components and dependencies within TensorFlow and explore how to add custom functionality easily using custom ops.

Town Hall: Contributors’ perspectives Contributor Summit

Join in to hear about the rapid growth of open source ML communities and a future road map for community building. Learn best practices, use cases, and how to develop metrics for your project by learning from other contributors to TensorFlow.

Kangyi Zhang is a software engineer at Google Brain and a member of the TensorFlow.js team. He's excited about sharing how to do machine learning in the JavaScript world, concentrating on native TensorFlow execution under the Node.js runtime and preparing data for machine learning models in JS. You can find him on GitHub @kangyizhang.

Presentations

Unlocking the power of machine learning for your JavaScript applications with TensorFlow Session

Kangyi Zhang, Brijesh Krishnaswami, Joseph Paul Cohen, and Brendan Duke dive into the TensorFlow.js ecosystem: how to bring an existing machine learning model into your JavaScript (JS) app, retrain the model with your data, and go beyond the browser to other JS platforms with live demos of models and featured apps (WeChat virtual plugin from L’Oréal and a radiology diagnostic tool from Mila).

Maggie Zhang is a deep learning software engineer at NVIDIA, where she works on deep learning frameworks. She earned her PhD in computer science and engineering from the University of New South Wales in Australia. Her research background includes GPU and CPU heterogeneous computing, compiler optimization, computer architecture, and deep learning.

Presentations

Accelerating training, inference, and ML applications on NVIDIA GPUs Tutorial

Maggie Zhang, Nathan Luehr, Josh Romero, Pooya Davoodi, and Davide Onofrio give you a sneak peek at software components from NVIDIA's software stack so you can get the best out of your end-to-end AI applications on modern NVIDIA GPUs. They also examine features, tips, and tricks to optimize your workloads across data loading, processing, training, inference, and deployment.

Juntai Zheng is a software engineer at Databricks and a member of the team developing MLflow. He has actively contributed to MLflow since its inception, including TensorFlow support for MLflow projects, and he develops the TensorFlow 2.0 support in MLflow. Juntai holds a bachelor of arts degree in computer science from UC Berkeley.

Presentations

How to track and manage TensorFlow 2.0 and Keras model experiments with MLflow Session

Juntai Zheng explains how to use the open source MLflow platform to manage the model lifecycle. MLflow supports many model flavors, such as MLeap, MLlib, scikit-learn, PyTorch, TensorFlow, and Keras; this session focuses in particular on TensorFlow 2.0 and Keras models.
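As a rough illustration (not the session's exact demo), here's a hedged sketch of tracking a Keras model with MLflow's TensorFlow autologging; the toy model and data are placeholders, and call names may differ between MLflow releases.

```python
# A hedged sketch of tracking a TF 2.0 Keras model with MLflow autologging.
# The model and data are placeholders for illustration only.
import mlflow
import mlflow.tensorflow
import tensorflow as tf

mlflow.tensorflow.autolog()  # log params, metrics, and the model automatically

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

with mlflow.start_run():
    model.fit(tf.random.normal((64, 4)), tf.random.normal((64, 1)), epochs=2)
```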

Yi Zhuang is a senior staff machine learning software engineer at Twitter, where he leads a team building a platform for working with ML models. He works on uniting ML practitioners around a single ML platform, bringing consistency to ML practices at Twitter. Previously, Yi led a team to develop a trillion-document-scale distributed search engine at Twitter. Yi holds an MS in computer science from Carnegie Mellon University. He loves cats and enjoys pondering over all things technical and logical.

Presentations

Improving the health of public conversations on Twitter with TensorFlow Session

When people join conversations on Twitter, the company wants to ensure they can have respectful discussions with genuine people. Twitter relies on machine learning to improve the health of public conversations and information integrity. Li Xu and Yi Zhuang examine how Twitter uses TensorFlow to detect abusive, toxic, and spammy content and promote healthy conversations on the platform.

  • O'Reilly
  • TensorFlow
  • Google Cloud
  • IBM
  • NVIDIA
  • Databricks
  • Tensor Networks
  • VMware
  • Amazon Web Services
  • One Convergence
  • Quantiphi
  • Lambda Labs
  • Tech Mahindra
  • cnvrg.io
  • Determined AI
  • Inferencery
  • Manceps, Inc.
  • PerceptiLabs
  • Valohai

Contact us

confreg@oreilly.com

For conference registration information and customer service

partners@oreilly.com

For more information on community discounts and trade opportunities with O’Reilly conferences

sponsorships@oreilly.com

For information on exhibiting or sponsoring a conference

pr@oreilly.com

For media/analyst press inquiries