Recurrent neural networks without a PhD
Who is this presentation for?
- Developers and data scientists
Many problems deemed “impossible” only five years ago have now been solved by deep learning—from playing Go to recognizing what’s in an image to translating languages. Software engineers are eager to adopt these new technologies as soon as they come out of research labs. And now you can learn how.
Martin Gorner leads a hands-on introduction to recurrent neural networks and TensorFlow. You’ll explore the basics of stateless and stateful RNNs and discover how they can be used in time series analysis. Along the way, you’ll learn tips, engineering best practices, and pointers to apply in your own projects—no PhD required. Tensor Processing Unit (TPU) accelerators are used to shorten training times.
Prerequisites
- Familiarity with basic neural network concepts such as fully connected layers and activation functions
Materials or downloads needed in advance
- A WiFi-enabled laptop
What you'll learn
- Learn to build recurrent neural networks in Keras
- Understand stateless versus stateful RNNs and how to train on TPUs
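The stateless-versus-stateful distinction above can be sketched without Keras. The snippet below is a hand-rolled simple-RNN step in NumPy (illustrative names, not the Keras API): a stateless RNN resets its hidden state to zeros on every call, while a stateful one carries the final state of one batch into the next—so feeding a long sequence in chunks with carried state matches processing it in one go.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny simple-RNN cell: h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)
# (hypothetical names for illustration, not the Keras API)
n_in, n_hidden = 3, 4
Wx = rng.normal(size=(n_in, n_hidden))
Wh = rng.normal(size=(n_hidden, n_hidden))
b = rng.normal(size=n_hidden)

def run_rnn(seq, h0=None):
    """Run the cell over seq of shape (steps, n_in); return the final state.
    Stateless use: h0=None, so the state starts from zeros every call.
    Stateful use: pass the previous call's final state back in as h0."""
    h = np.zeros(n_hidden) if h0 is None else h0
    for x in seq:
        h = np.tanh(x @ Wx + h @ Wh + b)
    return h

seq = rng.normal(size=(8, n_in))

# Stateless: the whole sequence in one call, state reset at the start.
h_stateless = run_rnn(seq)

# Stateful: the same sequence fed as two chunks, carrying state across.
h_mid = run_rnn(seq[:4])
h_stateful = run_rnn(seq[4:], h0=h_mid)

# Carrying state across chunks reproduces the single long-sequence run.
assert np.allclose(h_stateless, h_stateful)
```

In Keras, the same idea is expressed by passing `stateful=True` to a recurrent layer (which then requires a fixed batch size) and calling `reset_states()` between independent sequences.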
Martin Gorner is a developer advocate at Google, where he focuses on parallel processing and machine learning. Martin is passionate about science, technology, coding, algorithms, and everything in between. He spent his first engineering years in the Computer Architecture Group of STMicroelectronics, then spent the next 11 years shaping the nascent ebook market at Mobipocket, which later became the software part of the Amazon Kindle and its mobile variants. He’s the author of the successful TensorFlow Without a PhD series. He graduated from Mines ParisTech.