TensorFlow model optimization: Quantization and pruning
Who is this presentation for?
- Researchers, ML engineers, and people focused on deployment
Join Raziel Alverez, a TensorFlow performance expert, as he covers topics including model optimization, quantization, benchmarking, and more.
What you'll learn
- Gain an overview of TensorFlow, its low-level optimization techniques, and how and why to use benchmarks
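To make the quantization topic concrete before the session, here is a minimal, self-contained sketch of the affine (asymmetric) quantization idea that schemes like 8-bit post-training quantization build on. This is an illustrative example, not code from the talk; the function names and the pure-Python implementation are assumptions for clarity.

```python
def quantize(values, num_bits=8):
    """Affine quantization sketch: map floats in [lo, hi] to ints in [0, 2^b - 1]."""
    lo, hi = min(values), max(values)
    qmax = 2 ** num_bits - 1
    scale = (hi - lo) / qmax or 1.0          # step size between representable levels
    zero_point = round(-lo / scale)          # integer that represents real value 0.0
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Round-trip error is bounded by roughly one quantization step (the scale).
```

In practice you would use the TensorFlow Model Optimization Toolkit and TensorFlow Lite converter rather than hand-rolling this, but the sketch shows why quantization shrinks models (one int8 per weight instead of a float32) at the cost of bounded precision loss.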
Raziel Alverez is a senior staff engineer at Google, where he leads TensorFlow model optimization, aimed at making machine learning more efficient to deploy and execute. He’s a cofounder and engineering lead of TensorFlow Lite. He developed the framework used to execute embedded ML models for Google’s speech recognition software (now part of TensorFlow Lite) and led the development of the latest iteration of the “Hey, Google” hotword recognizer. Previously, Raziel codesigned and implemented the Self-Assembling Interface Layer that forms the core of Appian’s (APPN) low-code development platform. He graduated summa cum laude from both the BS and master’s programs in computer science and machine learning at Mexico’s ITESM.