In recent years, we have seen tremendous improvements in artificial intelligence, with the major breakthroughs driven by advances in neural-based models. However, as these algorithms and techniques become more popular, the consequences for data and user privacy grow more serious. These issues will drastically impact the future of AI research: specifically, how neural-based models are developed, deployed, and evaluated.
Yishay Carmiel shares techniques and explains how data privacy will impact machine learning development and how future training and inference will be affected. Yishay first dives into training on private data, covering why it must be addressed, federated learning, and differential privacy. He then explores inference on private data, discussing why it must be addressed, homomorphic encryption for neural networks, polynomial approximation of neural networks, protecting data within neural networks, data reconstruction attacks against neural networks, and methods and techniques to defend against such reconstruction.
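To give a flavor of one of the techniques above, here is a minimal sketch of differential privacy via the Laplace mechanism: a statistic is released with noise calibrated to its sensitivity and a privacy budget epsilon. The dataset, bounds, and parameter values below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise scaled to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Illustrative example: privately release the mean of a bounded dataset.
rng = np.random.default_rng(0)
data = np.clip(rng.normal(50, 10, size=1000), 0, 100)  # values bounded in [0, 100]
true_mean = data.mean()

# The sensitivity of the mean of n values bounded in [0, 100] is 100 / n:
# changing one individual's record shifts the mean by at most that much.
noisy_mean = laplace_mechanism(true_mean, sensitivity=100 / len(data),
                               epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but a noisier released value; in practice this trade-off is tuned per application.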
Yishay Carmiel is the founder of IntelligentWire, a company that develops and implements industry-leading deep learning and AI technologies for automatic speech recognition (ASR), natural language processing (NLP), and advanced voice data extraction. He also heads Spoken Labs, the strategic artificial intelligence and machine learning research arm of Spoken Communications. Yishay and his teams are currently working on bleeding-edge innovations that make the real-time customer experience a reality, at scale. He has nearly 20 years' experience as an algorithm scientist and technology leader building large-scale machine learning algorithms and serving as a deep learning expert.
©2018, O'Reilly Media, Inc. • (800) 889-8969 or (707) 827-7019 • Monday-Friday 7:30am-5pm PT • All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.