Presented By O'Reilly and Cloudera
Make Data Work
Feb 17–20, 2015 • San Jose, CA
Tara Sainath
Researcher, Google

Tara Sainath received her PhD in Electrical Engineering and Computer Science from MIT in 2009. The main focus of her PhD work was acoustic modeling for noise-robust speech recognition. After her PhD, she spent five years in the Speech and Language Algorithms group at IBM T.J. Watson Research Center before joining Google Research. She co-organized a special session on Sparse Representations at Interspeech 2010 in Japan and organized a special session on Deep Learning at ICML 2013 in Atlanta. In addition, she is a staff reporter for the IEEE Speech and Language Processing Technical Committee (SLTC) Newsletter. Her research interests are mainly in acoustic modeling, including deep neural networks and sparse representations.


9:00am–5:00pm Wednesday, 02/18/2015
Hardcore Data Science
Location: LL20 BC
Ben Lorica (O'Reilly Media), Ben Recht (University of California, Berkeley), Chris Re (Stanford University | Apple), Maya Gupta (Google), Alyosha Efros (UC Berkeley), Eamonn Keogh (University of California - Riverside), John Myles White (Facebook), Fei-Fei Li (Stanford University), Tara Sainath (Google), Michael Jordan (UC Berkeley), Anima Anandkumar (UC Irvine), John Canny (UC Berkeley), David Andrzejewski (Sumo Logic)
Average rating: 4.86 (7 ratings)
All-Day: Strata's regular data science track has great talks with real-world experience from leading-edge speakers. But we didn't just stop there—we added the Hardcore Data Science day to give you a chance to go even deeper. The Hardcore day will add new techniques and technologies to your data science toolbox, shared by leading data science practitioners from startups, industry, consulting...
9:05am–9:45am Wednesday, 02/18/2015
Hardcore Data Science
Location: LL20 BC
Tara Sainath (Google)
Average rating: 4.50 (4 ratings)
DNNs were first explored for acoustic modeling, where numerous research labs demonstrated relative improvements in word error rate (WER) of 10–40%. In this talk, I will provide an overview of the latest improvements in deep learning across various research labs since this initial work.
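For readers unfamiliar with the metric, the "relative" qualifier in the abstract matters: a 10–40% relative WER improvement is measured against the baseline error rate, not in absolute percentage points. A minimal sketch of the arithmetic (the function name and the example WER values are illustrative, not from the talk):

```python
def relative_wer_improvement(baseline_wer, new_wer):
    """Fractional reduction in word error rate relative to the baseline.

    E.g. a drop from 20% WER to 14% WER is a 6-point absolute gain,
    but a 30% relative gain: (0.20 - 0.14) / 0.20 = 0.30.
    """
    return (baseline_wer - new_wer) / baseline_wer

# Hypothetical example: a GMM baseline at 20% WER improved to 14% by a DNN.
print(round(relative_wer_improvement(0.20, 0.14), 2))  # prints 0.3
```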