Performance on a range of perceptual tasks, such as speech understanding and image recognition, has advanced dramatically in the last year or so, due to breakthroughs in deep neural network models. While the concepts underlying these approaches date from the early '90s, the models are far bigger than those previously explored, having 8 or more layers and tens of millions of parameters, and requiring millions of training examples. On image recognition tasks, they achieve error rates less than half those of state-of-the-art models from a few years ago and, under certain conditions, approach or exceed human performance.
Rob Fergus is an Associate Professor of Computer Science at the
Courant Institute of Mathematical Sciences, New York University. He is
also a Research Scientist at Facebook, working in their AI Research
Group. He received a Masters in Electrical Engineering with Prof.
Pietro Perona at Caltech, before completing a PhD with Prof. Andrew
Zisserman at the University of Oxford in 2005. Before coming to NYU,
he spent two years as a post-doc in the Computer Science and
Artificial Intelligence Lab (CSAIL) at MIT, working with Prof. William
Freeman. He has received several awards, including a CVPR Best Paper
prize, a Sloan Fellowship, an NSF CAREER award, and the IEEE