Presented by O'Reilly and Cloudera
Make Data Work
22–23 May 2017: Training
23–25 May 2017: Tutorials & Conference
London, UK

Fast and effective training for deep learning

David Barber (UCL)
16:30–16:55 Tuesday, 23 May 2017
Hardcore Data Science
Location: London Suite 2/3
Secondary topics: Deep learning
Average rating: **** (4.00, 1 rating)

David Barber considers two issues in training deep learning systems: natural language modeling and the use of higher-order optimization methods. He offers an overview of each topic, explores recent work, and demonstrates how to apply these methods effectively.

  • In natural language modeling, it’s common to have dictionaries of over 100,000 words, which makes normalizing over the vocabulary a bottleneck in the standard maximum likelihood framework. Naive importance sampling is well known to cause instability when training deep learning language models; David shows how a simple modification stabilizes training and explains how to train a large-scale language model effectively. (A code sketch of this idea follows the list.)
  • Higher-order optimization methods have the potential to significantly reduce training time in deep learning. David presents recent work that sheds light on the structure of deep learning objective functions. (A second sketch after the list illustrates the basic second-order update.)
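
To make the first bullet concrete, here is a minimal NumPy sketch of a sampled-softmax loss for a large-vocabulary language model. The naive importance-sampling estimator of the softmax normalizer is the unstable baseline the abstract refers to; the stabilizing tweak shown here (including the target word's own term exactly in the estimated normalizer) is one simple variant in the spirit of David's blog post linked in the comments below, not necessarily the talk's exact method. The function name and parameters are illustrative.

```python
import numpy as np

def sampled_softmax_loss(scores, target, proposal, num_samples=50, rng=None):
    """Negative log-likelihood with an importance-sampled softmax normalizer.

    scores   : (V,) logits over the full vocabulary
    target   : index of the correct next word
    proposal : (V,) distribution q the negative samples are drawn from
               (e.g. a smoothed unigram distribution)
    """
    if rng is None:
        rng = np.random.default_rng()
    # Draw negative samples from the proposal q.
    samples = rng.choice(len(scores), size=num_samples, p=proposal)
    # Shift scores before exponentiating, for numerical stability.
    shift = max(scores[target], scores[samples].max())
    # Naive estimator: average exp(score)/q over the samples alone. When
    # the few high-scoring words are missed, this badly underestimates
    # the normalizer -- the instability mentioned in the abstract.
    neg = np.exp(scores[samples] - shift) / (num_samples * proposal[samples])
    # Simple stabilizing modification (an assumption, one common variant):
    # include the target's own term exactly, so the estimated normalizer
    # never falls below exp(scores[target]) and the loss stays >= 0.
    z_hat = np.exp(scores[target] - shift) + neg.sum()
    return shift + np.log(z_hat) - scores[target]

# Usage on a 100,000-word dictionary (illustrative values throughout).
rng = np.random.default_rng(0)
V = 100_000
scores = rng.normal(size=V)
q = np.full(V, 1.0 / V)   # uniform proposal; a fitted unigram works better
print(sampled_softmax_loss(scores, target=7, proposal=q, rng=rng))
```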
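
For the second bullet, the sketch below shows the basic shape of a second-order (Newton-style) parameter update, purely as a toy illustration: practical higher-order methods for deep learning approximate curvature (e.g. Hessian-free or Kronecker-factored approaches) rather than forming the full Hessian, and nothing here is claimed to be the specific method discussed in the talk. The damping parameter lam is a standard guard against indefinite curvature.

```python
import numpy as np

def damped_newton_step(grad, hess, lam=1e-2):
    """One damped Newton update: solve (H + lam*I) d = -g for d.

    The damping lam (Levenberg-Marquardt style) keeps the system well
    conditioned when the Hessian is indefinite, as deep learning
    objectives typically are away from a minimum.
    """
    H = hess + lam * np.eye(len(grad))
    return np.linalg.solve(H, -grad)

# Toy check on a strictly convex quadratic f(w) = 0.5 w^T A w - b^T w,
# which an undamped Newton step minimises in a single move.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
w = np.zeros(2)
g = A @ w - b                                  # gradient of f at w = 0
w = w + damped_newton_step(g, A, lam=0.0)
print(np.allclose(w, np.linalg.solve(A, b)))   # True: one step suffices
```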

David Barber

UCL

David Barber is a reader in information processing in the Department of Computer Science at UCL, where he develops novel machine learning algorithms. David is also a cofounder of the NLP company reinfer.io.

Comments on this page are now closed.

Comments

David Barber | READER IN INFORMATION PROCESSING, DEPARTMENT OF COMPUTER SCIENCE
30/05/2017 14:26 BST

Hello Michal

I’ve put the slides here

https://drive.google.com/open?id=0B74YPrZW67b4andqYjRHMGwyd0E

You may also wish to read my blog post: https://davidbarber.github.io/blog/2017/03/15/large-number-of-classes/

David

Michal Kucharczyk | BI & RISK MANAGEMENT SPECIALIST
26/05/2017 9:45 BST

Hello David, do you plan to share the slides?