Put AI to Work
April 15-18, 2019
New York, NY

BERT: Pretraining deep bidirectional transformers for language understanding

Ming-Wei Chang (Google)
1:00pm-1:40pm Thursday, April 18, 2019
Machine Learning, Models and Methods
Location: Regent Parlor
Secondary topics: Models and Methods, Text, Language, and Speech
Average rating: ***** (5.00, 4 ratings)

What you'll learn

  • Explore BERT, a new language representation model designed to pretrain deep bidirectional representations

Description

Ming-Wei Chang offers an overview of a new language representation model called BERT (Bidirectional Encoder Representations from Transformers). Unlike recent language representation models, BERT is designed to pretrain deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pretrained BERT representations can be fine-tuned with just one additional output layer to create models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
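
To make the fine-tuning recipe concrete, the sketch below adds a single linear output layer on top of a pretrained BERT encoder for a hypothetical two-class sentence classification task. It assumes the Hugging Face transformers and PyTorch libraries are available; the model name, label count, and training target are illustrative and not code from the talk.

    # Minimal sketch: fine-tune pretrained BERT by adding one task-specific output layer.
    # Assumes the Hugging Face `transformers` and `torch` packages; illustrative only.
    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    encoder = BertModel.from_pretrained("bert-base-uncased")

    # The only task-specific addition: one linear layer over the [CLS] representation.
    num_labels = 2  # hypothetical binary classification task
    classifier = torch.nn.Linear(encoder.config.hidden_size, num_labels)

    inputs = tokenizer("BERT conditions on both left and right context.", return_tensors="pt")
    outputs = encoder(**inputs)
    cls_vector = outputs.last_hidden_state[:, 0]  # vector for the [CLS] token
    logits = classifier(cls_vector)               # task predictions

    # During fine-tuning, the encoder and the new output layer are trained jointly.
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor([1]))  # dummy label
    loss.backward()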

BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on 11 natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement), and the SQuAD v1.1 question-answering test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.


Ming-Wei Chang

Google

Ming-Wei Chang is a research scientist at Google AI Language. He enjoys developing interesting machine learning algorithms for practical problems, especially in the field of natural language processing. He has published more than 35 papers at top-tier conferences and won an outstanding paper award at ACL 2015 for his work on question answering over knowledge bases. He has also won several international machine learning competitions on topics such as entity linking, power load forecasting, and sequential data classification. His recent paper, “BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding,” co-written with his colleagues at Google AI Language, demonstrates the power of language model pretraining and details new state-of-the-art results on 11 natural language processing tasks.


Comments

Matt Goldwasser | LEAD DATA SCIENTIST
04/22/2019 7:58am EDT

Will the slides be posted for this session?