BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

This video is a recap of our May 2019 Americas TWiML Online Meetup: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.

In this month’s community segment, we discuss our thoughts on the meetups, GPT-2, the availability of resources, and our TWiML Talk with Delip Rao on Fake News.

In our presentation segment, Jidin Dinesh presents on deep bidirectional representations and language representation models, covering the paper “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” by Jacob Devlin et al.
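
For readers curious what the paper’s masked-language-model pre-training looks like in practice, here is a minimal sketch, assuming the Hugging Face transformers library and its public bert-base-uncased checkpoint (both are illustrative choices, not part of the meetup materials):

```python
# A minimal sketch of BERT's masked-token prediction, the pre-training
# objective behind its "deep bidirectional" representations.
# Assumes the Hugging Face `transformers` library and the public
# `bert-base-uncased` checkpoint.
from transformers import pipeline

# The fill-mask pipeline loads a pre-trained BERT; the model attends to
# context on both sides of the [MASK] token when predicting it.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("BERT reads text in [MASK] directions at once."):
    # Each prediction holds the filled-in token and its softmax score.
    print(prediction["token_str"], round(prediction["score"], 3))
```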

For links to the papers, podcasts, and more mentioned above or during this meetup, for more information on previous meetups, or to register for upcoming meetups, visit twimlai.com/meetup!

Paper mentioned in the presentation: “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” Jacob Devlin et al., 2018 (arXiv:1810.04805).

Topic: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Presenter: Jidin Dinesh
Date: Wednesday, May 15, 2019
Time: 5:00 PM US Pacific Time