In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond.
Subscribe: iTunes / Google Play / Spotify / RSS
In our conversation we cover a number of interesting papers spanning topics such as pre-trained language models, common-sense inference datasets, and large-document reasoning, and talk through Sebastian’s predictions for the new year.
Mentioned in the Interview
- TWiML Talk #195 – Milestones in Neural Natural Language Processing with Sebastian Ruder
- Paper: Phrase-Based & Neural Unsupervised Machine Translation
- Paper: Deep Contextualized Word Representations
- Paper: SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
- Paper: Semi-Supervised Sequence Modeling with Cross-View Training
- Paper: QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
- Paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Google Dataset Search
- Deep Learning Indaba
- Fast AI 1.0
- Check out all of our great series from 2018 at the TWiML Presents: Series page!
- TWiML Online Meetup
- Register for the TWiML Newsletter
“More On That Later” by Lee Rosevere, licensed under CC BY 4.0