Milestones in Neural Natural Language Processing with Sebastian Ruder

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

In this episode, we’re joined by Sebastian Ruder, a PhD student studying natural language processing at the National University of Ireland and a Research Scientist at text analysis startup Aylien.

In our conversation, Sebastian and I discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also discuss attention-based models, tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his recent ULMFiT paper, short for “Universal Language Model Fine-tuning for Text Classification,” which he co-authored with Jeremy Howard, whom I interviewed in episode 186.
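One idea from the ULMFiT paper is discriminative fine-tuning: when fine-tuning the pretrained language model, each lower layer is given a smaller learning rate than the layer above it, with the paper suggesting a division factor of 2.6 per layer. As a minimal sketch (the function name and layer indexing here are illustrative, not from the paper's code):

```python
def discriminative_lrs(base_lr, num_layers, factor=2.6):
    """Per-layer learning rates for discriminative fine-tuning.

    The top layer gets base_lr; each layer below is scaled down by
    `factor` (ULMFiT suggests 2.6), so lower layers change more slowly.
    """
    return [base_lr / factor ** (num_layers - 1 - l) for l in range(num_layers)]

# For a 3-layer stack with a base learning rate of 0.01, the top layer
# trains at 0.01, the middle at 0.01/2.6, and the bottom at 0.01/2.6**2.
lrs = discriminative_lrs(0.01, 3)
```

The intuition is that lower layers encode general language features that should be preserved, while upper layers adapt more aggressively to the target task.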

Thanks to our Sponsor!

I’d like to send a huge thanks to our friends at IBM for their sponsorship of this episode. Are you interested in exploring code patterns leveraging multiple technologies, including ML and AI? Then check out IBM Developer. With more than 100 open source programs, a library of knowledge resources, developer advocates ready to help, and a global community of developers, what in the world will you create? Dive in, and be sure to let them know that TWIML sent you!

In addition, IBM has tutorials and code patterns so you can start building custom text classifiers today with their Natural Language Classifier offering!

About Sebastian

Mentioned in the Interview

“More On That Later” by Lee Rosevere, licensed under CC BY 4.0
