Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez
EPISODE 378 | MAY 25, 2020
About this Episode
Today we're joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley.
In this interview, Joseph recounts his time at CMU under former guest Carlos Guestrin, where he was passionate about flipping helicopters, as well as his experience as a co-founder of GraphLab, which was acquired by Apple in 2016.
Our main focus in the conversation is Joseph's paper "Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers," which explores compute-efficient training strategies based on model size. We discuss the two main problems the paper addresses: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that actually improve efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both "larger" and "faster" in the paper.
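To make the "train large, then compress" idea concrete, here is a minimal sketch of the workflow in PyTorch: train a large model under a fixed compute budget, then compress it for inference, in this case with post-training dynamic quantization (one of the compression strategies examined in the paper). The tiny stand-in model and dimensions below are hypothetical placeholders, not the authors' code or configuration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a "large" model; the paper studies
# RoBERTa-style transformers, not a toy MLP like this.
model = nn.Sequential(
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
)

# ... train the large model here for a fixed wall-clock/compute budget ...

# Compress for inference: post-training dynamic quantization converts
# the Linear weights to int8 while keeping the same call interface.
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

example = torch.randn(1, 512)
print(compressed(example).shape)  # same output shape, smaller/faster weights
```

The point of the paper's framing is that the larger model converges in fewer steps, and compression (quantization or pruning) then recovers a small, fast model for deployment.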
About the Guest
Joseph Gonzalez
UC Berkeley
Resources
- Paper: Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
- GraphLab
- Paper: ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
- Theano
- Caffe
- #286 - Environmental Impact of Large-Scale NLP Model Training with Emma Strubell
- Paper: RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Paper: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
- Paper: NBDT: Neural-Backed Decision Trees
