Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez


About this Episode

Today we're joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley.

This was a very interesting interview, as Joseph recounts his time at CMU under former guest Carlos Guestrin, where he was passionate about flipping helicopters, and his experience as co-founder of GraphLab (later renamed Turi), which was acquired by Apple in 2016.

Our main focus in the conversation is Joseph's paper "Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers," which explores compute-efficient training strategies based on model size. We discuss the two main problems the paper addresses: 1) How can we rapidly iterate on variations in architecture? and 2) Does making models bigger actually improve efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both "larger" and "faster" in the paper.
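To make the "compress" half of the paper's thesis concrete, here is a minimal, hypothetical sketch of magnitude pruning, one common compression technique applied after training: weights with the smallest absolute values are zeroed out, shrinking the effective model. The function name and toy weights are illustrative, not from the paper.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    A toy, pure-Python illustration of post-training magnitude pruning.
    """
    k = int(len(weights) * sparsity)
    # Magnitude threshold: the k-th smallest absolute value survives.
    threshold = sorted(abs(w) for w in weights)[k] if k > 0 else 0.0
    # Weights below the threshold are dropped (set to zero).
    return [0.0 if abs(w) < threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.03]
pruned = magnitude_prune(weights, sparsity=0.5)
print(pruned)  # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The intuition the paper explores is that a large model trained for fewer steps, then compressed like this (or via quantization), can be cheaper overall than training a small model to the same accuracy.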

