Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley.
This was a very interesting interview. Joseph recounts his time at CMU under former guest Carlos Guestrin, where he was passionate about flipping helicopters, as well as his experience as co-founder of GraphLab, which was acquired by Apple in 2016.
Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which explores compute-efficient training strategies based on model size. We discuss the two main problems the paper tackles: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that really improve efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both “larger” and “faster” in the paper.
The Discussion Series is Back!
Join us for the latest in the TWIML Discussion Series, this time focused on Advancing Your Data Science Career During the Pandemic, next Tuesday, May 26th, at 12pm PT.
I’ll be joined by Hilary Mason, whom you know from her time at Fast Forward Labs, Cloudera, and this podcast; Caroline Chavier, data science recruiting maven and Co-Founder of Paris Women in Machine Learning and Data Science; Jacqueline Nolis, co-author of Build a Career in Data Science, which is hot off the presses; and Ana Maria Echeverri of IBM and the OpenDS4All project.
With the help of your questions, this amazing panel and I will explore best practices, tips, advice, and direction for those affected by layoffs, new to the job market, or ready to accelerate their careers. To register for this panel, visit twimlai.com/advancingds.
Connect with Joseph!
- Paper: Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
- Paper: ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
- #286 – Environmental Impact of Large-Scale NLP Model Training with Emma Strubell
- Paper: RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Paper: “Why Should I Trust You?”: Explaining the Predictions of Any Classifier
- Paper: NBDT: Neural-Backed Decision Trees
- Join the TWIML Community!
- Check out our TWIML Presents: series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC By 4.0