Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA.
Subscribe: iTunes / Google Play / Spotify / RSS
Our conversation with Stefano centers on his recent research, called Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how his team has addressed problems of regression and reprocessing by leveraging existing models, and much more.
Thanks to our Sponsor!
Big thanks to our friends at Amazon Web Services for their continued support of the podcast and their sponsorship of today’s show! If you missed the AWS Machine Learning Summit last week, you can still catch all the sessions on-demand at twimlai.com/awsmlsummit. There you’ll find more than 30 sessions and keynotes featuring some of the brightest minds in machine learning diving deep into the art, science, and impact of ML. You’ll hear from industry luminaries and leading experts on the latest science breakthroughs, get real-world examples of how ML is impacting business, and learn best practices in building ML to share with your team.
Connect with Stefano!
- Blog: Graceful AI
- Paper: Towards Backward-Compatible Representation Learning
- Paper: Time matters in regularizing deep networks
- May ‘21 – Applied AI Research at AWS with Alex Smola – #487
- Jun ‘21 – Data Science on AWS with Chris Fregly and Antje Barth – #490