Fairness and Robustness in Federated Learning with Virginia Smith

EPISODE 504

About this Episode

Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to the privacy techniques it employs. We dig into her ICML paper, Ditto: Fair and Robust Federated Learning Through Personalization, discussing what fairness means in this setting as opposed to broader AI ethics, the particular failure modes the work targets, the relationship between the personalized models and the objectives being optimized across devices, and the tradeoffs between fairness and robustness. We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, covering how the proposed method turns data heterogeneity into a benefit, how heterogeneity in the data is characterized, and some applications of FL in an unsupervised setting.
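For listeners who want a sense of the objective the Ditto discussion revolves around, here is a minimal sketch based on our reading of the paper (notation is ours): alongside a global model w* trained in the usual federated fashion, each device k learns a personalized model v_k by solving

    min over v_k of  F_k(v_k) + (λ/2) ‖v_k − w*‖²

where F_k is device k's local loss and λ controls how strongly the personalized model is pulled toward the global one. Setting λ = 0 recovers purely local training, a very large λ recovers the global model, and the paper argues that intermediate values can improve both fairness and robustness.

For the one-shot clustering paper, the rough communication pattern can be sketched as below. This is illustrative only: the paper's k-FED method uses a particular local solver and theory-guided choices of the per-device cluster count, which we simplify here with off-the-shelf k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def one_shot_federated_clustering(device_datasets, k_local, k_global, seed=0):
    """One round of communication: each device clusters its own data
    locally and sends only its k_local cluster centers; the server then
    clusters the pooled centers into k_global groups."""
    local_centers = []
    for X in device_datasets:
        km = KMeans(n_clusters=k_local, n_init=10, random_state=seed).fit(X)
        local_centers.append(km.cluster_centers_)  # only centers leave the device
    pooled = np.vstack(local_centers)
    server = KMeans(n_clusters=k_global, n_init=10, random_state=seed).fit(pooled)
    return server.cluster_centers_

# Toy usage: three devices, each holding data from only a subset of the
# underlying clusters (i.e., heterogeneous local distributions).
rng = np.random.default_rng(0)
blobs = [[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]]
devices = [rng.normal(loc=b, scale=0.1, size=(200, 2)) for b in blobs]
print(one_shot_federated_clustering(devices, k_local=2, k_global=3))
```

Heterogeneity helps here because devices whose data concentrates on a few clusters can recover their local structure accurately with very few centers, which is the intuition behind the paper's title.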

Thanks to our sponsor SigOpt

SigOpt was born out of the desire to make experts more efficient. While co-founder Scott Clark was completing his PhD at Cornell, he noticed that the final stage of research was often a domain expert tweaking what they had built via trial and error. After completing his PhD, Scott developed MOE to solve this problem and used it to optimize machine learning models and A/B tests at Yelp. SigOpt was founded in 2014 to bring this technology to every expert in every field.

