Deep Gradient Compression for Distributed Training with Song Han
EPISODE 146 | MAY 31, 2018
About this Episode
On today's show I chat with Song Han, assistant professor in MIT's EECS department, about his research on Deep Gradient Compression.
In our conversation, we explore the challenge of distributed training for deep neural networks and the idea of compressing the gradient exchange so that it can be done more efficiently. Song details the evolution of distributed training systems based on this idea and gives a few examples of centralized and decentralized distributed training architectures, such as Uber's Horovod, as well as the approaches native to PyTorch and TensorFlow. Song also addresses potential issues that arise with distributed training, such as loss of accuracy and generalizability, and much more.
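To make the gradient-compression idea concrete, here is a minimal PyTorch sketch in the spirit of Deep Gradient Compression: only the largest-magnitude gradient entries are exchanged each step, and the rest are accumulated locally so they are not lost. This is an illustrative example, not the paper's reference implementation; the 1% compression ratio, the `sparsify_gradient` function name, and the `residual` buffer are assumptions for the sake of the sketch.

```python
import torch

def sparsify_gradient(grad: torch.Tensor, residual: torch.Tensor, ratio: float = 0.01):
    """Return (values, indices) of the top-magnitude entries of grad + residual.

    Entries that are not sent stay in `residual` and are added back on the
    next step (local gradient accumulation), so no information is discarded.
    """
    accumulated = grad + residual
    k = max(1, int(accumulated.numel() * ratio))
    flat = accumulated.view(-1)
    # Pick the k largest-magnitude entries to transmit.
    _, indices = torch.topk(flat.abs(), k)
    values = flat[indices]
    # Keep everything else in the local residual for future steps.
    residual.copy_(accumulated)
    residual.view(-1)[indices] = 0.0
    return values, indices

# Example: compress a synthetic gradient for one parameter tensor.
grad = torch.randn(1000)
residual = torch.zeros_like(grad)
values, indices = sparsify_gradient(grad, residual)
print(f"sent {values.numel()} of {grad.numel()} entries")
```

In a real distributed setup, only `values` and `indices` would be exchanged between workers (e.g., via an allreduce or parameter server), which is where the bandwidth savings discussed in the episode come from.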
About the Guest
Song Han
Massachusetts Institute of Technology