SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Today we’re joined by Beidi Chen, a PhD student at Rice University, who, along with her co-authors, developed a cheaper, algorithm-driven CPU alternative to state-of-the-art GPU machines in their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems.

The team set out to address the computational bottleneck that often limits the training of large-scale neural networks. “We usually have a brute force way of doing things…or the time to achieve it…But the problem is we usually don’t have that time and computational power.”

Their solution is an algorithm called SLIDE (Sub-LInear Deep learning Engine), which combines three elements: randomized algorithms (using randomness, in the form of locality-sensitive hashing, to avoid computing every neuron); multicore parallelism (performing many computations simultaneously across CPU cores); and workload optimization (tuning data processing for maximum throughput).
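To give a flavor of the randomized-algorithms piece, here is a minimal sketch of LSH-based sparse neuron selection in Python with NumPy. This is an illustrative toy, not the paper's actual implementation (SLIDE is a heavily engineered C++ system); the class and function names, hash parameters, and layer sizes are all hypothetical. The idea: hash each neuron's weight vector with signed random projections (SimHash), then at inference hash the input and compute activations only for neurons in the matching bucket.

```python
import numpy as np

class SimHashTable:
    """Toy LSH table: buckets vectors by signed random projections (SimHash)."""
    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))  # random hyperplanes
        self.buckets = {}

    def _key(self, v):
        bits = (self.planes @ v) > 0                # sign pattern of projections
        return int(bits @ (1 << np.arange(bits.size)))

    def insert(self, idx, v):
        self.buckets.setdefault(self._key(v), []).append(idx)

    def query(self, v):
        return self.buckets.get(self._key(v), [])


def sparse_forward(x, W, b, table):
    """Compute activations only for neurons whose weights hash to x's bucket."""
    active = table.query(x)
    out = np.zeros(W.shape[0])
    if active:
        idx = np.asarray(active)
        out[idx] = W[idx] @ x + b[idx]              # small subset, not the whole layer
    return out, active


# Hypothetical sizes, for illustration only.
rng = np.random.default_rng(1)
dim, n_neurons = 32, 1000
W = rng.standard_normal((n_neurons, dim))
b = rng.standard_normal(n_neurons)

table = SimHashTable(dim, n_bits=6, seed=2)
for i, w in enumerate(W):
    table.insert(i, w)

x = rng.standard_normal(dim)
out, active = sparse_forward(x, W, b, table)
dense = W @ x + b                                   # full layer, for comparison
```

On the active subset the sparse output matches the dense computation exactly, but only a small fraction of the layer is ever touched — the sub-linear effect the paper exploits at much larger scale, combined with multicore parallelism across training samples.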

Check out our full write-up on this interview here.

Connect with Beidi!

Resources

Join Forces!

“More On That Later” by Lee Rosevere licensed under CC By 4.0
