Today we’re joined by Beidi Chen, a PhD student at Rice University, who, along with her co-contributors, developed a cheaper, algorithmic, CPU-based alternative to state-of-the-art GPU machines in their paper, SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems.
Subscribe: iTunes / Google Play / Spotify / RSS
The team aimed to address the computational bottleneck that often arises when training large-scale neural networks. “We usually have a brute force way of doing things…or the time to achieve it…But the problem is we usually don’t have that time and computational power.”
Their solution is an algorithm called SLIDE (Sub-LInear Deep learning Engine), which combines a few elements: randomized algorithms (applying randomness in its logic), multicore parallelism (performing several computations at the same time), and workload optimization (maximizing performance during data processing).
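To give a flavor of the randomized-algorithm ingredient: SLIDE uses locality-sensitive hashing (LSH) to retrieve, for each input, a small set of “active” neurons and compute only those, rather than the full dense layer. The sketch below is a minimal illustration of that idea using signed random projections (SimHash) — it is not the authors’ implementation, and all names (`simhash`, `sparse_forward`, the layer sizes) are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_neurons, n_bits = 16, 1000, 8
W = rng.standard_normal((n_neurons, d))    # one weight row per neuron
planes = rng.standard_normal((n_bits, d))  # random hyperplanes for SimHash

def simhash(v):
    # Sign of the projection onto each hyperplane gives one bit;
    # pack the bits into an integer bucket id.
    bits = (planes @ v) > 0
    return int(np.dot(bits, 1 << np.arange(n_bits)))

# Preprocessing: hash every neuron's weight vector into a bucket.
table = {}
for i, w in enumerate(W):
    table.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    # At inference/training time, hash the input and compute
    # activations only for neurons that collide with it.
    active = table.get(simhash(x), [])
    return active, W[np.array(active, dtype=int)] @ x

x = rng.standard_normal(d)
active, acts = sparse_forward(x)
# Typically only a handful of the 1000 neurons are touched per input.
```

Because similar vectors collide with high probability under SimHash, the retrieved neurons are roughly those with the largest inner products with the input — which is what makes the per-sample computation sub-linear in layer width, and what the multicore-parallelism and workload-optimization pieces then exploit.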
Check out our full write-up on this interview here.
Connect with Beidi!
- Paper: SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems
- Delicious: a dataset generated from bookmarks on the del.icio.us site
- Join the TWIML Community!
- Check out our TWIML Presents: series page!
- Register for the TWIML Newsletter
- Check out the official TWIMLcon:AI Platform video packages here!
- Download our latest eBook, The Definitive Guide to AI Platforms!
“More On That Later” by Lee Rosevere licensed under CC By 4.0