Machine Learning for Datacenter Optimization, a ‘Crazy’ New GPU from NVIDIA & Faster RNN Training – TWiML 2016/07/22

This Week in Machine Learning & AI

This week’s show covers Google’s use of machine learning to cut datacenter power consumption, NVIDIA’s new ‘crazy, reckless’ GPU, and a new Layer Normalization technique that promises to reduce the training time for deep neural networks. Plus, a bunch more.
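For context on that last item: unlike batch normalization, Layer Normalization computes the mean and variance over a single sample’s hidden units rather than over the batch, which is what makes it practical for RNNs and online training. Here’s a minimal NumPy sketch of the idea (function name, shapes, and the `eps` term are illustrative, not taken from the paper’s code):

```python
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    """Normalize each sample's activations across its features (not across the batch).

    a:    (batch, hidden) summed inputs / pre-activations
    gain: (hidden,) learned per-feature scale ("g" in the paper)
    bias: (hidden,) learned per-feature shift ("b" in the paper)
    """
    mu = a.mean(axis=-1, keepdims=True)    # per-sample mean over hidden units
    sigma = a.std(axis=-1, keepdims=True)  # per-sample std over hidden units
    return gain * (a - mu) / (sigma + eps) + bias

# Example: a batch of 4 samples with 8 hidden units each
x = np.random.randn(4, 8)
out = layer_norm(x, gain=np.ones(8), bias=np.zeros(8))
```

Because the statistics are per-sample, the computation is identical at training and test time and doesn’t depend on batch size, which is the property the paper leverages to speed up RNN training.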

This week’s podcast is sponsored by Cloudera, organizers of the Wrangle Conference which is coming up in San Francisco on July 28th. Check out the event page for information on the great talks and speakers they’ve got planned, and if you decide to register, use the code “COMMUNITY” for 20% off!

Here are the notes for this week’s podcast:

Google Drives Datacenter Efficiency with Deep Learning

Google’s Cloud Natural Language and Cloud Speech APIs in Public Beta

NVIDIA’s New “Crazy, Reckless” GPU

Sketchy Enterprise AI Adoption Numbers

Layer Normalization for Faster RNN Training

TensorFlow With Latest RNN & NLP Papers

Projects

Image: NVIDIA

1 comment
  • Paddy

    Hi Sam,

    Really enjoying the podcast, keep up the good work.

    As someone who is relatively new to the space (just finished the Titanic Kaggle), I really enjoyed the recommendation for Abhishek’s blog post Approaching (almost) any machine learning problem. Given that this post focuses on applying ML algorithms, is there anything that you know of that deals with the data munging / cleaning etc. that happens prior to this?

    Thanks
