NVIDIA’s New “Crazy, Reckless” GPU For Deep Learning

This post is an excerpt from the July 22, 2016 edition of the This Week in Machine Learning & AI podcast. You can listen or subscribe to the podcast below.

Last week, at a machine learning meetup at Stanford University, NVIDIA CEO Jen-Hsun Huang unveiled the company’s new flagship GPU, the NVIDIA TITAN X, and gifted the first device off the assembly line to famed ML researcher Andrew Ng. The new TITAN X, which shares its name with the previous version of the device, is based on the company’s new Pascal graphics architecture, unveiled back in May.

The company is so excited about the card that its blog post introducing it throws around superlatives like Biggest, Ultimate, Irresponsible, Crazy, and Reckless. It also cites some impressive numbers, including these:

  • 11 teraflops of 32-bit (FP32) floating-point performance
  • 44 trillion INT8 operations per second
  • 12 billion transistors
  • 3,584 CUDA cores running at 1.53 GHz
  • 12 GB of GDDR5X memory with 480 GB/s of bandwidth
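As a sanity check, those headline figures are consistent with one another. Here’s a rough back-of-the-envelope calculation, assuming the standard peak-throughput conventions (2 FLOPs per core per clock via fused multiply-add, 4 INT8 ops per FP32 op via Pascal’s dp4a instruction, and a 384-bit memory bus at 10 Gbps per pin, per AnandTech’s reporting):

```python
# Back-of-the-envelope check of NVIDIA's TITAN X (Pascal) spec sheet.

cuda_cores = 3584
boost_clock_ghz = 1.53

# Peak FP32: each core retires 2 FLOPs per clock (one fused multiply-add).
fp32_tflops = cuda_cores * boost_clock_ghz * 2 / 1000.0
print(f"FP32: {fp32_tflops:.1f} TFLOPS")   # ~11 TFLOPS, matching the claim

# Peak INT8: dp4a packs 4 INT8 ops into each FP32 slot (assumed 4x ratio).
int8_tops = fp32_tflops * 4
print(f"INT8: {int8_tops:.0f} TOPS")       # ~44 TOPS, matching the claim

# Memory bandwidth: 10 Gbps GDDR5X across a 384-bit bus (assumed figures).
pin_rate_gbps = 10
bus_width_bits = 384
bandwidth_gb_s = pin_rate_gbps * bus_width_bits / 8
print(f"Bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 480 GB/s, matching the claim
```

All three marketing numbers fall straight out of the hardware parameters, so they are peak theoretical rates, not measured benchmarks.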

The other number NVIDIA tossed out was 1,200: the price of the card in US dollars.

Now, not everyone is as excited about this card as NVIDIA is. Indeed, for gamers, what NVIDIA is offering with the TITAN X is a GPU that’s about 25% faster than the company’s standby offering, the GTX 1080, but at double the cost.

But that may be because the company is targeting deep learning researchers, rather than gamers, with the TITAN X. (In fact, CEO Jen-Hsun said as much at the product launch.) For people working on deep learning, the specs of the TITAN X should allow it to speed up model training by 30-60%, which can save a researcher weeks of time and computing costs.
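To put that 30-60% range in concrete terms, here’s a quick sketch of what it means for wall-clock time (the four-week baseline is a hypothetical training run, not a figure from the post):

```python
# How much wall-clock time a 30-60% throughput gain saves on a long run.

baseline_weeks = 4.0  # hypothetical training run on the previous-generation card

for speedup in (1.30, 1.60):  # the 30-60% range quoted above
    new_weeks = baseline_weeks / speedup
    saved = baseline_weeks - new_weeks
    print(f"{speedup:.2f}x faster: {new_weeks:.1f} weeks ({saved:.1f} weeks saved)")
```

Even at the low end, a month-long run finishes nearly a week sooner; at the high end it drops to two and a half weeks, which is where the “saves weeks of time” claim comes from.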

The best technical preview I’ve found of the new card, which comes out on August 2nd, is over on AnandTech. Of course I’ll be dropping a link to that article, and all the others I mention on the show, into the show notes, available at twimlai.com.
