This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.”
Subscribe: iTunes / Google Play / Spotify / RSS
In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement learning and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page.
Join the Giveaway!
Recently we hit a very exciting milestone for the podcast: One Million Listens!!! We’d hate to miss an opportunity to show you some love, so we’re holding another listener appreciation contest to celebrate the occasion. Tweet to us @twimlai using #TWIML1MIL to enter. Every entry gets a fly #TWIML1MIL sticker plus a chance to win one of 10 limited edition t-shirts commemorating the occasion. We’ll be giving away some other mystery prizes from the magic TWiML swag bag along the way, so you should definitely enter. If you’re not on Twitter, or want more ways to enter, just look below for more chances to win!!!
Thanks to our Sponsor
I’d like to thank our friends over at Intel Nervana for their sponsorship of this podcast and our NIPS series. While Intel was very active at NIPS, with a bunch of workshops, demonstrations, and poster sessions, their big news at the conference was the first public showing of the Intel Nervana™ Neural Network Processor, or NNP. The goal of the NNP architecture is to provide the flexibility needed to support deep learning primitives while making the core hardware components as efficient as possible, giving neural network designers powerful tools for solving larger and more difficult problems while minimizing data movement and maximizing data re-use. To learn more about Intel’s AI Products Group and the Intel Nervana NNP, visit IntelNervana.com.