This Week in Machine Learning & AI

    Learning State Representations with Yael Niv

    This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.”

    In this interview, Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can yield insights into machine learning problems such as reinforcement learning and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page.
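
    If you’d like to play with the core idea before (or after) listening, here’s a minimal sketch in Python. To be clear, this is my own toy illustration, not anything from Yael’s talk: a single perceptron can’t separate points inside a circle from points outside it when fed raw (x, y) coordinates, but the very same task becomes linearly separable, and trivially learnable, once the inputs are re-expressed in polar coordinates. It mirrors the parallel-parking anecdote Yael tells about 14 minutes in.

        # Toy illustration of how the input representation makes or
        # breaks a simple learner. Hypothetical task: label points by
        # whether they fall inside the unit circle.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-2, 2, size=(500, 2))                    # raw (x, y) features
        y = np.where(np.hypot(X[:, 0], X[:, 1]) < 1.0, 1, -1)    # +1 inside the circle

        def train_perceptron(features, labels, epochs=100):
            """Classic perceptron: on each mistake, w <- w + y * x."""
            phi = np.hstack([features, np.ones((len(features), 1))])  # append bias term
            w = np.zeros(phi.shape[1])
            for _ in range(epochs):
                for x_i, y_i in zip(phi, labels):
                    if y_i * (w @ x_i) <= 0:
                        w += y_i * x_i
            return np.mean(np.sign(phi @ w) == labels)           # training accuracy

        # Cartesian features: no straight line separates the classes,
        # so the perceptron can never reach perfect accuracy.
        acc_xy = train_perceptron(X, y)

        # Polar features: the boundary is simply r < 1, which is
        # linear in (r, theta), so the perceptron nails it.
        polar = np.column_stack([np.hypot(X[:, 0], X[:, 1]),
                                 np.arctan2(X[:, 1], X[:, 0])])
        acc_polar = train_perceptron(polar, y)

        print(f"accuracy with (x, y):     {acc_xy:.2f}")     # well below 1.0
        print(f"accuracy with (r, theta): {acc_polar:.2f}")  # at or near 1.0

    Same data, same learner; the only thing that changed is the state representation.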

    Join the Giveaway!

    Recently we hit a very exciting milestone for the podcast: One Million Listens! We’d hate to miss an opportunity to show you some love, so we’re holding another listener appreciation contest to celebrate the occasion. Tweet to us @twimlai using #TWIML1MIL to enter. Every entry gets a fly #TWIML1MIL sticker, plus a chance to win one of 10 limited edition t-shirts commemorating the occasion. We’ll also be giving away some mystery prizes from the magic TWiML swag bag along the way, so you should definitely enter. If you’re not on Twitter, or want more ways to enter, just look below for more chances to win!

    Thanks to our Sponsor

    I’d like to thank our friends over at Intel Nervana for their sponsorship of this podcast and our NIPS series. While Intel was very active at NIPS, with a bunch of workshops, demonstrations, and poster sessions, their big news there was the first public viewing of the Intel Nervana™ Neural Network Processor, or NNP. The NNP architecture aims to provide the flexibility needed to support deep learning primitives while making the core hardware components as efficient as possible, giving neural network designers powerful tools for solving larger and more difficult problems while minimizing data movement and maximizing data re-use. To learn more about Intel’s AI Products Group and the Intel Nervana NNP, visit IntelNervana.com.

    5 comments
    • Charl Botha

      As I was listening to this highly enjoyable podcast this morning, Prof. Niv told the story (at about 14 minutes in) about the parallel-parking neural network that was overtaken (excuse the pun) by a single perceptron the year after, thanks to changing the representation to polar coordinates.

      Google could not help me find any more information about this story. Do you perhaps have any pointers so I can try and track down the source and more details of the story?

      (P.S. I am a regular listener and huge fan of TWIML&AI, thank you very much for creating and running this!!)
