Evaluating Model Explainability Methods with Sara Hooker

This Week in Machine Learning & AI

In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain.

I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and when it's important, and explore some nuances, like the distinction between interpreting model decisions and interpreting model function. We also dig into her paper Evaluating Feature Importance Estimates and look at how this work relates to interpretability approaches like LIME.

We also talk a bit about Google, in particular, the relationship between Brain and the rest of the Google AI landscape and the significance of the recently announced Google AI Lab in Accra, Ghana, being led by friend of the show Moustapha Cisse. And, of course, we chat a bit about the Indaba as well.

Thanks to our Sponsor!

I’d like to send a big shout-out to our friends at Google AI for their support of the podcast and their sponsorship of this series. In this episode you heard Sara talk about the AI Residency program she’s in at Google. Well, just yesterday they opened up applications for the 2019 program! The Google AI Residency is a one-year machine learning research training program with the goal of helping individuals become successful machine learning researchers. The program seeks Residents from a diverse range of educational and professional backgrounds from all over the world, so if this interests you, you should definitely apply! Find out more about the program at g.co/airesidency.

About Sara

Mentioned in the Interview

“More On That Later” by Lee Rosevere, licensed under CC BY 4.0
