Studying Machine Intelligence with Been Kim
EPISODE 571 | MAY 9, 2022
About this Episode
Today we continue our ICLR coverage joined by Been Kim, a staff research scientist at Google Brain and an ICLR 2022 invited speaker. Been, whose research has historically focused on interpretability in machine learning, delivered the keynote Beyond Interpretability: Developing a Language to Shape Our Relationships with AI. The talk explores the need to study AI machines as scientific objects, both in isolation and in interaction with humans, work that will not only yield principles for building tools but is also necessary to take our working relationship with AI to the next level.
Before we dig into Been’s talk, she characterizes where we are as an industry and community with interpretability, and what the current state of the art is for interpretability techniques. We explore how Gestalt principles appear in neural networks, why Been chose to characterize communication with machines as a language rather than a set of principles or a foundational understanding, and much, much more.
About the Guest
Been Kim
Google Brain
Resources
- Blog: Beyond interpretability: developing a language to shape our relationships with AI by Been Kim
- Conference: Beyond interpretability: developing a language to shape our relationships with AI
- Paper: Neural Networks Trained on Natural Scenes Exhibit Gestalt Closure
- Paper: Saliency Maps Contain Network "Fingerprints"
- Paper: Subgoal-Based Explanations for Unreliable Intelligent Decision Support Systems
- Paper: Debugging Tests for Model Explanations
- Paper: Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation
- Paper: Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
- Paper: Concept Bottleneck Models
- Paper: Towards Automatic Concept-based Explanations
- Paper: On Completeness-aware Concept-Based Explanations in Deep Neural Networks
- Paper: DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
- Paper: Acquisition of Chess Knowledge in AlphaZero
- Can Language Models Be Too Big? With Emily M. Bender and Margaret Mitchell - #467
- Social Intelligence with Blaise Aguera y Arcas - #340