About This Episode
Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta.
We caught up with Alona on the heels of an interesting panel discussion she participated in, centered on improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects such as applying a reinforcement learning framework to improve language generation.
Watch on YouTube
Connect with Alona!
- Paper: Predictive Representation Learning for Language Modeling
- Paper: Words as a window: Using word embeddings to explore the learned representations of Convolutional Neural Networks
- Paper: Neural representation of words within phrases: Temporal evolution of color-adjectives and object-nouns during simple composition
- Profile: Tom Mitchell
- Profile: Janet Werker
- Workshop: How Can Findings About The Brain Improve AI Systems?
- Word2Vec & Friends with Bruno Goncalves – #048