Towards Improved Transfer Learning with Hugo Larochelle
EPISODE 631 | MAY 29, 2023
About this Episode
Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation, Hugo discusses his work on transfer learning, understanding the capabilities of deep learning models, and the creation of the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in that system.
About the Guest
Hugo Larochelle
Resources
- Paper: Head2Toe: Utilizing intermediate representations for better transfer learning
- Paper: Repository-Level Prompt Generation for Large Language Models of Code
- Journal: Transactions on Machine Learning Research
- Paper: A universal representation transformer layer for few-shot image classification
- Paper: Extracting and composing robust features with denoising autoencoders
- Paper: Zero-data learning of new tasks
- Paper: Optimization as a model for few-shot learning
- Keynote: Perspectives on knowledge acquisition & mobilization with neural net - CoLLAs 2022
- Learning Representations for Visual Search with Naila Murray