About This Episode
Today we’re joined by Alexander Richard, a research scientist at Facebook Reality Labs, and recipient of the ICLR Best Paper Award for his paper “Neural Synthesis of Binaural Speech From Mono Audio.”
We begin our conversation with a look into the charter of Facebook Reality Labs and Alex’s Codec Avatar project, where his team is developing AR/VR technology for social telepresence. Of course, we dig into the aforementioned paper, discussing the difficulty of improving audio quality and the role of dynamic time warping, as well as the challenges of building this model. Finally, Alex shares his thoughts on 3D audio rendering and other future research directions.
If you’re a fan of this episode, you might also enjoy our conversation with Jesse Engel.
Watch on YouTube
Connect with Alexander!
- Paper: Neural Synthesis of Binaural Speech From Mono Audio
- Paper: MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement
- Paper: Audio- and Gaze-driven Facial Animation of Codec Avatars
- Blog: Facebook is building the future of connection with lifelike avatars