In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, a lecturer in the Electrical and Electronic Engineering department at Stellenbosch University in South Africa and a co-organizer of the Indaba.
Subscribe: iTunes / Google Play / Spotify / RSS
Herman and I discuss his work on limited- and zero-resource speech recognition, how those differ from conventional speech recognition, and the tension between linguistic and statistical methods in this space. We also dive into the specifics of the methods being used and developed in Herman’s lab, including how phoneme data is used to segment and process speech.
Thanks to our Sponsor!
We’d like to send a big shout out to our friends at Google AI for their support of the podcast and their sponsorship of this series. In this podcast you heard Sara talk about the AI Residency program she’s in at Google. Well, just yesterday they opened up applications for the 2019 program! The Google AI Residency is a one-year machine learning research training program with the goal of helping individuals become successful machine learning researchers. The program seeks Residents from a very diverse set of educational and professional backgrounds from all over the world, so if you think this is something that interests you, you should definitely apply! Find out more about the program at g.co/airesidency.
Mentioned in the Interview
- Deep Learning Indaba
- Paper: Low-Resource Speech-to-Text Translation
- Emmanuel Dupoux
- Sharon Goldwater
- Deep Learning Indaba Series Page
- TWIML Presents: Series page
- TWIML Events Page
- TWIML Meetup
- TWIML Newsletter
“More On That Later” by Lee Rosevere, licensed under CC BY 4.0