The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) Thu, 17 Sep 2020 18:33:55 +0000 Thu, 17 Sep 2020 20:38:07 +0000 Libsyn WebEngine 2.0 https://twimlai.com en https://twimlai.com team@twimlai.com (team@twimlai.com) https://ssl-static.libsyn.com/p/assets/c/d/6/9/cd6983cef600ee9d/TWIML_AI_Podcast_Official_Cover_Art_1400px.png The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) Sam Charrington ai,artificialintelligence,datascience,machinelearning,ml,samcharrington,tech,technology,thetwimlaipocast,thisweekinmachinelearning,twiml,twimlaipodcast no team@twimlai.com episodic http://twimlai.libsyn.com/rss Understanding Cultural Style Trends with Computer Vision w/ Kavita Bala - #410 Understanding Cultural Style Trends with Computer Vision w/ Kavita Bala Thu, 17 Sep 2020 18:33:55 +0000 Today we’re joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University. 

Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including GrokStyle, a startup recently acquired by Facebook whose technology is now being deployed across Facebook’s Marketplace features. We also talk about StreetStyle/GeoStyle, projects focused on using social media data to find style clusters across the globe.

Kavita also shares her thoughts on the privacy and security implications of this work, progress on integrating privacy-preserving techniques into vision projects like the ones she works on, and what’s next for her research.

The complete show notes for this episode can be found at twimlai.com/go/410.

38:48 clean podcast,facebook,technology,tech,data,cornell,ai,socialmedia,ml,artificialintelligence,machinelearning,datascience,computervision,streetstyle,twiml,grokstyle,humanperception,geostyle,groknet,kavitabala 410 full Sam Charrington
That's a VIBE: ML for Human Pose and Shape Estimation with Nikos Athanasiou, Muhammed Kocabas, Michael Black - #409 That's a VIBE: ML for Human Pose and Shape Estimation with Nikos Athanasiou, Muhammed Kocabas, Michael Black Mon, 14 Sep 2020 20:37:40 +0000 Today we’re joined by Nikos Athanasiou, Muhammed Kocabas, Ph.D. students, and Michael Black, Director of the Max Planck Institute for Intelligent Systems. 

We caught up with the group to explore their paper VIBE: Video Inference for Human Body Pose and Shape Estimation, which they submitted to CVPR 2020. In our conversation, we explore the problem that they’re trying to solve through an adversarial learning framework, the datasets (AMASS) that they’re building upon, the core elements that separate this work from its predecessors in this area of research, and the results they’ve seen through their experiments and testing.

 The complete show notes for this episode can be found at https://twimlai.com/go/409.

Register for TWIMLfest today!

44:51 clean podcast,technology,tech,data,vibe,ai,ml,artificialintelligence,machinelearning,datascience,motioncapture,computervision,twiml,amass,michaelblack,nikosathanasiou,muhammedkocabas,maxplanckinstituteforintelligentsystems,humanmeshrecovery,videoinference 409 full Sam Charrington
3D Deep Learning with PyTorch 3D w/ Georgia Gkioxari - #408 3D Deep Learning with PyTorch 3D w/ Georgia Gkioxari Thu, 10 Sep 2020 17:50:11 +0000 Today we’re joined by Georgia Gkioxari, a research scientist at Facebook AI Research. 

Georgia was hand-picked by the TWIML community to discuss her work on the recently released open-source library PyTorch3D. In our conversation, Georgia describes her experiences as a computer vision researcher prior to the 2012 deep learning explosion, and how the entire landscape has changed since then. 

Georgia walks us through the user experience of PyTorch3D, while also detailing who the target audience is, why the library is useful, and how it fits in the broad goal of giving computers better means of perception. Finally, Georgia gives us a look at what it’s like to be a co-chair for CVPR 2021 and the challenges with updating the peer review process for the larger academic conferences. 

The complete show notes for this episode can be found at twimlai.com/go/408.

36:46 clean technology,data,python,cnn,fair,perception,ai,cuda,ml,artificialintelligence,machinelearning,datascience,computervision,deeplearning,cvpr,twiml,facebookairesearch,pytorch3d,georgiagkioxari,graphmodeling 408 full Sam Charrington
What are the Implications of Algorithmic Thinking? with Michael I. Jordan - #407 What are the Implications of Algorithmic Thinking? with Michael I. Jordan Mon, 07 Sep 2020 11:43:29 +0000 Today we’re joined by the legendary Michael I. Jordan, Distinguished Professor in the Departments of EECS and Statistics at UC Berkeley. 

Michael was gracious enough to connect with us all the way from Italy after being named IEEE’s 2020 John von Neumann Medal recipient. In our conversation, we explore his career path and how influences from other fields, like philosophy, shaped it.

We spend quite a bit of time discussing his current exploration into the intersection of economics and AI, and how machine learning systems could be used to create value and empowerment across many industries through “markets.” We also touch on the potential of “interacting learning systems” at scale, the valuation of data, the commoditization of human knowledge into computational systems, and much, much more.

The complete show notes for this episode can be found at twimlai.com/go/407.

57:27 clean podcast,technology,tech,data,philosophy,biology,statistics,ai,ml,artificialintelligence,machinelearning,ucberkeley,datascience,thoughtleader,twiml,michaelijordan,interactinglearningsystems,humanknowledge,bayesiannetworks,stevestout 407 full Sam Charrington
Beyond Accuracy: Behavioral Testing of NLP Models with Sameer Singh - #406 Beyond Accuracy: Behavioral Testing of NLP Models with Sameer Singh Thu, 03 Sep 2020 19:10:48 +0000 Today we’re joined by Sameer Singh, an assistant professor in the department of computer science at UC Irvine. 

Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language processing. We caught up with Sameer right after he was awarded the best paper award at ACL 2020 for his work on Beyond Accuracy: Behavioral Testing of NLP Models with CheckList.

In our conversation, we explore CheckList, the task-agnostic methodology for testing NLP models introduced in the paper. We also discuss how well we understand the causes of pitfalls and failure modes in deep learning models, Sameer’s thoughts on embodied AI, and his work on the now-famous LIME paper, which he co-authored alongside Carlos Guestrin.

The complete show notes for this episode can be found at twimlai.com/go/406.

41:11 clean podcast,technology,tech,data,ai,lime,nlp,checklists,ml,artificialintelligence,machinelearning,datascience,ucirvine,twiml,blackboxes,sameersingh,carlosguestrin,acl2020bestpaper,behavioraltesting,embodiedai 406 full Sam Charrington
How Machine Learning Powers On-Demand Logistics at Doordash with Gary Ren - #405 How Machine Learning Powers On-Demand Logistics at Doordash with Gary Ren Mon, 31 Aug 2020 20:27:27 +0000 Today we’re joined by Gary Ren, a machine learning engineer for the logistics team at DoorDash. 

In our conversation, we explore how machine learning powers the entire logistics ecosystem. We discuss the stages of their “marketplace,” and how using ML for optimized route planning and matching affects consumers, dashers, and merchants. We also talk through their use of traditional mathematics and classical machine learning, potential use cases for reinforcement learning frameworks, and the challenges of implementing these explorations.

The complete show notes for this episode can be found at twimlai.com/go/405!

Check out our upcoming event at twimlai.com/twimlfest

43:48 clean podcast,technology,tech,data,microsoft,bing,pandemic,ai,optimization,logistics,ml,artificialintelligence,machinelearning,datascience,doordash,fooddelivery,twiml,reinforcementlearning,nvidiagtc,garyren 405 full Sam Charrington
Machine Learning as a Software Engineering Discipline with Dillon Erb - #404 Machine Learning as a Software Engineering Discipline with Dillon Erb Thu, 27 Aug 2020 19:23:44 +0000 Today we’re joined by Dillon Erb, Co-founder & CEO of Paperspace.

We’ve followed Paperspace from their origins offering GPU-enabled compute resources to data scientists and machine learning developers, through the release of their Jupyter-based Gradient service. Our conversation with Dillon centered on the challenges that organizations face in building and scaling repeatable machine learning workflows, and how they’ve addressed these challenges in their own platform by applying time-tested software engineering practices.

We also discuss the importance of reproducibility in production machine learning pipelines, how the processes and tools of software engineering map to the machine learning workflow, and technical issues that ML teams run into when trying to scale the ML workflow.

The complete show notes for this episode can be found at twimlai.com/go/404.

44:39 clean podcast,ai,gpu,argo,ml,artificialintelligence,pipelines,machinelearning,datascience,deeplearning,paperspace,twiml,fastai,jeremyhoward,mlops,kubeflow,dillonerb,rachelthomas,jupyternotebooks,mlplatforms 404 full Sam Charrington
AI and the Responsible Data Economy with Dawn Song - #403 AI and the Responsible Data Economy with Dawn Song Mon, 24 Aug 2020 20:02:06 +0000 Today we’re joined by Dawn Song, Professor of Computer Science at UC Berkeley. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her startup, Oasis Labs.

In our conversation, we explore their goals of building a ‘platform for a responsible data economy,’ which would combine techniques like differential privacy, blockchain, and homomorphic encryption. The platform would give consumers more control of their data, and enable businesses to better utilize data in a privacy-preserving and responsible way. 

We also discuss how to privatize and anonymize data in language models like GPT-3, real-world examples of adversarial attacks and how to train against them, her work on program synthesis as a path toward AGI, and her efforts to privatize coronavirus contact tracing data.

The complete show notes for this episode can be found at twimlai.com/go/403.

52:17 clean podcast,technology,tech,ai,agi,ml,coronavirus,artificialintelligence,blockchain,machinelearning,ucberkeley,datascience,twiml,differentialprivacy,homomorphicencryption,adversarialattacks,dawnsong,responsibledataeconomy,oasislabs,programsynthesis 403 full Sam Charrington
Relational, Object-Centric Agents for Completing Simulated Household Tasks with Wilka Carvalho - #402 Relational, Object-Centric Agents for Completing Simulated Household Tasks with Wilka Carvalho Thu, 20 Aug 2020 17:52:49 +0000 Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor.

We first met Wilka at the Black in AI workshop at last year’s NeurIPS conference, and finally got a chance to catch up about his latest research, ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning.’ In the paper, Wilka explores the challenge of object interaction tasks, focusing on everyday, in-home functions like filling a cup of water in a sink.

In our conversation, we discuss his interest in understanding the foundational building blocks of intelligence, how he’s addressing the challenge of ‘object-interaction’ tasks, and the biggest obstacles he’s run into along the way.

The complete show notes for this episode can be found at twimlai.com/go/402.

41:04 clean podcast,technology,tech,data,ai,ml,artificialintelligence,machinelearning,datascience,universityofmichigan,twiml,reinforcementlearning,transformermodel,blackinai,wilkacarvalho,objectmodellearning,objectinteraction,representationlearning Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor. In our conversation, we focus on his paper ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning.’ In the paper, Wilka explores the challenge of object interaction tasks, focusing on everyday, in-home functions. We discuss how he’s addressing the challenge of ‘object-interaction’ tasks, and the biggest obstacles he’s run into along the way. 402 full Sam Charrington
Model Explainability Forum - #401 Model Explainability Forum Mon, 17 Aug 2020 19:28:01 +0000 Today we’re bringing you the latest TWIML Discussion Series panel on Model Explainability. The use of machine learning in business, government, and other settings that require users to understand the model’s predictions has exploded in recent years. This growth, combined with the increased popularity of opaque ML models like deep learning, has led to the development of a thriving field of model explainability research and practice. 

In this panel discussion, we bring together experts and researchers to explore the current state of explainability and some of the key emerging ideas shaping the field. Each guest will share their unique perspective and contributions to thinking about model explainability in a practical way.

We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more. We round out the session with an audience Q&A! Check out the list of resources below!

The complete show notes for this episode can be found at twimlai.com/go/401.

01:26:41 clean data,policy,legal,ibm,ai,blackbox,ml,artificialintelligence,machinelearning,datascience,explainability,interpretability,himalakkaraju,modelexplainability,stakeholderdrivenexplainability,adversarialattacks,rayidghani,solonbarocas,kushvarshney Today we bring you the latest Discussion Series: The Model Explainability Forum. Our group of experts and researchers explore the current state of explainability and discuss the key emerging ideas shaping the field. Each guest shares their unique perspective and contributions to thinking about model explainability in a practical way. We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more. 401 full Sam Charrington
What NLP Tells Us About COVID-19 and Mental Health with Johannes Eichstaedt - #400 What NLP Tells Us About COVID-19 and Mental Health with Johannes Eichstaedt Thu, 13 Aug 2020 15:31:37 +0000 Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University. 

Johannes joined us at the outset of the coronavirus pandemic to discuss his use of Facebook and Twitter data to measure the psychological states of large populations and individuals. In our conversation, we explore how Johannes applies his physics background to a career as a computational social scientist, the differences in communication on social media vs the real world, and what language indicators point to changes in mental health. 

We also discuss some of the major patterns that emerged in the data over the first few months of lockdown, spanning mental health, social norms, and politics. Finally, we explore how Johannes built the process, and the techniques he’s using to collect, sift through, and understand the data.

The complete show notes for this episode can be found at twimlai.com/go/400.

58:09 clean facebook,twitter,technology,tech,data,psychology,stanford,pandemic,ai,socialmedia,ml,coronavirus,artificialintelligence,mentalhealth,machinelearning,datascience,twiml,covid19,johanneseichstaedt,stanfordhai Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University. In our conversation, we explore how Johannes applies his physics background to a career as a computational social scientist, some of the major patterns in the data that emerged over the first few months of lockdown, including mental health, social norms, and political patterns. We also explore how Johannes built the process, and the techniques he’s using to collect, sift through, and understand the data. 400 full Sam Charrington
Human-AI Collaboration for Creativity with Devi Parikh - #399 Human-AI Collaboration for Creativity with Devi Parikh Mon, 10 Aug 2020 19:24:54 +0000 Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR). 

While Devi’s work is more broadly focused on computer vision applications, we caught up to discuss her presentation on AI and Creativity at the CV for Fashion, Art and Design workshop at CVPR 2020. In our conversation, we touch on Devi’s definition of creativity and explore the many ways AI could impact the creative process for artists and help humans become more creative. We also investigate tools like casual creator for preference prediction, neuro-symbolic generative art, and visual journaling.

The complete show notes for this episode can be found at twimlai.com/talk/399.

A quick reminder that this is your last chance to register for tomorrow’s Model Explainability Forum! For more information, visit https://twimlai.com/explainabilityforum.

44:56 clean podcast,art,technology,tech,data,creativity,ai,ml,artificialintelligence,machinelearning,datascience,computervision,twiml,vilbert,cvpr2020,deviparikh,neurosymbolicgenerativeart,facebookairesearch Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR). In our conversation, we touch on Devi’s definition of creativity, explore multiple ways that AI could impact the creative process for artists, and help humans become more creative. We investigate tools like casual creator for preference prediction, neuro-symbolic generative art, and visual journaling. 399 full Sam Charrington
Neural Augmentation for Wireless Communication with Max Welling - #398 Neural Augmentation for Wireless Communication with Max Welling Thu, 06 Aug 2020 19:12:09 +0000 Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In case you missed it, Max joined us last year to discuss his work on  Gauge Equivariant CNNs and Generative Models - the 2nd most popular episode of 2019. 

In this conversation, we explore the concept of neural augmentation, Max’s work on it, and how it’s being deployed for channel tracking and other applications. We also discuss Qualcomm’s current work on federated learning and on incorporating the technology into devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design.

The complete show notes for this episode can be found at twimlai.com/talk/398.

This episode is sponsored by Qualcomm Technologies.

49:15 clean podcast,technology,tech,data,ai,gans,qualcomm,ml,artificialintelligence,machinelearning,dataprivacy,datascience,twiml,universityofamsterdam,maxwelling,federatedlearning,quantummachinelearning,neuralaugmentation,chipdesign,graphneuralnetwork Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In our conversation, we explore Max’s work in neural augmentation, and how it’s being deployed. We also discuss his work with federated learning and incorporating the technology on devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design. 398 full Sam Charrington
Quantum Machine Learning: The Next Frontier? with Iordanis Kerenidis - #397 Quantum Machine Learning: The Next Frontier? with Iordanis Kerenidis Tue, 04 Aug 2020 17:09:42 +0000 Today we conclude our 2020 ICML coverage joined by Iordanis Kerenidis, Research Director at Centre National de la Recherche Scientifique (CNRS) in Paris, and Head of Quantum Algorithms at QC Ware.

Iordanis’ research centers on quantum algorithms for machine learning, and he was a keynote speaker on the topic at the ICML main conference. We focus our conversation on his presentation, exploring the prospects and challenges of quantum machine learning, as well as the field’s history, evolution, and future. We also discuss the foundations of quantum computing, and some of the challenges to consider for breaking into the field.

The complete show notes for this episode can be found at twimlai.com/talk/397. For complete ICML series details, visit twimlai.com/icml20.

01:01:36 clean physics,theoretical,quantumcomputing,supervisedlearning,icml,quantummachinelearning,iordaniskerenidis,qcware,centrenationaldelarecherchescientifique,convolutionalneuralnetworks,ewintang,quantumalgorithms Today we're joined by Iordanis Kerenidis, Research Director CNRS Paris and Head of Quantum Algorithms at QC Ware. Iordanis was an ICML main conference Keynote speaker on the topic of Quantum ML, and we focus our conversation on his presentation, exploring the prospects and challenges of quantum machine learning, as well as the field’s history, evolution, and future. We’ll also discuss the foundations of quantum computing, and some of the challenges to consider for breaking into the field. 397 full Sam Charrington
ML and Epidemiology with Elaine Nsoesie - #396 ML and Epidemiology with Elaine Nsoesie Thu, 30 Jul 2020 18:44:10 +0000 Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. 

Elaine presented a keynote talk at the ML for Global Health workshop at ICML 2020, where she shared her research centered around data-driven epidemiology. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including use cases like infectious disease surveillance via hospital parking lot capacity, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology, focusing on the importance of recognizing how the disease is affecting people of different races, ethnicities, and economic backgrounds.

To follow along with our 2020 ICML Series, visit twimlai.com/icml20. The complete show notes for this episode can be found at twimlai.com/talk/396.

48:31 clean podcast,technology,tech,data,africa,epidemiology,ai,surveillance,ml,artificialintelligence,infectiousdisease,machinelearning,globalhealth,datascience,bostonuniversity,twiml,covid19,digitaldata,elainensoesie,blackinai Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including infectious disease surveillance, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology and the importance of recognizing how the disease is affecting people of different races and economic backgrounds. 396 full Sam Charrington
Language (Technology) Is Power: Exploring the Inherent Complexity of NLP Systems with Hal Daumé III - #395 Language (Technology) Is Power: Exploring the Inherent Complexity of NLP Systems with Hal Daumé III Mon, 27 Jul 2020 21:06:07 +0000 Today we’re joined by Hal Daume III, professor at the University of Maryland, Senior Principal Researcher at Microsoft Research, and Co-Chair of the 2020 ICML Conference. 

We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models. 

We explore language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language. We also discuss ways to better incorporate domain experts into ML system development, and Hal’s experience as ICML Co-Chair.

Follow along with our ICML coverage at twimlai.com/icml20. The complete show notes for this episode can be found at twimlai.com/talk/395.

01:04:43 clean podcast,technology,tech,data,language,ethics,fairness,bias,ai,nlp,ml,artificialintelligence,machinelearning,datascience,universityofmaryland,icml,twiml,computationallinguistics,haldaume,mlsystems Today we’re joined by Hal Daume III, professor at the University of Maryland and Co-Chair of the 2020 ICML Conference. We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models, exploring language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language. 395 full Sam Charrington
Graph ML Research at Twitter with Michael Bronstein - #394 Graph ML Research at Twitter with Michael Bronstein Thu, 23 Jul 2020 19:11:20 +0000 Today we’re excited to be joined by return guest Michael Bronstein, Professor at Imperial College London, and Head of Graph Machine Learning at Twitter. We last spoke with Michael at NeurIPS in 2017 about Geometric Deep Learning.

Since then, his research focus has slightly shifted to exploring graph neural networks. In our conversation, we discuss the evolution of the graph machine learning space, contextualizing Michael’s work on geometric deep learning and research on non-Euclidean unstructured data. We also talk about his new role at Twitter and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work.

The complete show notes for this episode can be found at twimlai.com/talk/394.

56:37 clean podcast,twitter,technology,tech,data,ai,ml,artificialintelligence,machinelearning,datascience,neuralnetworks,twiml,michaelbronstein,imperialcollegelondon,graphmachinelearning,graphcnn,dynamicgraphs,differentialgraphmodules Today we’re excited to be joined by return guest Michael Bronstein, Head of Graph Machine Learning at Twitter. In our conversation, we discuss the evolution of the graph machine learning space, his new role at Twitter, and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work. 394 full Sam Charrington
Panel: The Great ML Language (Un)Debate! - #393 Panel: The Great ML Language (Un)Debate! Mon, 20 Jul 2020 18:15:33 +0000 Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts representing an array of both popular and emerging programming languages for machine learning. In the discussion, we explored the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&A (58:28), covering topics including favorite secondary languages, what languages pair well, quite a few questions about C++, and much more.

Head over to twimlai.com/talk/393 for more information about our panelists!

01:33:08 clean podcast,technology,tech,data,javascript,python,r,ai,julia,scala,ml,clojure,artificialintelligence,machinelearning,datascience,programminglanguage,twiml,probabilisticprogramming,andswift Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts to discuss both popular and emerging programming languages for machine learning, along with the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&A (58:28). 393 full Sam Charrington
What the Data Tells Us About COVID-19 with Eric Topol - #392 What the Data Tells Us About COVID-19 with Eric Topol Thu, 16 Jul 2020 18:12:40 +0000 Today we’re joined by Eric Topol, Director & Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine. 

Eric is also one of the most trusted voices on the COVID-19 pandemic, giving those who follow his Twitter account daily updates on the disease and its impact, from both a biological and public health perspective. We had the pleasure of catching up with Eric to talk through several coronavirus-related topics, including what we’ve learned since the pandemic began and the role of technology, including ML and AI, in understanding and preventing the spread of the disease. We also explore the broader opportunity for medical applications of AI, the promise they offer for personalized medicine, and how techniques like federated learning and homomorphic encryption can offer more privacy in healthcare.

The complete show notes for this episode can be found at twimlai.com/talk/392.

41:36 clean technology,tech,data,healthcare,pandemic,ai,ml,coronavirus,artificialintelligence,machinelearning,datascience,deeplearning,covid19,andrewng,erictopol,kaifulee,garykasparov,federatedlearning,deepmedicine,system1system2 Today we’re joined by Eric Topol, Director & Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine. We caught up with Eric to talk through what we’ve learned about the coronavirus since its emergence, and the role of tech in understanding and preventing the spread of the disease. We also explore the broader opportunity for medical applications of AI, the promise of personalized medicine, and how techniques like federated learning can offer more privacy in healthcare. 392 full Sam Charrington
The Case for Hardware-ML Model Co-design with Diana Marculescu - #391 The Case for Hardware-ML Model Co-design with Diana Marculescu Mon, 13 Jul 2020 20:03:18 +0000 Today we’re joined by Diana Marculescu, Department Chair and Professor of Electrical and Computer Engineering at University of Texas at Austin. 

We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design” from the Efficient Deep Learning in Computer Vision workshop at this year’s CVPR conference. 

In our conversation, we explore how her research group is focusing on making ML models more efficient so that they run better on current hardware systems, and what components and techniques they’re using to achieve true co-design. We also discuss her work with neural architecture search, how this fits into the edge vs. cloud conversation, and her thoughts on the longevity of deep learning research.

The complete show notes for this episode can be found at twimlai.com/talk/391.

44:58 clean podcast,technology,tech,data,hardware,ai,ml,artificialintelligence,quantization,machinelearning,datascience,neuralnetworks,universityoftexas,codesign,twiml,neuralarchitecturesearch,cvpr2020,dianamarculescu,modelpruning,neuralpower Today we’re joined by Diana Marculescu, Professor of Electrical and Computer Engineering at UT Austin. We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design” from CVPR 2020. We explore how her research group is focusing on making models more efficient so that they run better on current hardware systems, and how they plan on achieving true co-design. 391 full Sam Charrington
Computer Vision for Remote AR with Flora Tasse - #390 Computer Vision for Remote AR with Flora Tasse Thu, 09 Jul 2020 18:34:44 +0000 Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision & AI Research at Streem. 

Flora, a keynote speaker at the AR/VR workshop at CVPR, walks us through some of the interesting use cases at the intersection of AI, computer vision, and augmented reality technology. In our conversation, we discuss how Flora’s interest in a career in AR/VR developed, the origin of her company Selerio, which was eventually acquired by Streem, and her current research.

We also spend time exploring the difficulties associated with building 3D mesh environments, extracting metadata from those environments, the challenges of pose estimation, and other papers that caught Flora’s eye from the conference.

The complete show notes for this episode can be found at twimlai.com/talk/390. For our complete CVPR series, head to twimlai.com/cvpr20.

40:54 clean podcast,technology,data,stream,ai,ar,vr,ml,artificialintelligence,augmentedreality,virtualreality,machinelearning,datascience,computervision,cvpr,twiml,floratasse,spatialmapping,objectrecognition,videounderstanding Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision & AI Research at Streem. Flora, a keynote speaker at the AR/VR workshop, walks us through some of the interesting use cases at the intersection of AI, CV, and AR technologies, her current work and the origin of her company Selerio, which was eventually acquired by Streem, the difficulties associated with building 3D mesh environments, extracting metadata from those environments, the challenges of pose estimation and more. 390 full Sam Charrington
Deep Learning for Automatic Basketball Video Production with Julian Quiroga - #389 Deep Learning for Automatic Basketball Video Production with Julian Quiroga Mon, 06 Jul 2020 18:03:13 +0000 Today we return to our coverage of the 2020 CVPR conference with a conversation with Julian Quiroga, a Computer Vision Team Lead at Genius Sports.

Julian presented his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition” at the CVSports workshop. We jump right into the paper, discussing details like camera setups and angles, detection and localization of the figures on the court (players, refs, and of course, the ball), and the role that deep learning plays in the process. We also break down how this work applies to different sports, and the ways that Julian is looking to improve on this work for better accuracy. 

The complete show notes for this episode can be found at twimlai.com/talk/389. To follow along with our entire CVPR series, visit twimlai.com/cvpr20.

Thanks again to our friends at Qualcomm for their support of the podcast and sponsorship of this series!

42:15 clean podcast,technology,tech,data,basketball,ai,zoom,gaussian,artificialintelligence,lebronjames,machinelearning,datascience,computervision,deeplearning,twiml,julianquiroga,cvpr2020,objectdetection,objectlocalization,geniussports Today we’re joined by Julian Quiroga, a Computer Vision Team Lead at Genius Sports, to discuss his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition.” We explore camera setups and angles, detection and localization of figures on the court (players, refs, and of course, the ball), and the role that deep learning plays in the process. We also break down how this work applies to different sports, and the ways that he is looking to improve it. 389 full Sam Charrington
How External Auditing is Changing the Facial Recognition Landscape with Deb Raji - #388 How External Auditing is Changing the Facial Recognition Landscape with Deb Raji Thu, 02 Jul 2020 18:38:22 +0000 Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute at New York University. 

Over the past week or two, there have been quite a few major news stories in the AI community, including the self-imposed moratorium on facial recognition technology from Amazon, IBM, and Microsoft. There was also the release of PULSE, a controversial computer vision model that ultimately sparked a Twitter firestorm involving Yann LeCun and AI ethics researchers, including friend of the show, Timnit Gebru. The controversy echoed into the broader AI community, eventually leading to LeCun’s departure from Twitter.

In our conversation with Deb, we dig into these stories in depth, discussing the origins of Deb’s work on the Gender Shades project, how subsequent work put a spotlight on the potential harms of facial recognition technology, and who holds responsibility for dealing with underlying bias issues in datasets.

The complete show notes for this episode can be found at twimlai.com/talk/388.

01:21:47 clean google,microsoft,ethics,fairness,ibm,bias,artificialintelligence,clearview,facialrecognition,machinelearning,debraji,ainowinstitute,timnitgebru,yannlecun,joybuolomwini,gendershades,clarifai,modelcards,actionableauditing,codedbias Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute. Recently there have been quite a few major news stories in the AI community, including the self-imposed moratorium on facial recognition tech from Amazon, IBM and Microsoft. In our conversation with Deb, we dig into these stories, discussing the origins of Deb’s work on the Gender Shades project, the harms of facial recognition, and much more. 388 full Sam Charrington
AI for High-Stakes Decision Making with Hima Lakkaraju - #387 AI for High-Stakes Decision Making with Hima Lakkaraju Mon, 29 Jun 2020 19:44:24 +0000 Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University with appointments in both the Business School and Department of Computer Science. 

At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of explainability techniques that center perturbations, such as LIME or SHAP, as well as how attacks on these models can be carried out, and what these attacks look like. We also discuss people’s tendency to trust computer systems and their outputs, her thoughts on collaborator (and former TWIML guest) Cynthia Rudin’s theory that we shouldn’t use black-box algorithms, and much more.

For the complete show notes, visit twimlai.com/talk/387. For our continuing CVPR Coverage, visit twimlai.com/cvpr20.

]]>
Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University with appointments in both the Business School and Department of Computer Science. 

At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of explainability techniques that center perturbations, such as LIME or SHAP, as well as how attacks on these models can be carried out, and what these attacks look like. We also discuss people’s tendency to trust computer systems and their outputs, her thoughts on collaborator (and former TWIML guest) Cynthia Rudin’s theory that we shouldn’t use black-box algorithms, and much more.

For the complete show notes, visit twimlai.com/talk/387. For our continuing CVPR Coverage, visit twimlai.com/cvpr20.

]]>
45:54 clean harvard,bias,discrimination,blackbox,explainability,cvpr,interpretability,himalakkaraju,cynthiarudin,ayannahoward,perturbations Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University. At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of explainability techniques that center perturbations, such as LIME or SHAP, as well as how attacks on these models can be carried out, and what they look like. 387 full Sam Charrington
Invariance, Geometry and Deep Neural Networks with Pavan Turaga - #386 Invariance, Geometry and Deep Neural Networks with Pavan Turaga Thu, 25 Jun 2020 17:08:44 +0000 We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University, with dual appointments as the Director of the Geometric Media Lab, and Interim Director of the School of Arts, Media, and Engineering.

Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. In our conversation, we go in-depth on Pavan’s research integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and the role of architectural, loss function, and data constraints on models. Pavan also contextualizes this work in relation to Hinton’s similar Capsule Network research.

Check out the complete show notes for this episode at twimlai.com/talk/386.

]]>
We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University, with dual appointments as the Director of the Geometric Media Lab, and Interim Director of the School of Arts, Media, and Engineering.

Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. In our conversation, we go in-depth on Pavan’s research integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and the role of architectural, loss function, and data constraints on models. Pavan also contextualizes this work in relation to Hinton’s similar Capsule Network research.

Check out the complete show notes for this episode at twimlai.com/talk/386.

]]>
47:14 clean podcast,technology,physics,geometry,artificialintelligence,computerscience,machinelearning,datascience,computervision,deeplearning,cvpr,twiml,deepneuralnetworks,geoffreyhinton,pavanturaga,invariants,capsulenetworks,deeparchitectures We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University. Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. We go in-depth on Pavan’s research on integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and Pavan contextualizes this work in relation to Hinton’s similar Capsule Network research. 386 full Sam Charrington
Channel Gating for Cheaper and More Accurate Neural Nets with Babak Ehteshami Bejnordi - #385 Channel Gating for Cheaper and More Accurate Neural Nets with Babak Ehteshami Bejnordi Mon, 22 Jun 2020 20:19:02 +0000 Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm.

Babak works closely with former guest Max Welling and is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail, including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning.

We also discuss the paper TimeGate: Conditional Gating of Segments in Long-range Activities, and another paper from this year’s ICLR conference, Batch-Shaping for Learning Conditional Channel Gated Networks. We cover how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more! 

For more information on the episode, visit twimlai.com/talk/385. To follow along with the CVPR 2020 Series, visit twimlai.com/cvpr20

Thanks to Qualcomm for sponsoring today’s episode and the CVPR 2020 Series!

]]>
Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm.

Babak works closely with former guest Max Welling and is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail, including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning.

We also discuss the paper TimeGate: Conditional Gating of Segments in Long-range Activities, and another paper from this year’s ICLR conference, Batch-Shaping for Learning Conditional Channel Gated Networks. We cover how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more! 

For more information on the episode, visit twimlai.com/talk/385. To follow along with the CVPR 2020 Series, visit twimlai.com/cvpr20

Thanks to Qualcomm for sponsoring today’s episode and the CVPR 2020 Series!

]]>
55:58 clean podcast,technology,tech,data,ai,qualcomm,ml,artificialintelligence,deepmind,machinelearning,datascience,cvpr,twiml,neuralnetwork,geoffreyhinton,babakbejnordi,conditionalcompute,continuallearning,maxwelling,gatednetwork Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm. Babak is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning, covering how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more! 385 full Sam Charrington
Machine Learning Commerce at Square with Marsal Gavalda - #384 Building an ML-Forward Commerce Platform at Square with Marsal Gavalda - #384 Thu, 18 Jun 2020 18:17:41 +0000 Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square. 

Marsal, who hails from Barcelona, Catalonia, kicks off our conversation by indulging Sam in their shared love for language, which is what put him on the path to a career in machine learning. At Square, Marsal manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. 

We explore how they manage this vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success. We also discuss some of Marsal’s tips and best practices for internal democratization of ML, their approach to developing ML-driven features, the techniques deployed in the development of those features, and much more!

The complete show notes for this episode can be found at twimlai.com/talk/384.

]]>
Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square. 

Marsal, who hails from Barcelona, Catalonia, kicks off our conversation by indulging Sam in their shared love for language, which is what put him on the path to a career in machine learning. At Square, Marsal manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. 

We explore how they manage this vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success. We also discuss some of Marsal’s tips and best practices for internal democratization of ML, their approach to developing ML-driven features, the techniques deployed in the development of those features, and much more!

The complete show notes for this episode can be found at twimlai.com/talk/384.

]]>
51:53 clean products,engineering,square,platform,ecommerce,riskmanagement,featuredevelopment,manuelaveloso,marsalgavalda,andrewng Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square, where he manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. We explore how they manage their vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success, tips and best practices for internal democratization of ML, and much more. 384 full Sam Charrington
Cell Exploration with ML at the Allen Institute w/ Jianxu Chen - #383 Cell Exploration with ML at the Allen Institute w/ Jianxu Chen Mon, 15 Jun 2020 20:41:27 +0000 Today we’re joined by Jianxu Chen, a scientist in the Assay Development group at the Allen Institute for Cell Science. 

At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. 

In our conversation, we discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer. We also explore Jianxu’s transition from computer science into computational biology. More broadly, we cover how the use of GPUs has fundamentally changed this research, and the goals his team had in mind when they began the project.

Check out the complete show notes at twimlai.com/talk/383.

]]>
Today we’re joined by Jianxu Chen, a scientist in the Assay Development group at the Allen Institute for Cell Science. 

At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. 

In our conversation, we discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer. We also explore Jianxu’s transition from computer science into computational biology. More broadly, we cover how the use of GPUs has fundamentally changed this research, and the goals his team had in mind when they began the project.

Check out the complete show notes at twimlai.com/talk/383.

]]>
43:21 clean podcast,technology,tech,data,biology,3d,ai,opensource,ml,artificialintelligence,machinelearning,datascience,molecularbiology,computervision,twiml,computationalbiology,allencellexplorertoolkit,alleninstitute,nvidiagtc Today we’re joined by Jianxu Chen, a scientist at the Allen Institute for Cell Science. At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. We discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer 383 full Sam Charrington
Neural Arithmetic Units & Experiences as an Independent ML Researcher with Andreas Madsen - #382 Neural Arithmetic Units & Experiences as an Independent ML Researcher with Andreas Madsen Thu, 11 Jun 2020 19:12:27 +0000 Today we’re joined by Andreas Madsen, an independent researcher based in Denmark whose research focuses on developing interpretable machine learning models. 

While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher. We discuss the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right.

In his paper, Andreas notes that Neural Networks struggle to perform exact arithmetic operations over real numbers, but this can be helped with the addition of two NN components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction; and the Neural Multiplication Unit (NMU) that can multiply subsets of a vector.

The complete show notes can be found at twimlai.com/talk/382.

]]>
Today we’re joined by Andreas Madsen, an independent researcher based in Denmark whose research focuses on developing interpretable machine learning models. 

While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher. We discuss the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right.

In his paper, Andreas notes that Neural Networks struggle to perform exact arithmetic operations over real numbers, but this can be helped with the addition of two NN components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction; and the Neural Multiplication Unit (NMU) that can multiply subsets of a vector.

The complete show notes can be found at twimlai.com/talk/382.

]]>
30:54 clean podcast,technology,tech,data,ai,ml,artificialintelligence,machinelearning,datascience,neuralnetworks,iclr,twiml,andreasmadsen,independentresearch,neuralarithmeticunits Today we’re joined by Andreas Madsen, an independent researcher based in Denmark. While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher, discussing the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right. 382 full Sam Charrington
2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury - #381 2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury Mon, 08 Jun 2020 19:52:00 +0000 Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible Artificial Intelligence at Accenture. In our conversation with Rumman, we explored questions like: 

  • Why is now such a critical inflection point in the application of responsible AI?
  • How should engineers and practitioners think about AI ethics and responsible AI?
  • Why is AI ethics inherently personal and how can you define your own personal approach?
  • Is the implementation of AI governance necessarily authoritarian?
  • How do we balance idealism and pragmatism in the application of AI ethics?

We also cover practical topics like how and where you should implement responsible AI in your organization, and building the teams and processes capable of taking on critical ethics and governance questions.

The complete show notes for this episode can be found at twimlai.com/talk/381.

]]>
Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible Artificial Intelligence at Accenture. In our conversation with Rumman, we explored questions like: 

  • Why is now such a critical inflection point in the application of responsible AI?
  • How should engineers and practitioners think about AI ethics and responsible AI?
  • Why is AI ethics inherently personal and how can you define your own personal approach?
  • Is the implementation of AI governance necessarily authoritarian?
  • How do we balance idealism and pragmatism in the application of AI ethics?

We also cover practical topics like how and where you should implement responsible AI in your organization, and building the teams and processes capable of taking on critical ethics and governance questions.

The complete show notes for this episode can be found at twimlai.com/talk/381.

]]>
01:01:58 clean podcast,technology,tech,data,agency,governance,ai,accenture,uber,idealism,ml,artificialintelligence,machinelearning,datascience,scalingup,nickbostrom,twiml,responsibleai,aiethics,rummanchowdhury Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible AI at Accenture. In our conversation with Rumman, we explored questions like:  • Why is now such a critical inflection point in the application of responsible AI? • How should engineers and practitioners think about AI ethics and responsible AI? • Why is AI ethics inherently personal and how can you define your own personal approach? • Is the implementation of AI governance necessarily authoritarian? 381 full Sam Charrington
Panel: Advancing Your Data Science Career During the Pandemic - #380 Panel: Advancing Your Data Science Career During the Pandemic Thu, 04 Jun 2020 20:02:37 +0000 Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel.

In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.

Topics we cover include:

  • Guerilla Job Hunting
  • Portfolio Building
  • Navigating Hiring Freezes
  • Acing the Technical Interview
  • Presenting the Best Candidate

For more information about our guests, or for links to the resources mentioned, visit the show notes page at twimlai.com/talk/380.

]]>
Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel.

In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.

Topics we cover include:

  • Guerilla Job Hunting
  • Portfolio Building
  • Navigating Hiring Freezes
  • Acing the Technical Interview
  • Presenting the Best Candidate

For more information about our guests, or for links to the resources mentioned, visit the show notes page at twimlai.com/talk/380.

]]>
01:07:16 clean podcast,technology,tech,ibm,jobhunting,artificialintelligence,machinelearning,technicalinterview,anamariaecheverri,carolinechavier,datascientists,hilarymason,hiringfreezes,jacquelinenolis,machinelearningpractitioners,portfoliobuilding,wimlds Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel. In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers. 380 full Sam Charrington
On George Floyd, Empathy, and the Road Ahead On George Floyd, Empathy, and the Road Ahead Tue, 02 Jun 2020 01:43:07 +0000 Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest. 

]]>
Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest. 

]]>
06:20 clean equity,bias,racism,ai,protest,equality,ml,artificialintelligence,unitedstates,machinelearning,policebrutality,blacklivesmatter,thisweekinmachinelearning,twiml Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest.  bonus Sam Charrington
Engineering a Less Artificial Intelligence with Andreas Tolias - #379 Engineering a Less Artificial Intelligence with Andreas Tolias Thu, 28 May 2020 16:29:20 +0000 Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine and Principal Investigator of the Neuroscience-Inspired Networks for Artificial Intelligence organization.

We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture. We discuss the promise of deep neural networks, the differences between inductive bias and model bias, the role of interpretability, and the exciting future of biological systems and deep learning. 

The complete show notes can be found at twimlai.com/talk/379.

]]>
Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine and Principal Investigator of the Neuroscience-Inspired Networks for Artificial Intelligence organization.

We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture. We discuss the promise of deep neural networks, the differences between inductive bias and model bias, the role of interpretability, and the exciting future of biological systems and deep learning. 

The complete show notes can be found at twimlai.com/talk/379.

]]>
46:41 clean podcast,technology,tech,data,neuroscience,ai,ml,artificialintelligence,algorithm,machinelearning,datascience,neuralnetworks,deeplearning,twiml,interpretability,biologicalsystems,andreastolias,inductivebias Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine. We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture. 379 full Sam Charrington
Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378 Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez Mon, 25 May 2020 13:59:00 +0000 Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which explores compute-efficient training strategies, based on model size.

We discuss the two main problems being solved: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, is it really improving any efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both “larger” and “faster” in the paper.

Check out the complete show notes for this episode at twimlai.com/talk/378.

]]>
Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which explores compute-efficient training strategies, based on model size.

We discuss the two main problems being solved: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, is it really improving any efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both “larger” and “faster” in the paper.

Check out the complete show notes for this episode at twimlai.com/talk/378.

]]>
52:39 clean podcast,technology,tech,data,inference,compression,ai,bert,nlp,ml,artificialintelligence,machinelearning,ucberkeley,datascience,computervision,twiml,neuralnetwork,transformermodel,languagemodels,josephgonzalez Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which looks at compute-efficient training strategies for models. We discuss the two main problems being solved: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, is it really improving any efficiency? 378 full Sam Charrington
The Physics of Data with Alpha Lee - #377 The Physics of Data with Alpha Lee Thu, 21 May 2020 18:10:30 +0000 Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge, and Co-Founder of data-driven drug discovery startup, PostEra. Our conversation centers around Alpha’s research which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. 

We discuss the similarities and differences between drug discovery and material science, including the parallels in the design-test cycle, and the major differences in cost. We also explore the goals associated with uncertainty estimation, why deep networks are easier to optimize than shallow networks, the concept of energy landscape, and how it all fits into his research. We also talk about his startup, PostEra, which offers medicinal chemistry as a service powered by machine learning.

The complete show notes for this episode can be found at twimlai.com/talk/377.

]]>
Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge, and Co-Founder of data-driven drug discovery startup, PostEra. Our conversation centers around Alpha’s research which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. 

We discuss the similarities and differences between drug discovery and material science, including the parallels in the design-test cycle, and the major differences in cost. We also explore the goals associated with uncertainty estimation, why deep networks are easier to optimize than shallow networks, the concept of energy landscape, and how it all fits into his research. We also talk about his startup, PostEra, which offers medicinal chemistry as a service powered by machine learning.

The complete show notes for this episode can be found at twimlai.com/talk/377.

]]>
34:29 clean podcast,technology,tech,data,chemistry,physics,ai,ml,artificialintelligence,drugdiscovery,machinelearning,datascience,forbes30under30,twiml,materialscience,alphalee,universityofcambridge,materialdiscovery,physicalanalysisofmachinelearning,postera Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge. Our conversation centers around Alpha’s research which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. We discuss the similarities and differences between drug discovery and material science, his startup, PostEra, which offers medicinal chemistry as a service powered by machine learning, and much more. 377 full Sam Charrington
Is Linguistics Missing from NLP Research? w/ Emily M. Bender - #376 Is Linguistics Missing from NLP Research? w/ Emily M. Bender Mon, 18 May 2020 15:19:21 +0000 Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. 

Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore whether we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or whether the progress we're making (e.g. with deep learning models like Transformers) is just fine.

Later this afternoon (3pm PT) we’ll be hosting a viewing party with Emily over on our YouTube channel. Sam and Emily will be in the live chat answering your questions from the conversation. Register at twimlai.com/376viewing!

Check out the complete show notes for this conversation at twimlai.com/talk/376.

]]>
Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. 

Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore whether we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or whether the progress we're making (e.g. with deep learning models like Transformers) is just fine.

Later this afternoon (3pm PT) we’ll be hosting a viewing party with Emily over on our YouTube channel. Sam and Emily will be in the live chat answering your questions from the conversation. Register at twimlai.com/376viewing!

Check out the complete show notes for this conversation at twimlai.com/talk/376.

]]>
52:34 clean grammar,linguistics,nlp,universityofwashington,computationallinguistics,naturallanguageprocessing,emilybender Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore if we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or is the progress we're making (e.g. with deep learning models like Transformers) just fine? 376 full Sam Charrington
Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz - #375 Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz Thu, 14 May 2020 15:49:36 +0000 Today we’re joined by Nataniel Ruiz, a PhD Student in the Image & Video Computing group at Boston University. 

We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems,” which will be presented at the upcoming CVPR conference. In our conversation, we discuss the concept of this work, which essentially injects noise into an image to disrupt a generative model’s ability to manipulate said image. We also explore some of the challenging parts of implementing this work, a few potential scenarios in which this could be deployed, and the broader contributions that went into this work. 

The complete show notes for this episode can be found at twimlai.com/talk/375.

]]>
Today we’re joined by Nataniel Ruiz, a PhD Student in the Image & Video Computing group at Boston University. 

We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems,” which will be presented at the upcoming CVPR conference. In our conversation, we discuss the concept of this work, which essentially injects noise into an image to disrupt a generative model’s ability to manipulate said image. We also explore some of the challenging parts of implementing this work, a few potential scenarios in which this could be deployed, and the broader contributions that went into this work. 

The complete show notes for this episode can be found at twimlai.com/talk/375.

]]>
42:42 clean podcast,technology,tech,data,ai,ml,gan,artificialintelligence,machinelearning,datascience,bostonuniversity,deepfakes,deepfake,cvpr,twiml,natanielruiz,adversarialattack,imagetranslationnetwork,facialmanipulation,generativeadversarialnetwork Today we’re joined by Nataniel Ruiz, a PhD Student at Boston University. We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.” In our conversation, we discuss the concept of this work, as well as some of the challenging parts of implementing this work, potential scenarios in which this could be deployed, and the broader contributions that went into this work. 375 full Sam Charrington
Understanding the COVID-19 Data Quality Problem with Sherri Rose - #374 Understanding the COVID-19 Data Quality Problem with Sherri Rose - #374 Mon, 11 May 2020 18:26:42 +0000 Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. 

Sherri’s research centers around developing and integrating statistical machine learning approaches to improve human health. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference.

We also touch on Sherri’s work in algorithmic fairness, including the necessary emphasis being put on studying issues of fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and her paper “Fair Regression for Health Care Spending.”

Check out the complete show notes for this episode at twimlai.com/talk/374.

]]>
Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. 

Sherri’s research centers around developing and integrating statistical machine learning approaches to improve human health. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference.

We also touch on Sherri’s work in algorithmic fairness, including the necessary emphasis being put on studying issues of fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and her paper “Fair Regression for Health Care Spending.”

Check out the complete show notes for this episode at twimlai.com/talk/374.

]]>
44:30 clean podcast,technology,tech,data,healthcare,ai,ml,coronavirus,artificialintelligence,causality,machinelearning,datascience,dataquality,algorithmicfairness,twiml,causalinference,covid19,sherrirose,harvardmedicalschool Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference. We also touch on Sherri’s work in algorithmic fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and a few recent papers. 374 full Sam Charrington
The Whys and Hows of Managing Machine Learning Artifacts with Lukas Biewald - #373 The Whys and Hows of Managing Machine Learning Artifacts with Lukas Biewald Thu, 07 May 2020 14:35:05 +0000 Today we’re joined by Lukas Biewald, founder and CEO of Weights & Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. You might remember Lukas from his original interview with us towards the end of last year; for more background on Lukas and W&B, we encourage you to check that episode out.

In this conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The definitive guide to ML Platforms” and how it fits with the W&B model management platform. We also discuss what exactly “Artifacts” are and what the tool is tracking, and take a look at the onboarding process for users.

Check out the complete show notes for this episode at twimlai.com/talk/373.

]]>
Today we’re joined by Lukas Biewald, founder and CEO of Weights & Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. You might remember Lukas from his original interview with us towards the end of last year; for more background on Lukas and W&B, we encourage you to check that episode out.

In this conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The definitive guide to ML Platforms” and how it fits with the W&B model management platform. We also discuss what exactly “Artifacts” are and what the tool is tracking, and take a look at the onboarding process for users.

Check out the complete show notes for this episode at twimlai.com/talk/373.

]]>
53:30 clean podcast,technology,tech,data,ai,artifacts,ml,artificialintelligence,machinelearning,datascience,dataset,pipelinemanagement,twiml,weightsbiases,weightsandbiases,lukasbiewald,dataversioning,modelmanagement,mlplatform Today we’re joined by Lukas Biewald, founder and CEO of Weights & Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. In our conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The definitive guide to ML Platforms” and how it fits with the W&B model management platform. We also discuss what exactly “Artifacts” are and what the tool is tracking, and take a look at the onboarding process for users. 373 full Sam Charrington
Language Modeling and Protein Generation at Salesforce with Richard Socher - #372 Language Modeling and Protein Generation at Salesforce with Richard Socher - #372 Mon, 04 May 2020 19:10:44 +0000 Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce.

Richard, who has been at the forefront of Salesforce’s AI Research since the company acquired his startup Metamind in 2016, and his team have been publishing a ton of great projects as of late, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in depth in this conversation. We explore how a large, product-focused company like Salesforce balances research investments against product requirements, the evolution of his language modeling research since the acquisition, and how it ties in with protein generation.

The complete show notes for this episode can be found at twimlai.com/talk/372.  

]]>
Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce.

Richard, who has been at the forefront of Salesforce’s AI Research since the company acquired his startup Metamind in 2016, and his team have been publishing a ton of great projects as of late, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in depth in this conversation. We explore how a large, product-focused company like Salesforce balances research investments against product requirements, the evolution of his language modeling research since the acquisition, and how it ties in with protein generation.

The complete show notes for this episode can be found at twimlai.com/talk/372.  

]]>
42:36 clean podcast,technology,tech,data,ai,salesforce,ctrl,ml,artificialintelligence,machinelearning,datascience,twiml,gpt2,richardsocher,metamind,proteinmodeling,languagemodeling,transformermodels Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce. Richard and his team have published quite a few great projects lately, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in depth in this conversation. We also explore how a large, product-focused company like Salesforce balances research investments against product requirements. 372 full Sam Charrington
AI Research at JPMorgan Chase with Manuela Veloso - #371 AI Research at JPMorgan Chase with Manuela Veloso Thu, 30 Apr 2020 16:21:31 +0000 Today we’re joined by Manuela Veloso, Head of AI Research at JPMorgan Chase and Professor at Carnegie Mellon University. Since moving from CMU to JPMorgan Chase, Manuela and her team established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect client experience. 

We also explore Manuela’s background, including her time as a PhD student at CMU, or as she describes it, the “mecca of AI,” with some of the most influential figures in AI, like Geoff Hinton and Herb Simon, on the faculty at the time. We also cover Manuela’s founding role with RoboCup, an annual international competition centered on autonomous robots playing soccer.

The complete show notes for this episode can be found at twimlai.com/talk/371.

]]>
Today we’re joined by Manuela Veloso, Head of AI Research at JPMorgan Chase and Professor at Carnegie Mellon University. Since moving from CMU to JPMorgan Chase, Manuela and her team established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect client experience. 

We also explore Manuela’s background, including her time as a PhD student at CMU, or as she describes it, the “mecca of AI,” with some of the most influential figures in AI, like Geoff Hinton and Herb Simon, on the faculty at the time. We also cover Manuela’s founding role with RoboCup, an annual international competition centered on autonomous robots playing soccer.

The complete show notes for this episode can be found at twimlai.com/talk/371.

]]>
45:25 clean podcast,technology,tech,data,cognition,perception,ai,autonomy,cmu,ml,artificialintelligence,jpmorganchase,fintech,machinelearning,datascience,twiml,manuelaveloso,geoffreyhinton,marcraibert,robocup Today we’re joined by Manuela Veloso, Head of AI Research at J.P. Morgan Chase. Since moving from CMU to JP Morgan Chase, Manuela and her team established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect client experience. We also explore Manuela’s background, including her time at CMU in the ’80s, or as she describes it, the “mecca of AI,” and her founding role with RoboCup. 371 full Sam Charrington
Panel: Responsible Data Science in the Fight Against COVID-19 - #370 Panel: Responsible Data Science in the Fight Against COVID-19 Wed, 29 Apr 2020 19:26:10 +0000 Since the beginning of the coronavirus pandemic, we’ve seen an outpouring of interest on the part of data scientists and AI practitioners wanting to make a contribution. At the same time, some of the resulting efforts have been criticized for promoting the spread of misinformation or being disconnected from the applicable domain knowledge.

In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. Four experts: Rex Douglass, Rob Munro, Lea Shanley, and Gigi Yuen-Reed shared a ton of valuable insight on the best ways to get involved.

We've gathered all the resources that our panelists discussed during the conversation, you can find those at twimlai.com/talk/370.

]]>
Since the beginning of the coronavirus pandemic, we’ve seen an outpouring of interest on the part of data scientists and AI practitioners wanting to make a contribution. At the same time, some of the resulting efforts have been criticized for promoting the spread of misinformation or being disconnected from the applicable domain knowledge.

In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. Four experts: Rex Douglass, Rob Munro, Lea Shanley, and Gigi Yuen-Reed shared a ton of valuable insight on the best ways to get involved.

We've gathered all the resources that our panelists discussed during the conversation, you can find those at twimlai.com/talk/370.

]]>
57:03 clean epidemiology,pandemic,ai,ml,coronavirus,artificialintelligence,machinelearning,datascience,samcharrington,twiml,covid19,covid,rexdouglass,leashanley,gigiyuenreed,robertmunro In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. Four experts: Rex Douglass, Rob Munro, Lea Shanley, and Gigi Yuen-Reed shared a ton of valuable insight on the best ways to get involved. We've gathered all the resources that our panelists discussed during the conversation, you can find those at twimlai.com/talk/370. 370 full Sam Charrington
Adversarial Examples Are Not Bugs, They Are Features with Aleksander Madry - #369 Adversarial Examples Are Not Bugs, They Are Features with Aleksander Madry Mon, 27 Apr 2020 13:18:57 +0000 Today we’re joined by Aleksander Madry, Faculty in the MIT EECS Department, a member of CSAIL and of the Theory of Computation group. Aleksander, whose work is more on the theoretical side of machine learning research, walks us through his paper “Adversarial Examples Are Not Bugs, They Are Features,” which was presented at last year’s NeurIPS conference.

In our conversation, we explore the idea that adversarial examples in machine learning systems are features, with results that might be undesirable but that still work as designed. We talk through what we expect these systems to do versus what they’re actually doing, whether we’re able to characterize these patterns and what makes them compelling, and whether the insights from the paper will inform opinions on either side of the deep learning debate.

The complete show notes for this can be found at twimlai.com/talk/369.

]]>
Today we’re joined by Aleksander Madry, Faculty in the MIT EECS Department, a member of CSAIL and of the Theory of Computation group. Aleksander, whose work is more on the theoretical side of machine learning research, walks us through his paper “Adversarial Examples Are Not Bugs, They Are Features,” which was presented at last year’s NeurIPS conference.

In our conversation, we explore the idea that adversarial examples in machine learning systems are features, with results that might be undesirable but that still work as designed. We talk through what we expect these systems to do versus what they’re actually doing, whether we’re able to characterize these patterns and what makes them compelling, and whether the insights from the paper will inform opinions on either side of the deep learning debate.

The complete show notes for this can be found at twimlai.com/talk/369.

]]>
41:03 clean podcast,technology,tech,data,ai,bugs,features,mit,insights,ml,artificialintelligence,machinelearning,datascience,deeplearning,twiml,aleksandermadry,adversarialexamples,machinelearningsystems,csail Today we’re joined by Aleksander Madry, Faculty in the MIT EECS Department, to discuss his paper “Adversarial Examples Are Not Bugs, They Are Features.” In our conversation, we talk through what we expect these systems to do, vs what they’re actually doing, if we’re able to characterize these patterns, and what makes them compelling, and if the insights from the paper will help inform opinions on either side of the deep learning debate. 369 full Sam Charrington
AI for Social Good: Why "Good" isn't Enough with Ben Green - #368 AI for Social Good: Why "Good" isn't Enough with Ben Green Thu, 23 Apr 2020 12:58:56 +0000 Today we’re joined by Ben Green, PhD Candidate at Harvard, Affiliate at the Berkman Klein Center for Internet & Society at Harvard, and Research Fellow at the AI Now Institute at NYU.

Ben’s research is focused on the social and policy impacts of data science, with a focus on algorithmic fairness, municipal governments, and the criminal justice system. In our conversation, we discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning projects, papers, and research: a grounded definition of what “good” actually means, and the absence of a “theory of change.” We also talk through how he thinks about the unintended consequences associated with the application of technology to social good, and his theory of the relationship between technology and social impact.

The complete show notes for this episode can be found at twimlai.com/talk/368.

]]>
Today we’re joined by Ben Green, PhD Candidate at Harvard, Affiliate at the Berkman Klein Center for Internet & Society at Harvard, and Research Fellow at the AI Now Institute at NYU.

Ben’s research is focused on the social and policy impacts of data science, with a focus on algorithmic fairness, municipal governments, and the criminal justice system. In our conversation, we discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning projects, papers, and research: a grounded definition of what “good” actually means, and the absence of a “theory of change.” We also talk through how he thinks about the unintended consequences associated with the application of technology to social good, and his theory of the relationship between technology and social impact.

The complete show notes for this episode can be found at twimlai.com/talk/368.

]]>
40:34 clean podcast,good,technology,tech,data,harvard,ai,algorithms,ml,artificialintelligence,socialgood,machinelearning,datascience,socialimpact,twiml,neurips,bengreen Today we’re joined by Ben Green, PhD Candidate at Harvard and Research Fellow at the AI Now Institute at NYU. Ben’s research is focused on the social and policy impacts of data science, with a focus on algorithmic fairness and the criminal justice system. We discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning research: a grounded definition of what “good” actually means, and the absence of a “theory of change.” 368 full Sam Charrington
The Evolution of Evolutionary AI with Risto Miikkulainen - #367 The Evolution of Evolutionary AI with Risto Miikkulainen Mon, 20 Apr 2020 12:58:17 +0000 Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant AI, and Professor of Computer Science at UT Austin.

Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we do an update of sorts on the latest we should know on the topic. In our conversation, we discuss various use cases for evolutionary AI, the relationship between evolutionary algorithms and reinforcement learning, and some of the latest approaches to deploying evolutionary models. We also explore his paper “Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,” which details the historical evolution of AI, discussing where things currently stand, and where they might go in the future.

The complete show notes for this episode can be found at twimlai.com/talk/367.

]]>
Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant AI, and Professor of Computer Science at UT Austin.

Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we do an update of sorts on the latest we should know on the topic. In our conversation, we discuss various use cases for evolutionary AI, the relationship between evolutionary algorithms and reinforcement learning, and some of the latest approaches to deploying evolutionary models. We also explore his paper “Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,” which details the historical evolution of AI, discussing where things currently stand, and where they might go in the future.

The complete show notes for this episode can be found at twimlai.com/talk/367.

]]>
38:13 clean podcast,technology,tech,data,ai,ml,artificialintelligence,algorithm,machinelearning,datascience,cognizant,twiml,reinforcementlearning,neuralarchitecturesearch,ristomiikkulainen,evolutionaryai,evolutionarymodels,offpolicylearning,populationbasedsearch Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant AI. Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we get an update on the latest on the topic. In our conversation, we discuss use cases for evolutionary AI and the latest approaches to deploying evolutionary models. We also explore his paper “Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,” which digs into the historical evolution of AI. 367 full Sam Charrington
Neural Architecture Search and Google’s New AutoML Zero with Quoc Le - #366 Neural Architecture Search and Google’s New AutoML Zero with Quoc Le Thu, 16 Apr 2020 05:00:00 +0000 Today we’re super excited to share our recent conversation with Quoc Le, a research scientist at Google, on the Brain team. Quoc has been very busy recently with his work on Google’s AutoML Zero, which details significant advances in automated machine learning that can  “automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.”

Another major theme of this conversation is semi-supervised learning; we discuss his work on the paper “Self-training with Noisy Student improves ImageNet classification.” Finally, we discuss how his interest in sequence-to-sequence learning, and a chance encounter, led to the development of Meena, Google’s recent multi-turn conversational chatbot.

This was a really fun conversation, so much so that we decided to release the video! On April 16th at 12 pm PT, Quoc and Sam will premiere the video version of this interview and answer your questions in the chat. We’ll see you there!

The complete show notes for this episode can be found at twimlai.com/talk/366.

]]>
Today we’re super excited to share our recent conversation with Quoc Le, a research scientist on the Google Brain team. Quoc has been very busy recently with his work on Google’s AutoML Zero, which details significant advances in automated machine learning that can “automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.”

Another major theme of this conversation is semi-supervised learning; we discuss his work on the paper “Self-training with Noisy Student improves ImageNet classification.” Finally, we discuss how his interest in sequence-to-sequence learning, and a chance encounter, led to the development of Meena, Google’s recent multi-turn conversational chatbot.

This was a really fun conversation, so much so that we decided to release the video! On April 16th at 12 pm PT, Quoc and Sam will premiere the video version of this interview and answer your questions in the chat. We’ll see you there!

The complete show notes for this episode can be found at twimlai.com/talk/366.

]]>
53:43 clean podcast,technology,tech,google,data,ai,transformer,ml,meena,artificialintelligence,machinelearning,datascience,googlebrain,twiml,imagenet,quocle,neuralarchitecturesearch,automlzero,deeplearningarchitecture,selfsupervisedlearning Today we’re super excited to share our recent conversation with Quoc Le, a research scientist at Google. Quoc joins us to discuss his work on Google’s AutoML Zero, semi-supervised learning, and the development of Meena, the multi-turn conversational chatbot. This was a really fun conversation, so much so that we decided to release the video! April 16th at 12 pm PT, Quoc and Sam will premiere the video version of this interview on Youtube, and answer your questions in the chat. We’ll see you there! 366 full Sam Charrington
Automating Electronic Circuit Design with Deep RL w/ Karim Beguir - #365 Automating Electronic Circuit Design with Deep RL w/ Karim Beguir Mon, 13 Apr 2020 14:23:13 +0000 Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. We originally spoke with Karim about InstaDeep’s work back on episode 302; check that episode out for a full brief of Karim’s background.

In today’s conversation, we chat with Karim about InstaDeep’s new offering, DeepPCB, an end-to-end platform for automated circuit board design. We discuss challenges and problems with some of the original iterations of auto-routers, how Karim defines circuit board “complexity,” how reinforcement learning for this use case differs from its use in games, and their spotlight paper from NeurIPS, co-authored with a team from DeepMind.

Check out the complete show notes at twimlai.com/talk/365.

]]>
Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. We originally spoke with Karim about InstaDeep’s work back on episode 302; check that episode out for a full brief of Karim’s background.

In today’s conversation, we chat with Karim about InstaDeep’s new offering, DeepPCB, an end-to-end platform for automated circuit board design. We discuss challenges and problems with some of the original iterations of auto-routers, how Karim defines circuit board “complexity,” how reinforcement learning for this use case differs from its use in games, and their spotlight paper from NeurIPS, co-authored with a team from DeepMind.

Check out the complete show notes at twimlai.com/talk/365.

]]>
35:23 clean podcast,technology,tech,data,automation,atari,ai,artificialintelligence,deepmind,machinelearning,datascience,alphago,circuitboard,twiml,reinforcementlearning,deepreinforcementlearning,karimbeguir,instadeep,deeppcb,nandodefreitas Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. In our conversation, we chat with Karim about InstaDeep’s new offering, DeepPCB, an end-to-end platform for automated circuit board design. We discuss challenges and problems with some of the original iterations of auto-routers, how Karim defines circuit board “complexity,” the differences between reinforcement learning being used for games and in this use case, and their spotlight paper from NeurIPS. 365 full Sam Charrington
Neural Ordinary Differential Equations with David Duvenaud - #364 Neural Ordinary Differential Equations with David Duvenaud Thu, 09 Apr 2020 01:47:21 +0000 Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto. David, who joined us on episode #96 back in January ‘18, returns to talk about the various papers that have come out of his lab over the last year and change, focused on Neural Ordinary Differential Equations, a type of continuous-depth neural network.

In our conversation, we talk through quite a few of David’s papers on the topic, which you can find below on the show notes page. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks in use today, and David’s approach to engineering.
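To make the “continuous-depth” idea concrete, here is a minimal fixed-step sketch (our own toy illustration, not code from David’s lab): the hidden state is defined by a learned derivative dh/dt = f(h, t), and the forward pass is numerical integration. Real neural ODE implementations use adaptive solvers and the adjoint method for memory-efficient gradients.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4
    W1, b1 = rng.standard_normal((dim, dim)) * 0.1, np.zeros(dim)
    W2, b2 = rng.standard_normal((dim, dim)) * 0.1, np.zeros(dim)

    def f(h, t):
        # A small MLP defines the dynamics of the hidden state.
        return np.tanh(h @ W1 + b1 + t) @ W2 + b2

    def odeint_euler(h0, t0=0.0, t1=1.0, steps=100):
        h, t = h0, t0
        dt = (t1 - t0) / steps
        for _ in range(steps):
            h = h + dt * f(h, t)   # Euler step: h(t + dt) ~ h(t) + dt * dh/dt
            t += dt
        return h

    h0 = rng.standard_normal(dim)
    print(odeint_euler(h0))  # the "output layer" is simply the state at t1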

The complete show notes for this episode can be found at twimlai.com/talk/364.

]]>
Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto. David, who joined us on episode #96 back in January ‘18, returns to talk about the various papers that have come out of his lab over the last year and change, focused on Neural Ordinary Differential Equations, a type of continuous-depth neural network.

In our conversation, we talk through quite a few of David’s papers on the topic, which you can find below on the show notes page. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks in use today, and David’s approach to engineering.

The complete show notes for this episode can be found at twimlai.com/talk/364.

]]>
48:49 clean podcast,technology,tech,data,ai,ml,ode,artificialintelligence,universityoftoronto,machinelearning,datascience,neuralnetworks,twiml,timeseries,differentialequations,neuralode,ffjord,invertible,davidduvenaud,vectorinstitute Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto, to discuss his research on Neural Ordinary Differential Equations, a type of continuous-depth neural network. In our conversation, we talk through a few of David’s papers on the subject. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks that are used to train today, and David’s approach to engineering. 364 full Sam Charrington
The Measure and Mismeasure of Fairness with Sharad Goel - #363 The Measure and Mismeasure of Fairness with Sharad Goel Mon, 06 Apr 2020 04:00:11 +0000 Today we’re joined by Sharad Goel, Assistant Professor in the management science & engineering department at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying machine learning to better understand and improve public policy.

In our conversation, we dive into Sharad’s non-traditional path to academia, which includes extensive work on discriminatory policing practices like stop-and-frisk, leading up to his work on The Stanford Open Policing Project, which uses data from over 200 million traffic stops nationwide to “help researchers, journalists, and policymakers investigate and improve interactions between police and the public.” Finally, we discuss Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning,” which identifies three formal definitions of fairness in algorithms and the statistical limitations of each, and details how mathematical formalizations of fairness could be introduced into algorithms.
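To give a rough sense of what “formal definitions of fairness” look like in practice, here’s a toy sketch that computes a few commonly cited criteria on synthetic predictions (our own example with hypothetical data; the paper gives the precise definitions and works through their statistical limitations).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)          # protected attribute (hypothetical)
    label = rng.integers(0, 2, n)          # true outcome
    score = np.clip(0.5 * label + 0.1 * group + rng.normal(0, 0.3, n), 0, 1)
    pred = score > 0.5                     # model's binary decision

    def positive_rate(mask):
        return pred[mask].mean()

    # Demographic parity: equal positive-prediction rates across groups.
    print("positive rate gap:", abs(positive_rate(group == 0) - positive_rate(group == 1)))

    # Equal opportunity: equal true-positive rates across groups.
    tpr0 = positive_rate((group == 0) & (label == 1))
    tpr1 = positive_rate((group == 1) & (label == 1))
    print("TPR gap:", abs(tpr0 - tpr1))

    # Calibration within groups: P(label = 1 | positive prediction) per group.
    for g in (0, 1):
        m = (group == g) & pred
        print(f"group {g} precision:", label[m].mean())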

Check out the complete show notes for this episode at twimlai.com/talk/363.

]]>
Today we’re joined by Sharad Goel, Assistant Professor in the management science & engineering department at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying machine learning to better understand and improve public policy.

In our conversation, we dive into Sharad’s non-traditional path to academia, which includes extensive work on discriminatory policing practices like stop-and-frisk, leading up to his work on The Stanford Open Policing Project, which uses data from over 200 million traffic stops nationwide to “help researchers, journalists, and policymakers investigate and improve interactions between police and the public.” Finally, we discuss Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning,” which identifies three formal definitions of fairness in algorithms and the statistical limitations of each, and details how mathematical formalizations of fairness could be introduced into algorithms.

Check out the complete show notes for this episode at twimlai.com/talk/363.

]]>
47:33 clean podcast,technology,data,policy,stanford,fairness,ai,discrimination,policing,ml,artificialintelligence,algorithm,stopandfrisk,socialscience,machinelearning,datascience,twimlai,sharadgoel,infermarginality,stanfordopenpolicingproject Today we’re joined by Sharad Goel, Assistant Professor at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying ML to understanding and improving public policy. In our conversation, we discuss Sharad’s extensive work on discriminatory policing, and The Stanford Open Policing Project. We also dig into Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.” 363 full Sam Charrington
Simulating the Future of Traffic with RL w/ Cathy Wu - #362 Simulating the Future of Traffic with RL w/ Cathy Wu Thu, 02 Apr 2020 05:13:26 +0000 Today we’re joined by Cathy Wu, Gilbert W. Winslow Career Development Assistant Professor in the department of Civil and Environmental Engineering at MIT. We had the pleasure of catching up with Cathy at NeurIPS to discuss her talk “Mixed Autonomy Traffic: A Reinforcement Learning Perspective.” 

In our conversation, we discuss Cathy’s transition to applying machine learning to civil engineering, specifically, understanding the potential impact autonomous vehicles would have on traffic once deployed. To better understand this, Cathy built multiple reinforcement learning simulations, including track and intersection scenarios. We talk through how each scenario is set up, how human drivers are modeled for this simulation, and the results of the experiments.

Check out the complete show notes for this episode at twimlai.com/talk/362.

]]>
Today we’re joined by Cathy Wu, Gilbert W. Winslow Career Development Assistant Professor in the department of Civil and Environmental Engineering at MIT. We had the pleasure of catching up with Cathy at NeurIPS to discuss her talk “Mixed Autonomy Traffic: A Reinforcement Learning Perspective.” 

In our conversation, we discuss Cathy’s transition to applying machine learning to civil engineering, specifically, understanding the potential impact autonomous vehicles would have on traffic once deployed. To better understand this, Cathy built multiple reinforcement learning simulations, including track and intersection scenarios. We talk through how each scenario is set up, how human drivers are modeled for this simulation, and the results of the experiments.

Check out the complete show notes for this episode at twimlai.com/talk/362.

]]>
34:16 clean podcast,technology,tech,data,flow,ai,mit,ml,artificialintelligence,selfdrivingcars,machinelearning,ucberkeley,datascience,autonomousvehicles,twiml,reinforcementlearning,cathywu,alexandrebayen,mixedautonomytraffic Today we’re joined by Cathy Wu, Assistant Professor at MIT. We had the pleasure of catching up with Cathy to discuss her work applying RL to mixed autonomy traffic, specifically, understanding the potential impact autonomous vehicles would have on various mixed-autonomy scenarios. To better understand this, Cathy built multiple RL simulations, including a track, intersection, and merge scenarios. We talk through how each scenario is set up, how human drivers are modeled, the results, and much more. 362 full Sam Charrington
Consciousness and COVID-19 with Yoshua Bengio - #361 Consciousness and COVID-19 with Yoshua Bengio Mon, 30 Mar 2020 05:00:00 +0000 Today we’re joined by one of the most cited (if not the most cited) computer scientists in the world, Yoshua Bengio. Yoshua is a Professor in the Department of Computer Science and Operations Research at the University of Montreal and the Founder and Scientific Director of MILA. We caught up with Yoshua just a few weeks into the coronavirus pandemic, so we spend a bit of time discussing both his broader work on the impact of AI in society and his current endeavors: building a COVID-19 tracing application and using ML to propose experimental candidate drugs.

We also explore his work on consciousness, including how Yoshua defines consciousness, his paper “The Consciousness Prior,” the relationship between consciousness and intelligence, how attention could be used to train consciousness, the current state of consciousness research, and how he sees it evolving. 

Check out the complete show notes page at twimlai.com/talk/361.

]]>
Today we’re joined by one of the most cited (if not the most cited) computer scientists in the world, Yoshua Bengio. Yoshua is a Professor in the Department of Computer Science and Operations Research at the University of Montreal and the Founder and Scientific Director of MILA. We caught up with Yoshua just a few weeks into the coronavirus pandemic, so we spend a bit of time discussing both his broader work on the impact of AI in society and his current endeavors: building a COVID-19 tracing application and using ML to propose experimental candidate drugs.

We also explore his work on consciousness, including how Yoshua defines consciousness, his paper “The Consciousness Prior,” the relationship between consciousness and intelligence, how attention could be used to train consciousness, the current state of consciousness research, and how he sees it evolving. 

Check out the complete show notes page at twimlai.com/talk/361.

]]>
48:19 clean podcast,technology,tech,data,consciousness,mila,ai,ml,coronavirus,artificialintelligence,climatechange,machinelearning,datascience,twiml,universityofmontreal,covid19,yoshuabengio,acmturingaward,theconsciousnessprior,differentialprivacy Today we’re joined by one of the most cited (if not the most cited) computer scientists in the world, Yoshua Bengio, Professor at the University of Montreal and the Founder and Scientific Director of MILA. We caught up with Yoshua to explore his work on consciousness, including how Yoshua defines consciousness, his paper “The Consciousness Prior,” as well as his current endeavor in building a COVID-19 tracing application, and the use of ML to propose experimental candidate drugs. 361 full Sam Charrington
Geometry-Aware Neural Rendering with Josh Tobin - #360 Geometry-Aware Neural Rendering with Josh Tobin Thu, 26 Mar 2020 05:00:00 +0000 Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning, and more recently, the founder of a stealth startup. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS.

This work looks to build upon DeepMind’s “Neural scene representation and rendering,” with the goal of developing implicit scene understanding. We discuss challenges, the various datasets used to train his model, and the similarities between variational autoencoder training and his process. 

The complete show notes for this episode can be found at twimlai.com/talk/360.

]]>
Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning, and more recently, the founder of a stealth startup. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS.

This work looks to build upon DeepMind’s “Neural scene representation and rendering,” with the goal of developing implicit scene understanding. We discuss challenges, the various datasets used to train his model, and the similarities between variational autoencoder training and his process. 

The complete show notes for this episode can be found at twimlai.com/talk/360.

]]>
24:58 clean podcast,technology,tech,data,ai,nvidia,ml,artificialintelligence,deepmind,machinelearning,ucberkeley,datascience,openai,twiml,neurips,roboticlearning,fullstackdeeplearning,pieterabbeel,geometryawareneuralrendering,inhandblockmanipulation Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS. Josh's goal is to develop implicit scene understanding, building upon DeepMind's Neural scene representation and rendering work. We discuss challenges, the various datasets used to train his model, the similarities between VAE training and his process, and more. 360 full Sam Charrington
The Third Wave of Robotic Learning with Ken Goldberg - #359 The Third Wave of Robotic Learning with Ken Goldberg Mon, 23 Mar 2020 02:47:42 +0000 Today we’re joined by Ken Goldberg, professor of engineering and William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley. Ken, who is also an accomplished artist and a collaborator on projects such as DexNet and The Telegarden, has recently been focusing on robotic learning for grasping.

In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, citing co-contributors Sergey Levine and Pieter Abbeel along the way. Finally, we discuss some of his thoughts on potential robot use cases, from assisting in telemedicine and agriculture to robotic COVID-19 testing.

The complete show notes for this episode can be found at twimlai.com/talk/359.

]]>
Today we’re joined by Ken Goldberg, professor of engineering and William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley. Ken, who is also an accomplished artist and a collaborator on projects such as DexNet and The Telegarden, has recently been focusing on robotic learning for grasping.

In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, citing co-contributors Sergey Levine and Pieter Abbeel along the way. Finally, we discuss some of his thoughts on potential robot use cases, from assisting in telemedicine and agriculture to robotic COVID-19 testing.

The complete show notes for this episode can be found at twimlai.com/talk/359.

]]>
01:00:37 clean podcast,technology,tech,physics,robotics,ai,ml,artificialintelligence,grasping,machinelearning,ucberkeley,datascience,twiml,covid19,kengoldberg,sergeylevine,peiterabbeel,roboticlearning,dexnet,telegarden Today we’re joined by Ken Goldberg, professor of engineering at UC Berkeley, focused on robotic learning. In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, and his thoughts on potential robot use cases, from assisting in telemedicine and agriculture to robotic COVID-19 testing. 359 full Sam Charrington
Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358 Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee Wed, 18 Mar 2020 21:04:03 +0000 Today we’re joined by Stefan Lee, assistant professor at the school of electrical engineering and computer science at Oregon State University. Stefan, who we sat down with at NeurIPS this past winter, is focused on the development of agents that can perceive their environment and communicate their understanding with humans in order to coordinate their actions to achieve mutual goals. 

In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks, a model for learning joint representations of image content and natural language. We talk through the development and training process for this model, the adaptation of the training process to incorporate additional visual information into BERT models, and where this research leads from the perspective of integrating visual and language tasks. Finally, we discuss the importance of visual grounding.

Check out the complete show notes page at twimlai.com/talk/358.

]]>
Today we’re joined by Stefan Lee, assistant professor at the school of electrical engineering and computer science at Oregon State University. Stefan, who we sat down with at NeurIPS this past winter, is focused on the development of agents that can perceive their environment and communicate their understanding with humans in order to coordinate their actions to achieve mutual goals. 

In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks, a model for learning joint representations of image content and natural language. We talk through the development and training process for this model, the adaptation of the training process to incorporate additional visual information into BERT models, and where this research leads from the perspective of integrating visual and language tasks. Finally, we discuss the importance of visual grounding.

Check out the complete show notes page at twimlai.com/talk/358.

]]>
27:36 clean podcast,technology,tech,data,linguistics,ai,bert,ml,artificialintelligence,computerscience,machinelearning,datascience,oregonstateuniversity,twiml,neurips,stefanlee,vilbert,transformermodel,visualinformation,visualquestionanswering Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, the adaptation of the training process to incorporate additional visual information to BERT models, where this research leads from the perspective of integration between visual and language tasks. 358 full Sam Charrington
Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357 Upside-Down Reinforcement Learning with Jürgen Schmidhuber Mon, 16 Mar 2020 07:24:12 +0000 Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland.

Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, which has become a prevalent neural network commonly used in devices such as smartphones; we discussed it in detail in our first conversation with Jürgen back in 2017.

In this conversation, we dive into some of Jürgen’s more recent work, including his paper, Reinforcement Learning Upside Down: Don’t Predict Rewards — Just Map Them to Actions.
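A rough sketch of the core idea, in case it helps (our own toy illustration, not Jürgen’s implementation): logged episodes are turned into a supervised dataset that maps (state, desired return, horizon) to the action actually taken, and an ordinary classifier is trained on it; at inference time the “command” (return, horizon) is chosen to ask the policy for high-return behavior.

    import numpy as np

    def make_udrl_dataset(episodes):
        # episodes: list of lists of (state, action, reward) tuples
        inputs, targets = [], []
        for ep in episodes:
            rewards = [r for (_, _, r) in ep]
            for t, (s, a, _) in enumerate(ep):
                desired_return = sum(rewards[t:])  # return actually achieved from step t
                horizon = len(ep) - t              # steps remaining in the episode
                inputs.append(np.concatenate([s, [desired_return, horizon]]))
                targets.append(a)
        return np.array(inputs), np.array(targets)

    # Toy usage with random episodes, just to show the shapes involved.
    rng = np.random.default_rng(0)
    episodes = [[(rng.standard_normal(3), rng.integers(2), rng.random()) for _ in range(5)]
                for _ in range(4)]
    X, y = make_udrl_dataset(episodes)
    print(X.shape, y.shape)  # (20, 5) inputs, (20,) action targets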

Check out the show notes page at twimlai.com/talk/357.

]]>
Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland.

Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, which has become a prevalent neural network commonly used in devices such as smartphones; we discussed it in detail in our first conversation with Jürgen back in 2017.

In this conversation, we dive into some of Jürgen’s more recent work, including his paper, Reinforcement Learning Upside Down: Don’t Predict Rewards — Just Map Them to Actions.

Check out the show notes page at twimlai.com/talk/357.

]]>
33:19 clean podcast,ai,audi,starcraft,dota,ml,artificialintelligence,supervisedlearning,deepmind,machinelearning,lstm,openai,twiml,neurips,reinforcementlearning,sepphochreiter,nnaisense,idsia,supsi,upsidedownreinforcementlearning Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, and in this conversation, we discuss some of the recent research coming out of his lab, namely Upside-Down Reinforcement Learning. 357 full Sam Charrington
SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen - #356 SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen Thu, 12 Mar 2020 04:43:48 +0000
Today we're joined by Beidi Chen, PhD student at Rice University. Beidi is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. In this interview, Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.
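To give a flavor of the approach, here’s a minimal sketch of LSH-based candidate selection for a wide output layer (a simplified illustration of the general idea with made-up sizes, not the actual SLIDE system, which also handles multiple hash tables, periodic re-hashing, and multi-threaded CPU execution).

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_neurons, n_bits = 128, 10_000, 8

    W = rng.standard_normal((n_neurons, d)) * 0.01   # output-layer weight vectors
    planes = rng.standard_normal((n_bits, d))        # random hyperplanes for SimHash

    def hash_code(v):
        # Sign pattern of the projections onto the random hyperplanes.
        return tuple((planes @ v > 0).astype(np.int8))

    # Hash every neuron's weight vector into a bucket.
    table = {}
    for i, w in enumerate(W):
        table.setdefault(hash_code(w), []).append(i)

    def sparse_forward(x):
        # Retrieve only the neurons that collide with the input, then compute
        # activations for that small candidate set instead of the full layer.
        candidates = table.get(hash_code(x), [])
        return {i: float(W[i] @ x) for i in candidates}

    x = rng.standard_normal(d)
    print(len(sparse_forward(x)), "of", n_neurons, "neurons evaluated")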
 
Check out the complete show notes at twimlai.com/talk/356. 
]]>
Today we're joined by Beidi Chen, PhD student at Rice University. Beidi is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. In this interview, Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.

Check out the complete show notes at twimlai.com/talk/356.

]]>
31:21 clean podcast,slide,technology,tech,data,ai,gpu,cpu,ml,artificialintelligence,machinelearning,datascience,riceuniversity,deeplearning,twiml,neurips,beidichen,mlsys,sysml Beidi Chen is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing. 356 full Sam Charrington
Advancements in Machine Learning with Sergey Levine - #355 Advancements in Machine Learning with Sergey Levine Mon, 09 Mar 2020 20:16:00 +0000 Today we're joined by Sergey Levine, an Assistant Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. We caught up with Sergey at NeurIPS 2019, where Sergey and his team presented 12 different papers -- which means a lot of ground to cover!

Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” Sergey shares how many of the papers presented at the most recent NeurIPS conference are working to make that happen. Some of the major developments have been in the research fields of model-free reinforcement learning, causality and imitation learning, and offline reinforcement learning.

Check out the complete show notes page at twimlai.com/talk/355.

]]>
Today we're joined by Sergey Levine, an Assistant Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. We last heard from Sergey back in 2017, when we explored Deep Robotic Learning. We caught up with Sergey at NeurIPS 2019, where Sergey and his team presented 12 different papers -- which means a lot of ground to cover!

Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” Sergey shares how many of the papers presented at the most recent NeurIPS conference are working to make that happen. Some of the major developments have been in the research fields of model-free reinforcement learning, causality and imitation learning, and offline reinforcement learning.

Check out the complete show notes page at twimlai.com/talk/355.

]]>
42:13 clean podcast,science,technology,tech,data,intelligence,learning,artificial,imitation,machine,ai,reinforcement,ml,causality,twiml,modelbased Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, where we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where Sergey and his team presented 12 different papers -- which means a lot of ground to cover! 355 full Sam Charrington
Secrets of a Kaggle Grandmaster with David Odaibo - #354 Secrets of a Kaggle Grandmaster with David Odaibo Thu, 05 Mar 2020 21:16:03 +0000 Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions.

Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions. Having completed his degree last year, he is currently co-founder and CTO of Analytical AI, a company that grew out of one of his recent Kaggle successes.

David has a background in deep learning and medical imaging–something he shares with his brother, Stephen Odaibo, who we interviewed last year about his work in Retinal Image Generation for Disease Discovery.

Check out the full article and interview at twimlai.com/talk/354

]]>
Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions.

Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions. Having completed his degree last year, he is currently co-founder and CTO of Analytical AI, a company that grew out of one of his recent Kaggle successes.

David has a background in deep learning and medical imaging–something he shares with his brother, Stephen Odaibo, who we interviewed last year about his work in Retinal Image Generation for Disease Discovery.

Check out the full article and interview at twimlai.com/talk/354

]]>
41:15 clean medical,network,image,homeland,security,architecture,drivers,generation,imaging,artificial,ai,csharp,distracted,segmentation,kernels,lstm,kaggle,analyticalai,encoderdecoder,unet Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions. Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions, and co-founder and CTO of Analytical AI. 354 full Sam Charrington
NLP for Mapping Physics Research with Matteo Chinazzi - #353 NLP for Mapping Physics Research with Matteo Chinazzi Mon, 02 Mar 2020 23:21:30 +0000 Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach, along with co-authors including former TWIML AI Podcast guest Bruno Gonçalves.

In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale. 

Check out our full article on this episode at twimlai.com/talk/353.

]]>
Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach, along with co-authors including former TWIML AI Podcast guest Bruno Gonçalves.

In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale. 

Check out our full article on this episode at twimlai.com/talk/353.

]]>
34:12 clean podcast,science,network,technology,tech,word,data,systems,intelligence,learning,artificial,machine,applied,ai,complex,embedding,ml,twiml,word2vec Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale. 353 full Sam Charrington
Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo - #352 Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo Thu, 27 Feb 2020 16:38:25 +0000 The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that today’s guest, Sanmi Koyejo, has dedicated his research to addressing.

Sanmi is an assistant professor at the Department of Computer Science at the University of Illinois, where he applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research which focuses broadly on “adaptive and robust machine learning.”

Check out the full episode write-up at twimlai.com/talk/352.

]]>
The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that today’s guest, Sanmi Koyejo, has dedicated his research to addressing.

Sanmi is an assistant professor at the Department of Computer Science at the University of Illinois, where he applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research which focuses broadly on “adaptive and robust machine learning.”

Check out the full episode write-up at twimlai.com/talk/352.

]]>
55:11 clean podcast,of,science,technology,tech,data,intelligence,models,learning,university,byzantine,cognitive,artificial,distributed,inference,machine,ai,bayesian,illinois,metric,robust,radios,ml,sanmi,probabilistic,elicitation,icml,twiml,koyejo The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, assistant professor at the University of Illinois, has dedicated his research to address. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research which focuses broadly on “adaptive and robust machine learning.” 352 full Sam Charrington
High-Dimensional Robust Statistics with Ilias Diakonikolas - #351 High-Dimensional Robust Statistics with Ilias Diakonikolas Mon, 24 Feb 2020 21:14:36 +0000 Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which was the recipient of the NeurIPS 2019 Outstanding Paper award. The paper, which focuses on high-dimensional robust learning, is regarded as the first progress made around distribution-independent learning with noise since the 80s. In our conversation, we explore robustness in machine learning, problems with corrupt data in high-dimensional settings, and of course, a deep dive into the paper. 

Check out our full write up on the paper and the interview at twimlai.com/talk/351.

]]>
Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which was the recipient of the NeurIPS 2019 Outstanding Paper award. The paper, which focuses on high-dimensional robust learning, is regarded as the first progress made around distribution-independent learning with noise since the 80s. In our conversation, we explore robustness in machine learning, problems with corrupt data in high-dimensional settings, and of course, a deep dive into the paper. 

Check out our full write up on the paper and the interview at twimlai.com/talk/351.

]]>
34:48 clean podcast,of,science,technology,tech,data,intelligence,learning,university,paper,wisconsin,artificial,machine,ai,ml,outstanding,icml,twiml,neurips Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, recipient of the NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress made around distribution-independent learning with noise since the 80s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and of course, the paper. 351 full Sam Charrington
How AI Predicted the Coronavirus Outbreak with Kamran Khan - #350 How AI Predicted the Coronavirus Outbreak with Kamran Khan Wed, 19 Feb 2020 18:31:13 +0000 Today we’re joined by Kamran Khan, founder & CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot, a digital health company with a focus on surveilling global infectious disease outbreaks, has been the recipient of a lot of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In this interview, Kamran talks us through how the technology works, its limits, and the motivation behind the work. 

Check out our new and improved show notes article at twimlai.com/talk/350.

]]>
Today we’re joined by Kamran Khan, founder & CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot, a digital health company with a focus on surveilling global infectious disease outbreaks, has been the recipient of a lot of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In this interview, Kamran talks us through how the technology works, its limits, and the motivation behind the work. 

Check out our new and improved show notes article at twimlai.com/talk/350.

]]>
50:05 clean podcast,of,science,technology,tech,data,toronto,intelligence,learning,university,artificial,machine,ai,khan,ml,coronavirus,kamran,twiml,bluedot Today we’re joined by Kamran Khan, founder & CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot has been the recipient of a lot of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In our conversation, Kamran talks us through how the technology works, its limits, and the motivation behind the work. 350 full Sam Charrington
Turning Ideas into ML Powered Products with Emmanuel Ameisen - #349 Turning Ideas into ML Powered Products with Emmanuel Ameisen Mon, 17 Feb 2020 22:02:00 +0000 Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications: Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring.

Check out our full show notes article at twimlai.com/talk/349.

]]>
Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications: Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring.

Check out our full show notes article at twimlai.com/talk/349.

]]>
42:53 clean podcast,science,ross,technology,tech,data,intelligence,learning,oreilly,artificial,machine,ai,geoffrey,emmanuel,ml,hinton,twiml,ameisen,fadely Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications; Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring.  349 full Sam Charrington
Algorithmic Injustices and Relational Ethics with Abeba Birhane - #348 Algorithmic Injustices and Relational Ethics with Abeba Birhane Thu, 13 Feb 2020 20:53:56 +0000 Today we’re joined by Abeba Birhane, PhD Student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics. We caught up with Abeba, whose aforementioned paper was the recipient of the Best Paper award at the most recent Black in AI Workshop at NeurIPS, to go in-depth on the paper and the thought process around AI ethics.

In our conversation, we discuss the “harm of categorization” and how the thinking around these categorizations should be discussed, how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could address this issue, her most recent paper “Robot Rights? Let’s Talk about Human Welfare Instead,” and much more.

Check out our complete write-up and resource page at twimlai.com/talk/348. 

]]>
Today we’re joined by Abeba Birhane, PhD Student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics. We caught up with Abeba, whose aforementioned paper was the recipient of the Best Paper award at the most recent Black in AI Workshop at NeurIPS, to go in-depth on the paper and the thought process around AI ethics.

In our conversation, we discuss the “harm of categorization” and how the thinking around these categorizations should be discussed, how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could address this issue, her most recent paper “Robot Rights? Let’s Talk about Human Welfare Instead,” and much more.

Check out our complete write-up and resource page at twimlai.com/talk/348. 

]]>
41:19 clean podcast,science,technology,tech,data,intelligence,learning,university,college,artificial,van,jelle,machine,ai,dublin,ml,abeba,twiml,birhane Today we’re joined by Abeba Birhane, PhD Student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics, which was the recipient of the Best Paper award at the 2019 Black in AI Workshop at NeurIPS. In our conversation, we break down the paper and the thought process around AI ethics, the “harm of categorization,” how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could solve the issue, and much more. 348 full Sam Charrington
AI for Agriculture and Global Food Security with Nemo Semret - #347 AI for Agriculture and Global Food Security with Nemo Semret Mon, 10 Feb 2020 20:29:12 +0000 Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling. 

 

Check out the full interview and show notes at twimlai.com/talk/347.

]]>
Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling. 

 

Check out the full interview and show notes at twimlai.com/talk/347.

]]>
01:06:38 clean podcast,science,black,technology,tech,in,data,intelligence,learning,sara,artificial,machine,ai,gro,ml,twiml,neurips Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling. 347 full Sam Charrington
Practical Differential Privacy at LinkedIn with Ryan Rogers - #346 Practical Differential Privacy at LinkedIn with Ryan Rogers Fri, 07 Feb 2020 19:39:47 +0000 Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn. We caught up with Ryan at NeurIPS, where he presented the paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition” as a spotlight talk. In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy with differential privacy, and the major components of the paper. We also talk through one of the big innovations in the paper, which is discovering the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise, which is commonly used in machine learning.
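For the curious, the connection can be sketched in a few lines (a toy illustration with hypothetical utility scores, not LinkedIn's production code): the exponential mechanism selects item i with probability proportional to exp(eps * u_i / (2 * sensitivity)), and the Gumbel-max trick, adding Gumbel(0, 1) noise to the scaled utilities and taking the argmax, samples from exactly that distribution.

    import numpy as np

    rng = np.random.default_rng(0)
    utilities = np.array([10.0, 9.5, 3.0, 1.0])  # hypothetical utility scores
    eps, sens = 1.0, 1.0                         # privacy budget and sensitivity

    def exponential_mechanism_gumbel(u, eps, sens):
        scaled = eps * u / (2 * sens)            # log-probabilities up to a constant
        gumbel = rng.gumbel(size=u.shape)        # Gumbel(0, 1) noise
        return int(np.argmax(scaled + gumbel))   # argmax of noised scores = one sample

    picks = [exponential_mechanism_gumbel(utilities, eps, sens) for _ in range(10_000)]
    print(np.bincount(picks) / len(picks))       # empirical selection frequencies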

 

The complete show notes for this episode can be found at twimlai.com/talk/346

]]>
Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn. We caught up with Ryan at NeurIPS, where he presented the paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition” as a spotlight talk. In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy with differential privacy, and the major components of the paper. We also talk through one of the big innovations in the paper, which is discovering the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise, which is commonly used in machine learning.

 

The complete show notes for this episode can be found at twimlai.com/talk/346

]]>
33:31 clean podcast,science,technology,linkedin,tech,data,rogers,microsoft,intelligence,apple,learning,ryan,artificial,wwdc,machine,ai,ml,twiml,neurips Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn, to discuss his paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition.” In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy through differential privacy, and the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise. 346 full Sam Charrington
Networking Optimizations for Multi-Node Deep Learning on Kubernetes with Erez Cohen - #345 Networking Optimizations for Multi-Node Deep Learning on Kubernetes with Erez Cohen Wed, 05 Feb 2020 17:33:06 +0000 Today we conclude our KubeCon ‘19 Series joined by Erez Cohen, VP of CloudX & AI at Mellanox. In our conversation, we discuss:

  • Erez’s talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes,” where he discusses problems and solutions related to networking discovered during the journey to reduce training time.
  • NVIDIA’s recent acquisition of Mellanox, and what fruits that relationship hopes to bear. 
  • The evolution of technologies like RDMA, GPU Direct, and SHARP, Mellanox’s solution for improving the performance of MPI operations, which can be found in NVIDIA’s NCCL collective communications library.
  • How Mellanox is enabling Kubernetes and other platforms to take advantage of the various technologies mentioned above. 
  • Why we should care about networking in Deep Learning, which is inherently a compute-bound process. 

The complete show notes for this episode can be found at twimlai.com/talk/345.

]]>
Today we conclude our KubeCon ‘19 Series joined by Erez Cohen, VP of CloudX & AI at Mellanox. In our conversation, we discuss:

  • Erez’s talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes,” where he discusses problems and solutions related to networking discovered during the journey to reduce training time.
  • NVIDIA’s recent acquisition of Mellanox, and what fruits that relationship hopes to bear. 
  • The evolution of technologies like RDMA, GPU Direct, and SHARP, Mellanox’s solution for improving the performance of MPI operations, which can be found in NVIDIA’s NCCL collective communications library.
  • How Mellanox is enabling Kubernetes and other platforms to take advantage of the various technologies mentioned above. 
  • Why we should care about networking in Deep Learning, which is inherently a compute-bound process. 

The complete show notes for this episode can be found at twimlai.com/talk/345.

]]>
34:00 clean podcast,science,networking,technology,tech,data,deep,intelligence,learning,artificial,cohen,machine,ai,ml,twiml,rdma,mellanox,erex Today we conclude the KubeCon ‘19 series joined by Erez Cohen, VP of CloudX & AI at Mellanox, who we caught up with before his talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes.” In our conversation, we discuss NVIDIA’s recent acquisition of Mellanox, the evolution of technologies like RDMA and GPU Direct, how Mellanox is enabling Kubernetes and other platforms to take advantage of the recent advancements in networking tech, and why we should care about networking in Deep Learning. 345 full Sam Charrington
Managing Research Needs at the University of Michigan using Kubernetes w/ Bob Killen - #344 Managing Research Needs at the University of Michigan using Kubernetes w/ Bob Killen Mon, 03 Feb 2020 16:38:25 +0000 Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we discuss:

  • How his group is deploying Kubernetes at UM.
  • The user experience of his broad user base, including those using KubeFlow environments.
  • How users are taking advantage of distributed computing.
  • Whether ML/AI-focused Kubernetes users should fear that the larger non-ML/AI user base will negatively impact their feature needs.
  • Where the largest gaps currently exist in trying to support ML/AI users’ workloads.
  • Where Bob sees things going from a user perspective, and what those users are asking about most.

The complete show notes for this episode can be found at twimlai.com/talk/344.

]]>
Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we discuss:

  • How his group is deploying Kubernetes at UM.
  • The user experience of his broad user base, including those using KubeFlow environments.
  • How users are taking advantage of distributed computing.
  • Whether ML/AI-focused Kubernetes users should fear that the larger non-ML/AI user base will negatively impact their feature needs.
  • Where the largest gaps currently exist in trying to support ML/AI users’ workloads.
  • Where Bob sees things going from a user perspective, and what those users are asking about most.

The complete show notes for this episode can be found at twimlai.com/talk/344.

]]>
24:40 clean podcast,science,technology,tech,cloud,data,intelligence,learning,bob,workflow,artificial,volcano,machine,ai,ml,killen,kubernetes,twiml,kubeflow Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we explore how Bob and his group at UM are deploying Kubernetes, the user experience, and how those users are taking advantage of distributed computing. We also discuss if ML/AI focused Kubernetes users should fear that the larger non-ML/AI user base will negatively impact their feature needs, where gaps currently exist in trying to support these ML/AI users’ workloads, and more! 344 full Sam Charrington
Scalable and Maintainable Workflows at Lyft with Flyte w/ Haytham AbuelFutuh and Ketan Umare - #343 Scalable and Maintainable Workflows at Lyft with Flyte w/ Haytham AbuelFutuh and Ketan Umare Thu, 30 Jan 2020 19:30:40 +0000 Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. In our conversation, we discuss: 

  • Their newly open-sourced, cloud-native ML and data processing platform, Flyte.
  • What prompted Ketan to undertake this project and his experience building Flyte.
  • The core value proposition of Flyte.
  • What type-systems mean for the user experience.
  • How Flyte relates to Kubeflow. 
  • How Flyte is used across Lyft.

The complete show notes for this episode can be found at twimlai.com/talk/343

]]>
Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. In our conversation, we discuss: 

  • Their newly open-sourced, cloud-native ML and data processing platform, Flyte.
  • What prompted Ketan to undertake this project and his experience building Flyte.
  • The core value proposition of Flyte.
  • What type-systems mean for the user experience.
  • How Flyte relates to Kubeflow. 
  • How Flyte is used across Lyft.

The complete show notes for this episode can be found at twimlai.com/talk/343

]]>
45:24 clean podcast,science,technology,tech,data,intelligence,learning,artificial,machine,ai,platform,flyte,opensource,ml,papermill,lyft,twiml,kubecon,kubeflow Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. We caught up with Haytham and Ketan at KubeCon, where they were presenting their newly open-sourced, cloud-native ML and data processing platform, Flyte. We discuss what prompted Ketan to undertake this project and his experience building Flyte, the core value proposition, what type systems mean for the user experience, how it relates to Kubeflow, and how Flyte is used across Lyft. 343 full Sam Charrington
Causality 101 with Robert Osazuwa Ness - #342 Causality 101 with Robert Osazuwa Ness Mon, 27 Jan 2020 20:30:27 +0000 Today we’re joined by Robert Osazuwa Ness, Machine Learning Research Engineer at ML startup Gamalon and Instructor at Northeastern University. Robert, whom we had the pleasure of meeting at the Black in AI Workshop at NeurIPS last month, joins us to discuss:

  • Causality, what it means, and how that meaning changes across domains and users.
  • Benefits of causal models vs non-causal models.
  • Real-world applications of causality. 
  • Various tools and packages for causality.
  • Areas where it is effectively being deployed, like ML in production.
  • Our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning,” for which you can find details at twimlai.com/community.

The complete show notes for this episode can be found at twimlai.com/talk/342.

]]>
Today we’re joined by Robert Osazuwa Ness, Machine Learning Research Engineer at ML startup Gamalon and Instructor at Northeastern University. Robert, whom we had the pleasure of meeting at the Black in AI Workshop at NeurIPS last month, joins us to discuss:

  • Causality, what it means, and how that meaning changes across domains and users.
  • Benefits of causal models vs non-causal models.
  • Real-world applications of causality. 
  • Various tools and packages for causality.
  • Areas where it is effectively being deployed, like ML in production.
  • Our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning,” for which you can find details at twimlai.com/community.

The complete show notes for this episode can be found at twimlai.com/talk/342.

]]>
43:14 clean podcast,science,technology,tech,data,intelligence,learning,robert,artificial,machine,ai,ness,ml,causality,causal,twiml,timnit,gebru,gamalon Today Robert Osazuwa Ness, ML Research Engineer at Gamalon and Instructor at Northeastern University joins us to discuss Causality, what it means, and how that meaning changes across domains and users, and our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning," for which you can find details at twimlai.com/community. 342 full Sam Charrington
PaccMann^RL: Designing Anticancer Drugs with Reinforcement Learning w/ Jannis Born - #341 PaccMann^RL: Designing Anticancer Drugs with Reinforcement Learning w/ Jannis Born Thu, 23 Jan 2020 17:06:00 +0000 Today we’re joined by Jannis Born, Ph.D. student at ETH & IBM Research Zurich. We caught up with Jannis a few weeks back at NeurIPS to discuss:

  • His research paper “PaccMann^RL: Designing anticancer drugs from transcriptomic data via reinforcement learning,” a framework built to accelerate new anticancer drug discovery.
  • How his background in cognitive science and computational neuroscience applies to his current ML research.
  • How reinforcement learning fits into the goal of cancer drug discovery, and how deep learning has changed this research.
  • A few interesting observations made during the training of their DRL learner.
  • A step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and subsequently discover new anticancer drugs.

Check out the complete show notes for this episode at twimlai.com/talk/341.

]]>
Today we’re joined by Jannis Born, Ph.D. student at ETH & IBM Research Zurich. We caught up with Jannis a few weeks back at NeurIPS to discuss:

  • His research paper “PaccMann^RL: Designing anticancer drugs from transcriptomic data via reinforcement learning,” a framework built to accelerate new anticancer drug discovery.
  • How his background in cognitive science and computational neuroscience applies to his current ML research.
  • How reinforcement learning fits into the goal of cancer drug discovery, and how deep learning has changed this research.
  • A few interesting observations made during the training of their DRL learner.
  • A step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and subsequently discover new anticancer drugs.

Check out the complete show notes for this episode at twimlai.com/talk/341.

]]>
43:13 clean podcast,science,technology,tech,data,intelligence,learning,born,artificial,ibm,machine,ai,reinforcement,rl,ml,jannis,eth,twiml Today we’re joined by Jannis Born, Ph.D. student at ETH & IBM Research Zurich, to discuss his “PaccMann^RL” research. Jannis details how his background in computational neuroscience applies to this research, how RL fits into the goal of anticancer drug discovery, the effect DL has had on his research, and of course, a step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and then discover new anticancer drugs. 341 full Sam Charrington
Social Intelligence with Blaise Aguera y Arcas - #340 Social Intelligence with Blaise Aguera y Arcas Mon, 20 Jan 2020 19:56:49 +0000 Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss:

  • Blaise’s role at Google, where he leads the Cerebra team. 
  • Their approach to machine learning at the company, and how they differ from the more forward-facing Google Brain team. 
  • A look into his presentation, which covers today’s ML landscape.
  • The gap between AI and ML/DS research, what it means and why it exists.
  • The difference between intelligent systems and what we would deem to be “actual intelligence.” 
  • What optimizing truly means when training models.

Check out the complete show notes for this episode at twimlai.com/talk/340.

]]>
Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss:

  • Blaise’s role at Google, where he leads the Cerebra team. 
  • Their approach to machine learning at the company, and how they differ from the more forward-facing Google Brain team. 
  • A look into his presentation, which covers today’s ML landscape.
  • The gap between AI and ML/DS research, what it means and why it exists.
  • The difference between intelligent systems and what we would deem to be “actual intelligence.” 
  • What optimizing truly means when training models.

Check out the complete show notes for this episode at twimlai.com/talk/340.

]]>
46:56 clean podcast,science,social,technology,tech,google,data,intelligence,learning,artificial,machine,ai,agi,ml,twiml Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss his role at Google, his team’s approach to machine learning, and of course his presentation, in which he touches on today’s ML landscape, the gap between AI and ML/DS, the difference between intelligent systems and true intelligence, and much more. 340 full Sam Charrington
Music & AI Plus a Geometric Perspective on Reinforcement Learning with Pablo Samuel Castro - #339 Music & AI Plus a Geometric Perspective on Reinforcement Learning with Pablo Samuel Castro Thu, 16 Jan 2020 19:27:40 +0000 Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. We caught up with Pablo, whose research focuses mainly on reinforcement learning, at NeurIPS last month. We cover a lot of ground in our conversation, including his love for music and how it has guided his work on the Lyric AI project, as well as a few of his other NeurIPS submissions, including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.”

Check out the complete show notes at twimlai.com/talk/339.

]]>
Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. We caught up with Pablo, whose research focuses mainly on reinforcement learning, at NeurIPS last month. We cover a lot of ground in our conversation, including his love for music and how it has guided his work on the Lyric AI project, as well as a few of his other NeurIPS submissions, including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.”

Check out the complete show notes at twimlai.com/talk/339.

]]>
43:49 clean podcast,and,science,technology,music,tech,data,intelligence,learning,artificial,machine,ai,reinforcement,ml,geometric,twiml Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. We cover a lot of ground in our conversation, including his love for music, and how that has guided his work on the Lyric AI project, and a few of his papers including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.” 339 full Sam Charrington
Trends in Computer Vision with Amir Zamir - #338 Trends in Computer Vision with Amir Zamir Mon, 13 Jan 2020 23:10:19 +0000 Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology.

Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more! Check out the rest of the series at twimlai.com/rewind19.

The complete show notes for this episode can be found at twimlai.com/talk/338.

]]>
Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology.

Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more! Check out the rest of the series at twimlai.com/rewind19.

The complete show notes for this episode can be found at twimlai.com/talk/338.

]]>
01:30:18 clean podcast,of,science,technology,tech,data,intelligence,models,vision,learning,computer,3d,federal,swiss,artificial,institute,amir,machine,ai,ml,supervised,epfl,zamir,twiml,selfsupervised,visionforrobotics Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology. Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more! 338 full Sam Charrington
Trends in Natural Language Processing with Nasrin Mostafazadeh - #337 Trends in Natural Language Processing with Nasrin Mostafazadeh Thu, 09 Jan 2020 22:33:10 +0000 Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space.

The complete show notes can be found at twimlai.com/talk/337.

Check out the rest of the series at twimlai.com/rewind19!

]]>
Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space.

The complete show notes can be found at twimlai.com/talk/337.

Check out the rest of the series at twimlai.com/rewind19!

]]>
01:12:17 clean podcast,science,technology,tech,data,language,intelligence,models,learning,processing,natural,ethics,artificial,cognition,rewind,machine,bias,ai,bert,transformer,nlp,elemental,ml,explanation,nasrin,twiml,mostafazadeh,interpretability,gpt2,allennlp Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space. 337 full Sam Charrington
Trends in Fairness and AI Ethics with Timnit Gebru - #336 Trends in Fairness and AI Ethics with Timnit Gebru - #336 Mon, 06 Jan 2020 20:02:14 +0000 Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via twitter @samcharrington or @twimlai.

The complete show notes for this episode can be found at twimlai.com/talk/336.

Check out the rest of the series at twimlai.com/rewind19!

]]>
Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via twitter @samcharrington or @twimlai.

The complete show notes for this episode can be found at twimlai.com/talk/336.

Check out the rest of the series at twimlai.com/rewind19!

]]>
49:45 clean podcast,science,black,technology,tech,in,google,data,360,microsoft,intelligence,learning,landscape,queer,ethics,artificial,fairness,ibm,machine,joy,gender,bias,ai,shades,jews,toolkit,ml,twiml,neurips,wiml,timnit,gebru,buolamwini Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more. 336 full Sam Charrington
Trends in Reinforcement Learning with Chelsea Finn - #335 Trends in Reinforcement Learning with Chelsea Finn Thu, 02 Jan 2020 19:59:28 +0000 Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the Computer Science Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year. 

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via twitter @samcharrington or @twimlai.

The complete show notes for this episode can be found at twimlai.com/talk/335.

Check out the rest of the series at twimlai.com/rewind19!

]]>
Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the Computer Science Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year. 

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via twitter @samcharrington or @twimlai.

The complete show notes for this episode can be found at twimlai.com/talk/335.

Check out the rest of the series at twimlai.com/rewind19!

]]>
01:06:57 clean podcast,science,tools,technology,tech,google,football,data,deep,intelligence,learning,research,chelsea,artificial,finn,machine,predictions,papers,ai,reinforcement,opensource,ml,metalearning,twiml,pytorch,modelbased Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the CS Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year. 335 full Sam Charrington
Trends in Machine Learning & Deep Learning with Zack Lipton - #334 Trends in Machine Learning & Deep Learning with Zack Lipton Mon, 30 Dec 2019 19:23:14 +0000 Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, a jointly appointed Professor in the Tepper School of Business and the Machine Learning Department at CMU.

You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism, which you can find at twimlai.com/talk/285. In our conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.

To get the complete show notes for this episode, head over to twimlai.com/talk/334.

]]>
Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, a jointly appointed Professor in the Tepper School of Business and the Machine Learning Department at CMU.

You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism, which you can find at twimlai.com/talk/285. In our conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.

To get the complete show notes for this episode, head over to twimlai.com/talk/334.

]]>
01:19:42 clean podcast,science,zack,technology,tech,data,deep,intelligence,models,learning,artificial,rewind,lipton,machine,ai,bert,cmu,classification,ml,causality,twiml,imagenet,cifair10,energybased,iid Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, Professor at CMU. You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism. In today's conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more. We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai. 334 full Sam Charrington
FaciesNet & Machine Learning Applications in Energy with Mohamed Sidahmed - #333 FaciesNet & Machine Learning Applications in Energy with Mohamed Sidahmed Fri, 27 Dec 2019 20:08:21 +0000 Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&D Manager at Shell. In our conversation, we discuss: 

  • The papers Mohamed and his team submitted to the conference this year, in particular: 
    • Accelerating Least Squares Imaging Using Deep Learning Techniques, which details how researchers can computationally efficiently reconstruct imaging using a deep learning framework approach.

    • FaciesNet: Machine Learning Applications for Facies Classification in Well Logs, which Mohamed describes as “A novel way of designing a new architecture for how we use sequence modeling and recurrent networks to be able to break out of the benchmark for classifying the different types of rock.” 

The full show notes for this episode can be found at twimlai.com/talk/333. Make sure you head over to twimlai.com/neurips2019 to follow along with this series!

]]>
Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&D Manager at Shell. In our conversation, we discuss: 

  • The papers Mohamed and his team submitted to the conference this year, in particular: 
    • Accelerating Least Squares Imaging Using Deep Learning Techniques, which details how researchers can computationally efficiently reconstruct imaging using a deep learning framework approach.
    • FaciesNet: Machine Learning Applications for Facies Classification in Well Logs, which Mohamed describes as “A novel way of designing a new architecture for how we use sequence modeling and recurrent networks to be able to break out of the benchmark for classifying the different types of rock.” 

The full show notes for this episode can be found at twimlai.com/talk/333. Make sure you head over to twimlai.com/neurips2019 to follow along with this series!

]]>
40:31 clean podcast,science,technology,image,tech,data,deep,intelligence,modeling,vision,learning,computer,artificial,mohamed,machine,ai,sequence,shell,geology,classification,geophysics,ml,2019,geophysicist,twiml,neurips,sidahmed,facies Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&D Manager at Shell. In our conversation, we discuss two papers Mohamed and his team submitted to the conference this year, Accelerating Least Squares Imaging Using Deep Learning Techniques, and FaciesNet: Machine Learning Applications for Facies Classification in Well Logs. The show notes for this episode can be found at twimlai.com/talk/333/, where you’ll find links to both of these papers! 333 full Sam Charrington
Machine Learning: A New Approach to Drug Discovery with Daphne Koller - #332 Machine Learning: A New Approach to Drug Discovery with Daphne Koller Thu, 26 Dec 2019 18:41:47 +0000 Today we continue our 2019 NeurIPS coverage joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. We caught up with Daphne to discuss: 

  • Her background in machine learning, beginning in ‘93, her work with the Stanford online machine learning courses, and eventually her work at Coursera.
  • The current landscape of pharmaceutical drug discovery, including the current pricing of drugs and common misconceptions about why drugs are so expensive.
  • Her work at Insitro, a company looking to advance drug discovery and development with machine learning. 
  • An overview of Insitro’s goal of using ML as a “compass” in drug discovery. 
  • How Insitro functions as a company in this space, including their focus on the biology of drug discovery and the landscape of ML techniques being used.
  • Daphne’s thoughts on AutoML, and much more!

The full show notes for this episode can be found at twimlai.com/talk/332. Make sure you head over to twimlai.com/neurips2019 to follow along with this series!

]]>
Today we continue our 2019 NeurIPS coverage joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. We caught up with Daphne to discuss: 

  • Her background in machine learning, beginning in ‘93, her work with the Stanford online machine learning courses, and eventually her work at Coursera.
  • The current landscape of pharmaceutical drug discovery, including the current pricing of drugs and common misconceptions about why drugs are so expensive.
  • Her work at Insitro, a company looking to advance drug discovery and development with machine learning. 
  • An overview of Insitro’s goal of using ML as a “compass” in drug discovery. 
  • How Insitro functions as a company in this space, including their focus on the biology of drug discovery and the landscape of ML techniques being used.
  • Daphne’s thoughts on AutoML, and much more!

The full show notes for this episode can be found at twimlai.com/talk/332. Make sure you head over to twimlai.com/neurips2019 to follow along with this series!

]]>
43:40 clean podcast,science,technology,tech,data,intelligence,biology,drugs,learning,university,discovery,pharmaceutical,andrew,drug,stanford,artificial,machine,ai,daphne,ng,ml,2019,coursera,koller,twiml,automl,neurips,insitro Today we’re joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. In our conversation, we discuss the current landscape of pharmaceutical drugs and drug discovery, including the current pricing of drugs, and an overview of Insitro’s goal of using ML as a “compass” in drug discovery. We also explore how Insitro functions as a company, their focus on the biology of drug discovery and the landscape of ML techniques being used, Daphne’s thoughts on AutoML, and much more! 332 full Sam Charrington
Sensory Prediction Error Signals in the Neocortex with Blake Richards - #331 Sensory Prediction Error Signals in the Neocortex with Blake Richards - #331 Tue, 24 Dec 2019 18:55:44 +0000 Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. In our conversation, we discuss:

  • His invited talk at the Neuro-AI Workshop “Sensory Prediction Error Signals in the Neocortex.” 
  • His recent studies on two-photon calcium imaging, predictive coding, and hierarchical inference.
  • Blake’s recent work on memory systems for reinforcement learning. 

The complete show notes for this episode can be found at twimlai.com/talk/331.

Make sure you head over to twimlai.com/neurips2019 to follow along with this series!

]]>
Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. In our conversation, we discuss:

  • His invited talk at the Neuro-AI Workshop “Sensory Prediction Error Signals in the Neocortex.” 
  • His recent studies on two-photon calcium imaging, predictive coding, and hierarchical inference.
  • Blake’s recent work on memory systems for reinforcement learning. 

The complete show notes for this episode can be found at twimlai.com/talk/331.

Make sure you head over to twimlai.com/neurips2019 to follow along with this series!

]]>
41:05 clean podcast,science,technology,tech,data,intelligence,learning,university,neurology,richards,blake,imaging,calcium,mila,artificial,machine,ai,reinforcement,mcgill,neocortex,ml,2019,neuroscience,twiml,neurips,yoshua,bengio,twophoton Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. Blake was an invited speaker at the Neuro-AI Workshop, and presented his research on “Sensory Prediction Error Signals in the Neocortex.” In our conversation, we discuss a series of recent studies on two-photon calcium imaging. We talk predictive coding, hierarchical inference, and Blake’s recent work on memory systems for reinforcement learning. 331 full Sam Charrington
How to Know with Celeste Kidd - #330 How to Know with Celeste Kidd Mon, 23 Dec 2019 18:46:40 +0000 Today we begin our coverage of the 2019 NeurIPS conference with Celeste Kidd, Assistant Professor of Psychology at UC Berkeley. In our conversation, we discuss:

  • The research at the Kidd Lab, which is focused on understanding “how people come to know what they know.”
  • Her invited talk “How to Know,” which details the core cognitive systems people use to guide their learning about the world.
  • Why people are curious about some things but not others.
  • How our past experiences and existing knowledge shape our future interests.
  • Why people believe what they believe, and how these beliefs are influenced in one direction or another.
  • How machine learning figures into this equation.

Check out the complete show notes for this episode at twimlai.com/talk/330. You can also follow along with this series at twimlai.com/neurips2019.

]]>
Today we begin our coverage of the 2019 NeurIPS conference with Celeste Kidd, Assistant Professor of Psychology at UC Berkeley. In our conversation, we discuss:

  • The research at the Kidd Lab, which is focused on understanding “how people come to know what they know.”
  • Her invited talk “How to Know,” which details the core cognitive systems people use to guide their learning about the world.
  • Why people are curious about some things but not others.
  • How our past experiences and existing knowledge shape our future interests.
  • Why people believe what they believe, and how these beliefs are influenced in one direction or another.
  • How machine learning figures into this equation.

Check out the complete show notes for this episode at twimlai.com/talk/330. You can also follow along with this series at twimlai.com/neurips2019.

]]>
54:03 clean podcast,of,science,technology,how,to,tech,data,systems,intelligence,psychology,learning,university,california,lab,berkeley,cognitive,artificial,kidd,machine,ai,know,celeste,uc,ml,twiml,neurips Today we’re joined by Celeste Kidd, Assistant Professor at UC Berkeley, to discuss her invited talk “How to Know” which details her lab’s research about the core cognitive systems people use to guide their learning about the world. We explore why people are curious about some things but not others, and how past experiences and existing knowledge shape future interests, why people believe what they believe, and how these beliefs are influenced, and how machine learning figures into the equation. 330 full Sam Charrington
Using Deep Learning to Predict Wildfires with Feng Yan - #329 Using Deep Learning to Predict Wildfires with Feng Yan Fri, 20 Dec 2019 22:17:04 +0000 Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno. In our conversation, we discuss:

  • ALERTWildfire, a camera-based network infrastructure that captures satellite imagery of wildfires.
  • The many purposes of ALERTWildfire, including the discovery of wildfires, the ability to scale resources accordingly, and a few others.
  • The development of the machine learning models and surrounding infrastructure used in ALERTWildfire. 
  • Problem formulation and challenges with using camera and satellite data in this use case.
  • How they have combined the use of Infra-as-a-Service and Function-as-a-Service tools for cost-effectiveness and scalability. 

Check out the complete show notes at twimlai.com/talk/329.

]]>
Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno. In our conversation, we discuss:

  • ALERTWildfire, a camera-based network infrastructure that captures satellite imagery of wildfires.
  • The many purposes of ALERTWildfire, including the discovery of wildfires, the ability to scale resources accordingly, and a few others.
  • The development of the machine learning models and surrounding infrastructure used in ALERTWildfire. 
  • Problem formulation and challenges with using camera and satellite data in this use case.
  • How they have combined the use of Infra-as-a-Service and Function-as-a-Service tools for cost-effectiveness and scalability. 

Check out the complete show notes at twimlai.com/talk/329.

]]>
49:49 clean podcast,of,science,technology,tech,cloud,computing,data,deep,intelligence,vision,learning,university,computer,satellite,nevada,artificial,infrastructure,machine,reno,imagery,ai,wildfire,scale,prediction,feng,ml,aws,reinvent,2019,yan,lambda,twiml Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno, to discuss ALERTWildfire, a camera-based network infrastructure that captures satellite imagery of wildfires. In our conversation, Feng details the development of the machine learning models and surrounding infrastructure. We also talk through problem formulation, challenges with using camera and satellite data in this use case, and how he has combined the use of IaaS and FaaS tools for cost-effectiveness and scalability. 329 full Sam Charrington
Advancing Machine Learning at Capital One with Dave Castillo - #328 Advancing Machine Learning at Capital One with Dave Castillo Thu, 19 Dec 2019 16:56:58 +0000 Today we’re joined by Dave Castillo, Managing Vice President for ML at Capital One and head of their Center for Machine Learning. We caught up with Dave at re:Invent to discuss the aforementioned Center for Machine Learning, and what has changed since our last discussion with Capital One, which you can find at twimlai.com/talk/147. In our conversation, we explore:

  • Capital One’s transition from “lab-based” machine learning to “enterprise-wide” adoption and support of ML.
  • Surprising machine learning use cases like granting employee access privileges via an automated system.
  • Their current platform ecosystem, including their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and more. 

Check out the complete show notes for this episode at twimlai.com/talk/328.

]]>
Today we’re joined by Dave Castillo, Managing Vice President for ML at Capital One and head of their Center for Machine Learning. We caught up with Dave at re:Invent to discuss the aforementioned Center for Machine Learning, and what has changed since our last discussion with Capital One, which you can find at twimlai.com/talk/147. In our conversation, we explore:

  • Capital One’s transition from “lab-based” machine learning to “enterprise-wide” adoption and support of ML.
  • Surprising machine learning use cases like granting employee access privileges via an automated system.
  • Their current platform ecosystem, including their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and more. 

Check out the complete show notes for this episode at twimlai.com/talk/328.

]]>
33:26 clean podcast,science,center,technology,tech,for,data,enterprise,intelligence,one,scientists,dave,learning,artificial,capital,machine,ai,platform,scale,ecosystem,engineer,castillo,ml,twiml Today we’re joined by Dave Castillo, Managing VP for ML at Capital One and head of their Center for Machine Learning. In our conversation, we explore Capital One’s transition from “lab-based” ML to enterprise-wide adoption and support of ML, surprising ML use cases, their current platform ecosystem, their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and much more. 328 full Sam Charrington
Helping Fish Farmers Feed the World with Deep Learning w/ Bryton Shang - #327 Helping Fish Farmers Feed the World with Deep Learning w/ Bryton Shang Tue, 17 Dec 2019 17:00:07 +0000 Today we’re joined by Bryton Shang, Founder & CEO at Aquabyte. We caught up with Bryton after his talk at re:Invent’s ML Summit to discuss:

  • Aquabyte, a company focused on the application of computer vision to fish farming.
  • How Bryton identified the various problems associated with mass fish farming and how he eventually moved to Norway to develop the solution.
  • The challenges with developing machine learning solutions that can measure the height and weight of fish.
  • How they use computer vision algorithms to assess issues like sea lice, which can be up to 25% of the cost associated with running farms.
  • Cool new features currently in the works like facial recognition for fish!

The complete show notes for this episode can be found at twimlai.com/talk/327.

]]>
Today we’re joined by Bryton Shang, Founder & CEO at Aquabyte. We caught up with Bryton after his talk at re:Invent’s ML Summit to discuss:

  • Aquabyte, a company focused on the application of computer vision to fish farming.
  • How Bryton identified the various problems associated with mass fish farming and how he eventually moved to Norway to develop the solution.
  • The challenges with developing machine learning solutions that can measure the height and weight of fish.
  • How they use computer vision algorithms to assess issues like sea lice, which can be up to 25% of the cost associated with running farms.
  • Cool new features currently in the works like facial recognition for fish!

The complete show notes for this episode can be found at twimlai.com/talk/327.

]]>
38:06 clean podcast,science,fish,network,technology,tech,data,farming,deep,intelligence,vision,learning,computer,cnn,artificial,neural,machine,ai,norway,ml,aws,reinvent,shang,bryton,twiml,aquabyte Today we’re joined by Bryton Shang, Founder & CEO at Aquabyte, a company focused on the application of computer vision to various fish farming use cases. In our conversation, we discuss how Bryton identified the various problems associated with mass fish farming, challenges developing computer algorithms that can measure the height and weight of fish, assess issues like sea lice, and how they’re developing interesting new features such as facial recognition for fish! 327 full Sam Charrington
Metaflow, a Human-Centric Framework for Data Science with Ville Tuulos - #326 Metaflow, a Human-Centric Framework for Data Science with Ville Tuulos Fri, 13 Dec 2019 20:56:49 +0000 Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including:

  • The problem Metaflow is trying to solve
  • Why it was important for Netflix to open-source Metaflow
  • Core Features
  • The user experience accessing and managing data, experimentation, training and model development
  • The various supported tools and libraries


If you’re interested in checking out a Metaflow democast with Ville, reach out at twimlai.com/contact!

]]>
Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including:

  • The problem Metaflow is trying to solve
  • Why it was important for Netflix to open-source Metaflow
  • Core Features
  • The user experience accessing and managing data, experimentation, training and model development
  • The various supported tools and libraries

If you’re interested in checking out a Metaflow democast with Ville, reach out at twimlai.com/contact!

]]>
56:17 clean podcast,science,web,technology,recommendations,tech,cloud,data,local,intelligence,learning,sql,services,artificial,infrastructure,framework,machine,ai,amazon,netflix,versioning,engineer,ville,ml,aws,papermill,twiml,sagemaker,metaflow,tuulos Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including features, user experience, tooling, supported libraries, and much more. If you’re interested in checking out a Metaflow democast with Ville, reach out at twimlai.com/contact! 326 full Sam Charrington
Single Headed Attention RNN: Stop Thinking With Your Head with Stephen Merity - #325 Single Headed Attention RNN: Stop Thinking With Your Head with Stephen Merity Thu, 12 Dec 2019 19:04:00 +0000 Today we’re joined by Stephen Merity, startup founder and independent researcher, with a focus on NLP and Deep Learning. In our conversation, we discuss:

  • Stephen’s newest paper, Single Headed Attention RNN: Stop Thinking With Your Head.
  • His motivations behind writing the paper, namely that NLP research has recently been dominated by transformer models, which are not the most accessible or trainable for broad use.
  • The architecture of transformer models.
  • How Stephen decided to use SHA-RNNs for this research.
  • How Stephen built and trained the model, for which the code is available on Github.
  • His approach to benchmarking this project.
  • Stephen’s goals for this research in the broader NLP research community. 

The complete show notes for this episode can be found at twimlai.com/talk/325. There you’ll find links to both the paper referenced in this interview, and the code. Enjoy!

]]>
Today we’re joined by Stephen Merity, startup founder and independent researcher, with a focus on NLP and Deep Learning. In our conversation, we discuss:

  • Stephen’s newest paper, Single Headed Attention RNN: Stop Thinking With Your Head.
  • His motivations behind writing the paper, namely that NLP research has recently been dominated by transformer models, which are not the most accessible or trainable for broad use.
  • The architecture of transformer models.
  • How Stephen decided to use SHA-RNNs for this research.
  • How Stephen built and trained the model, for which the code is available on Github.
  • His approach to benchmarking this project.
  • Stephen’s goals for this research in the broader NLP research community. 

The complete show notes for this episode can be found at twimlai.com/talk/325. There you’ll find links to both the paper referenced in this interview, and the code. Enjoy!

]]>
59:04 clean podcast,science,network,technology,tech,model,data,language,deep,intelligence,stephen,modeling,learning,artificial,neural,machine,ai,transformer,nlp,recurrent,ml,lstm,rnn,twiml,merity,sharnn Today we’re joined by Stephen Merity, an independent researcher focused on NLP and Deep Learning. In our conversation, we discuss Stephen’s latest paper, Single Headed Attention RNN: Stop Thinking With Your Head, detailing his primary motivations behind the paper, the decision to use SHA-RNNs for this research, how he built and trained the model, his approach to benchmarking, and finally his goals for the research in the broader research community. 325 full Sam Charrington
Automated Model Tuning with SigOpt - #324 Automated Model Tuning with SigOpt Mon, 09 Dec 2019 20:43:21 +0000 In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo!

This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324.

]]>
In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo!

This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324.

]]>
46:10 clean podcast,science,technology,tech,data,intelligence,scott,learning,clark,artificial,machine,ai,platform,optimizing,ml,twiml,twimlcon,sigopt,hyperparameters In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo! This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324.  324 full Sam Charrington
Automated Machine Learning with Erez Barak - #323 Automated Machine Learning with Erez Barak Fri, 06 Dec 2019 16:32:25 +0000 In the final episode of our Azure ML series, we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, we discuss:

  • Erez’s AutoML philosophy, including how he defines “true AutoML” and his take on the AutoML space, its role, and its importance.
  • The application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into three key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters.
  • Post-deployment AutoML use cases and other areas under the AutoML umbrella that are currently generating excitement.

Check out the complete show notes at twimlai.com/talk/323!

]]>
In the final episode of our Azure ML series, we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, we discuss:

  • Erez’s AutoML philosophy, including how he defines “true AutoML” and his take on the AutoML space, its role, and its importance.
  • The application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into three key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters.
  • Post-deployment AutoML use cases and other areas under the AutoML umbrella that are currently generating excitement.

Check out the complete show notes at twimlai.com/talk/323!

]]>
43:25 clean podcast,science,technology,tech,model,data,microsoft,intelligence,learning,feature,engineering,scientist,artificial,selection,machine,ai,bert,barak,optimization,deployment,ml,erez,twiml,automl,hyperparameter,tuning Today we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, Erez gives us a full breakdown of his AutoML philosophy, and his take on the AutoML space, its role, and its importance. We also discuss the application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into three key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters. We also discuss post-deployment AutoML use cases, and much more! 323 full Sam Charrington
Responsible AI in Practice with Sarah Bird - #322 Responsible AI in Practice with Sarah Bird Wed, 04 Dec 2019 16:10:39 +0000 Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. In our conversation, we discuss:

  • Sarah’s work in machine learning systems, focused on bringing machine learning research into production through Azure ML with an emphasis on responsible AI.
  • A set of newly released tools focused on responsible machine learning, the Azure Machine Learning 'Machine Learning Interpretability Toolkit.’
  • Moving from “black-box” models to “glass-box” models.
  • Sarah’s recent work in differential privacy, including its risks and benefits.
  • Her work in the broader ML community, including being a founding member of the MLSys conference and workshops.

Check out the complete show notes at twimlai.com/talk/322.

]]>
Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. In our conversation, we discuss:

  • Sarah’s work in machine learning systems, focused on bringing machine learning research into production through Azure ML with an emphasis on responsible AI.
  • A set of newly released tools focused on responsible machine learning, the Azure Machine Learning 'Machine Learning Interpretability Toolkit.’
  • Moving from “black-box” models to “glass-box” models.
  • Sarah’s recent work in differential privacy, including its risks and benefits.
  • Her work in the broader ML community, including being a founding member of the MLSys conference and workshops.

Check out the complete show notes at twimlai.com/talk/322.

]]>
38:41 clean podcast,science,black,box,technology,tech,data,systems,microsoft,intelligence,models,learning,bird,azure,artificial,privacy,glass,sarah,machine,ai,ignite,toolkit,responsible,ml,differential,twiml,interpretability Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. At Ignite, Microsoft released new tools focused on responsible machine learning, which fall under the umbrella of the Azure ML 'Machine Learning Interpretability Toolkit.’ In our conversation, Sarah walks us through this toolkit, detailing use cases and the user experience. We also discuss her work in differential privacy, and in the broader ML community, in particular, the MLSys conference. 322 full Sam Charrington
Enterprise Readiness, MLOps and Lifecycle Management with Jordan Edwards - #321 Enterprise Readiness, MLOps and Lifecycle Management with Jordan Edwards Mon, 02 Dec 2019 16:24:31 +0000 Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details:

  • How Azure ML accelerates model lifecycle management with MLOps, enabling data scientists to collaborate with IT teams to increase the pace of model development and deployment.
  • Problems associated with generalizing ML at scale at Microsoft, and how those problems are prioritized.
  • What MLOps is, the role testing plays in an MLOps environment, and his experiences working with customers to implement these tests.
  • The “four phases” along the journey of customer implementation of MLOps, how companies should look at hiring ML Engineers vs DevOps Engineers, and other aspects of managing model life cycles that Jordan finds important for us to think about. 

The complete show notes can be found at twimlai.com/talk/321. 

]]>
Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details:

  • How Azure ML accelerates model lifecycle management with MLOps, enabling data scientists to collaborate with IT teams to increase the pace of model development and deployment.
  • Problems associated with generalizing ML at scale at Microsoft, and how those problems are prioritized.
  • What MLOps is, the role testing plays in an MLOps environment, and his experiences working with customers to implement these tests.
  • The “four phases” along the journey of customer implementation of MLOps, how companies should look at hiring ML Engineers vs DevOps Engineers, and other aspects of managing model life cycles that Jordan finds important for us to think about. 

The complete show notes can be found at twimlai.com/talk/321. 

]]>
39:43 clean podcast,science,technology,tech,data,microsoft,intelligence,learning,scientist,azure,workflow,jordan,artificial,infrastructure,pipeline,machine,lifecycle,edwards,ai,platform,scale,engineer,platforms,ml,devops,twiml,mlops Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details how Azure ML accelerates model lifecycle management with MLOps, which enables data scientists to collaborate with IT teams to increase the pace of model development and deployment. We discuss various problems associated with generalizing ML at scale at Microsoft, what exactly MLOps is, the “four phases” along the journey of customer implementation of MLOps, and much more. 321 full Sam Charrington
DevOps for ML with Dotscience - #320 DevOps for ML with Dotscience Tue, 26 Nov 2019 00:44:04 +0000 Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML.

Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support of TWIML.  

Head to https://twimlai.com/democast/dotscience to watch the full democast!

]]>
Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML.

Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support of TWIML.  

Head to https://twimlai.com/democast/dotscience to watch the full democast!

]]>
47:04 clean podcast,technology,tech,in,this,week,intelligence,learning,artificial,machine,ai,platforms,ml,twiml,twimlcon,twimlai,dotscience Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML. Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support of TWIML.   Head to https://twimlai.com/democast/dotscience to watch the full democast! 320 full Sam Charrington
Building an Autonomous Knowledge Graph with Mike Tung - #319 Building an Autonomous Knowledge Graph with Mike Tung Thu, 21 Nov 2019 20:27:15 +0000 Today we’re joined by Mike Tung, Founder, and CEO of Diffbot. In our conversation, we discuss: 

  • Their various tools, including their Knowledge Graph, Extraction API, and CrawlBot.
  • How Knowledge Graph was inspired by ImageNet, how it was built, and how it differs from other, more mainstream knowledge graphs like Google Search and MSFT Bing.
  • How they balance being a research company that is also commercially viable.
  • The developer experience with their tools, and challenges faced.

The complete show notes can be found at twimlai.com/talk/319.

]]>
Today we’re joined by Mike Tung, Founder, and CEO of Diffbot. In our conversation, we discuss: 

  • Their various tools, including their Knowledge Graph, Extraction API, and CrawlBot.
  • How Knowledge Graph was inspired by ImageNet, how it was built, and how it differs from other, more mainstream knowledge graphs like Google Search and MSFT Bing.
  • How they balance being a research company that is also commercially viable.
  • The developer experience with their tools, and challenges faced.

The complete show notes can be found at twimlai.com/talk/319.

]]>
44:47 clean podcast,science,mike,technology,tech,google,data,microsoft,intelligence,learning,li,knowledge,search,stanford,bing,artificial,machine,fei,ai,graph,ml,tung,twiml,imagenet,diffbot Today we’re joined by Mike Tung, Founder, and CEO of Diffbot. In our conversation, we discuss Diffbot’s Knowledge Graph, including how it differs from more mainstream use cases like Google Search and MSFT Bing. We also discuss the developer experience with the knowledge graph and other tools, like Extraction API and Crawlbot, challenges like knowledge fusion, balancing being a research company that is also commercially viable, and how they approach their role in the research community. 319 full Sam Charrington
The Next Generation of Self-Driving Engineers with Aaron Ma - Talk #318 The Next Generation of Self-Driving Engineers with Aaron Ma Mon, 18 Nov 2019 21:13:18 +0000 Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity nano-degrees. In our conversation, we discuss:

  • Aaron’s research interests in reinforcement learning and self-driving cars.
  • His experiences participating in over 35 Kaggle competitions.
  • How he balances his passion for machine learning with things like chores and homework.

This was a really fun interview! 

The complete show notes for this episode can be found at twimlai.com/talk/318.

]]>
Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity nano-degrees. In our conversation, we discuss:

  • Aaron’s research interests in reinforcement learning and self-driving cars
  • His experiences participating in over 35 Kaggle competitions
  • How he balances his passion for machine learning with things like chores and homework.

This was a really fun interview! 

The complete show notes for this episode can be found at twimlai.com/talk/318.

]]>
47:53 clean podcast,science,technology,tech,data,intelligence,learning,flying,car,artificial,machine,ai,reinforcement,vehicle,ml,autonomous,coursera,selfdriving,udacity,twiml,kaggle Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity nano-degrees. In our conversation, we discuss Aaron’s research interests in reinforcement learning and self-driving cars, his journey from programmer to ML engineer, his experiences participating in kaggle competitions, and how he balances his passion for ML with day-to-day life. 318 full Sam Charrington
Spiking Neural Networks: A Primer with Terrence Sejnowski - #317 Spiking Neural Networks: A Primer with Dr. Terrence Sejnowski Thu, 14 Nov 2019 17:46:31 +0000 On today’s episode, we’re joined by Terrence Sejnowski, Francis Crick Chair, head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies and faculty member at UC San Diego. In our conversation with Terry, we discuss:

  • His role as a founding researcher in the field of computational neuroscience, and as a founder of the annual Telluride Neuromorphic Cognition Engineering Workshop
  • The world of spiking neural networks and brain architecture
  • The relationship of neuroscience to machine learning, and ways to make neural networks more efficient through spiking
  • The hardware used in this field, the major research problems currently being undertaken, and the future of spiking networks

Check out the complete show notes at twimlai.com/talk/317.

 

]]>
On today’s episode, we’re joined by Terrence Sejnowski, Francis Crick Chair, head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies and faculty member at UC San Diego. In our conversation with Terry, we discuss:

  • His role as a founding researcher in the field of computational neuroscience, and as a founder of the annual Telluride Neuromorphic Cognition Engineering Workshop
  • The world of spiking neural networks and brain architecture
  • The relationship of neuroscience to machine learning, and ways to make neural networks more efficient through spiking
  • The hardware used in this field, the major research problems currently being undertaken, and the future of spiking networks

Check out the complete show notes at twimlai.com/talk/317.

 

]]>
49:34 clean podcast,science,technology,networks,tech,data,intelligence,neuroscience,learning,primer,artificial,neural,machine,terrence,ai,ml,twiml,spiking,sejnowski On today’s episode, we’re joined by Terrence Sejnowski, to discuss the ins and outs of spiking neural networks, including brain architecture, the relationship between neuroscience and machine learning, and ways to make NN’s more efficient through spiking. Terry also gives us some insight into hardware used in this field, characterizes the major research problems currently being undertaken, and the future of spiking networks. 317 full Sam Charrington
Bridging the Patient-Physician Gap with ML and Expert Systems w/ Xavier Amatriain - #316 Bridging the Patient-Physician Gap with ML and Expert Systems w/ Xavier Amatriain Mon, 11 Nov 2019 22:05:16 +0000 Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai. In our conversation, we discuss:

  • Curai’s goal of providing the world’s best primary care to patients via their smartphones, and how ML & AI will bring down costs while making healthcare accessible and scalable
  • The shortcomings of traditional primary care, and how Curai fills that role
  • Some of the unique challenges his team faces in applying this use case in the healthcare space
  • Their use of expert systems, and how they develop and train their models with synthetic data through noise injection
  • How NLP projects like BERT, Transformer, and GPT-2 fit into what Curai is building

Check out the complete show notes page at twimlai.com/talk/316

]]>
Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai. In our conversation, we discuss:

  • Curai’s goal of providing the world’s best primary care to patients via their smartphones, and how ML & AI will bring down costs while making healthcare accessible and scalable
  • The shortcomings of traditional primary care, and how Curai fills that role
  • Some of the unique challenges his team faces in applying this use case in the healthcare space
  • Their use of expert systems, and how they develop and train their models with synthetic data through noise injection
  • How NLP projects like BERT, Transformer, and GPT-2 fit into what Curai is building

Check out the complete show notes page at twimlai.com/talk/316

]]>
39:01 clean podcast,science,technology,tech,data,systems,intelligence,learning,healthcare,expert,artificial,machine,ai,bert,xavier,ml,twiml,gpt2,curai,amatriain Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai, whose goal is to make healthcare accessible and scalable while bringing down costs. In our conversation, we touch on the shortcomings of traditional primary care, and how Curai fills that role, and some of the unique challenges his team faces in applying ML in the healthcare space. We also discuss the use of expert systems, how they train them, and how NLP projects like BERT and GPT-2 fit into what they’re building. 316 full Sam Charrington
What Does it Mean for a Machine to "Understand"? with Thomas Dietterich - #315 What Does it Mean for a Machine to "Understand"? with Thomas Dietterich Thu, 07 Nov 2019 19:50:53 +0000 Today we’re joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. We had the pleasure of discussing Tom’s recent blog post, “What does it mean for a machine to ‘understand’?”, in which he discusses:

  • Tom’s position on what qualifies as machine “understanding”, including a few examples of systems that he believes exhibit understanding.
  • The role of deep learning in achieving artificial general intelligence.
  • The current “Hype Engine” that exists around AI research, and so much more.  

Make sure you check out the show notes at twimlai.com/talk/315, where you’ll find links to Tom’s blog post, as well as a ton of other references. 

]]>
Today we’re joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. We had the pleasure of discussing Tom’s recent blog post, “What does it mean for a machine to ‘understand’?”, in which he discusses:

  • Tom’s position on what qualifies as machine “understanding”, including a few examples of systems that he believes exhibit understanding.
  • The role of deep learning in achieving artificial general intelligence.
  • The current “Hype Engine” that exists around AI research, and so much more.  

Make sure you check out the show notes at twimlai.com/talk/315, where you’ll find links to Tom’s blog post, as well as a ton of other references. 

]]>
38:09 clean podcast,science,technology,tech,data,deep,intelligence,greg,tom,learning,university,general,state,oregon,thomas,artificial,lipton,machine,joy,ai,zach,understanding,agi,ml,brockman,dietterich,twiml,timnit,gebru,buolamwini Today we have the pleasure of being joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. Tom recently wrote a blog post titled "What does it mean for a machine to “understand”, and in our conversation, he goes into great detail on his thoughts. We cover a lot of ground, including Tom’s position in the debate, his thoughts on the role of systems like deep learning in potentially getting us to AGI, the “hype engine” around AI advancements, and so much more. 315 full Sam Charrington
Scaling TensorFlow at LinkedIn with Jonathan Hung - #314 Scaling TensorFlow at LinkedIn with Jonathan Hung Mon, 04 Nov 2019 19:46:11 +0000 Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn, who we caught up with at TensorFlow World last week. In our conversation, we discuss: 

  • Jonathan’s presentation at the event, which focused on LinkedIn’s efforts scaling TensorFlow
  • Jonathan’s work as part of the Hadoop infrastructure team, including experimenting on Hadoop with various frameworks, and their motivation for using TensorFlow on their pre-existing Hadoop cluster infrastructure
  • TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop, and its relationship with Pro-ML, LinkedIn’s internal AI platform, which we’ve discussed on earlier episodes of the podcast
  • How far LinkedIn’s Hadoop infrastructure has come since 2017, and their foray into using Kubernetes for research

The complete show notes can be found at twimlai.com/talk/314.

]]>
Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn, who we caught up with at TensorFlow World last week. In our conversation, we discuss: 

  • Jonathan’s presentation at the event, which focused on LinkedIn’s efforts scaling TensorFlow
  • Jonathan’s work as part of the Hadoop infrastructure team, including experimenting on Hadoop with various frameworks, and their motivation for using TensorFlow on their pre-existing Hadoop cluster infrastructure
  • TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop, and its relationship with Pro-ML, LinkedIn’s internal AI platform, which we’ve discussed on earlier episodes of the podcast
  • How far LinkedIn’s Hadoop infrastructure has come since 2017, and their foray into using Kubernetes for research

The complete show notes can be found at twimlai.com/talk/314.

]]>
35:07 clean podcast,science,technology,linkedin,tech,on,data,world,intelligence,learning,hung,jonathan,tony,artificial,machine,ai,yard,hadoop,ml,tensorflow,twiml,proml Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn. Jonathan presented at TensorFlow World last week; his talk was titled Scaling TensorFlow at LinkedIn. In our conversation, we discuss their motivation for using TensorFlow on their pre-existing Hadoop cluster infrastructure, TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop, and its relationship with Pro-ML, LinkedIn’s internal AI platform, and their foray into using Kubernetes for research.
Machine Learning at GitHub with Omoju Miller - #313 Machine Learning at GitHub with Omoju Miller Thu, 31 Oct 2019 19:43:46 +0000 Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss:

  • Her dissertation, Hiphopathy, A Socio-Curricular Study of Introductory Computer Science
  • Her work as an inaugural member of the GitHub machine learning team
  • Her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with TensorFlow.”

The complete show notes for this episode can be found at twimlai.com/talk/313. 

]]>
Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss:

  • Her dissertation, Hiphopathy, A Socio-Curricular Study of Introductory Computer Science
  • Her work as an inaugural member of the GitHub machine learning team
  • Her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with TensorFlow.”

The complete show notes for this episode can be found at twimlai.com/talk/313. 

]]>
43:41 clean podcast,science,miller,technology,tech,data,world,intelligence,learning,marketplace,artificial,machine,repo,lifecycle,cycles,ai,ml,github,tfw,tensorflow,twiml,omoju Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss: • Her dissertation, Hiphopathy, A Socio-Curricular Study of Introductory Computer Science,  • Her work as an inaugural member of the Github machine learning team • Her two presentations at Tensorflow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with Tensorflow.” 313 full Sam Charrington
Using AI to Diagnose and Treat Neurological Disorders with Archana Venkataraman - #312 Using AI to Diagnose and Treat Neurological Disorders with Archana Venkataraman Mon, 28 Oct 2019 21:43:31 +0000 Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University, and an MIT 35 Innovators Under 35 recipient.

Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. In our conversation, we explore her lab’s work in applying machine learning to these problems, including biomarker discovery, disorder severity prediction, as well as some of the various techniques and frameworks used.

The complete show notes for this episode can be found at twimlai.com/talk/312.

]]>
Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University, and an MIT 35 Innovators Under 35 recipient.

Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. In our conversation, we explore her lab’s work in applying machine learning to these problems, including biomarker discovery, disorder severity prediction, as well as some of the various techniques and frameworks used.

The complete show notes for this episode can be found at twimlai.com/talk/312.

]]>
47:48 clean podcast,science,technology,tech,data,intelligence,neuroscience,learning,neurological,disorder,hopkins,artificial,johns,machine,psychiatric,ai,mit,ml,archana,twiml,venkataraman Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University. Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. We explore her work applying machine learning to these problems, including biomarker discovery, disorder severity prediction, and more. 312 full Sam Charrington
Deep Learning for Earthquake Aftershock Patterns with Phoebe DeVries & Brendan Meade - #311 Deep Learning for Earthquake Aftershock Patterns with Phoebe DeVries & Brendan Meade Fri, 25 Oct 2019 17:35:36 +0000 Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard and assistant faculty at the University of Connecticut, and Brendan Meade, Professor of Earth and Planetary Sciences and affiliate faculty in computer science at Harvard. In this episode, we discuss:

  • Phoebe and Brendan’s work, which focuses on discovering as much as possible about earthquakes before they happen, and on predicting where future movement will occur by measuring how the earth’s surface moves
  • Their recent paper, ‘Deep learning of aftershock patterns following large earthquakes’
  • The preliminary steps that guided them to using machine learning in the earth sciences
  • Their current research, which involves calculating stress changes in the crust and upper mantle after a large earthquake and using a neural network to map those changes to predict aftershock locations
  • The complex systems that encompass earth science studies, including the approaches, challenges, surprises, and results that come with incorporating machine learning models and data sets into a new field of study

The complete show notes for this episode can be found at twimlai.com/talk/311.

]]>
Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard and assistant faculty at the University of Connecticut, and Brendan Meade, Professor of Earth and Planetary Sciences and affiliate faculty in computer science at Harvard. In this episode, we discuss:

  • Phoebe and Brendan’s work, which focuses on discovering as much as possible about earthquakes before they happen, and on predicting where future movement will occur by measuring how the earth’s surface moves
  • Their recent paper, ‘Deep learning of aftershock patterns following large earthquakes’
  • The preliminary steps that guided them to using machine learning in the earth sciences
  • Their current research, which involves calculating stress changes in the crust and upper mantle after a large earthquake and using a neural network to map those changes to predict aftershock locations
  • The complex systems that encompass earth science studies, including the approaches, challenges, surprises, and results that come with incorporating machine learning models and data sets into a new field of study

The complete show notes for this episode can be found at twimlai.com/talk/311.

]]>
35:44 clean podcast,science,network,technology,tech,data,intelligence,devries,harvard,learning,artificial,earthquake,meade,neural,machine,brendan,ai,phoebe,ml,aftershock,twiml Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard and Brendan Meade, Professor of Earth and Planetary Sciences at Harvard. Phoebe and Brendan’s work is focused on discovering as much as possible about earthquakes before they happen, and by measuring how the earth’s surface moves, predicting future movement location, as seen in their paper: ‘Deep learning of aftershock patterns following large earthquakes'. 311 full Sam Charrington
Live from TWIMLcon! Operationalizing Responsible AI - #310 Live from TWIMLcon! Operationalizing Responsible AI Tue, 22 Oct 2019 13:59:48 +0000 An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. The topic was taken on by an impressive panel of speakers: Rachel Thomas, Director of the Center for Applied Data Ethics at the USF Data Institute, Guillaume Saint-Jacques, Head of Computational Science at LinkedIn, and Parinaz Sobahni, Director of Machine Learning at Georgian Partners, moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat. This episode covers:

  • The basics of operationalizing AI ethics in a range of organizations, and insight into an array of tools, approaches, and methods that teams have found useful
  • The biggest concerns, like focusing more on harm as opposed to algorithmic bias, and encouraging specific responsibility for systems
  • Why educating the general public on the realities and misconceptions of probabilistic methods, and putting preventative guardrails in place, has become imperative for any organization
  • The long-term benefits of ethical decision-making, and the differing challenges faced by established companies versus startups
  • Questions from the TWIMLcon audience, some common examples of power dynamics in AI ethics, and what we as a community can do to push the needle in the very powerful world of responsible AI

The complete show notes can be found at twimlai.com/talk/310

]]>
An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. The topic was taken on by an impressive panel of speakers: Rachel Thomas, Director of the Center for Applied Data Ethics at the USF Data Institute, Guillaume Saint-Jacques, Head of Computational Science at LinkedIn, and Parinaz Sobahni, Director of Machine Learning at Georgian Partners, moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat. This episode covers:

  • The basics of operationalizing AI ethics in a range of organizations, and insight into an array of tools, approaches, and methods that teams have found useful
  • The biggest concerns, like focusing more on harm as opposed to algorithmic bias, and encouraging specific responsibility for systems
  • Why educating the general public on the realities and misconceptions of probabilistic methods, and putting preventative guardrails in place, has become imperative for any organization
  • The long-term benefits of ethical decision-making, and the differing challenges faced by established companies versus startups
  • Questions from the TWIMLcon audience, some common examples of power dynamics in AI ethics, and what we as a community can do to push the needle in the very powerful world of responsible AI

The complete show notes can be found at twimlai.com/talk/310

]]>
30:33 clean podcast,science,technology,tech,data,intelligence,learning,safety,ethics,artificial,fairness,trust,machine,bias,ai,diversity,responsible,ml,twiml An often forgotten about topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. This important topic was combined with an impressive panel of speakers, including: Rachel Thomas, Director, Center for Applied Data Ethics at the USF Data Institute, Guillaume Saint-Jacques, Head of Computational Science at LinkedIn, and Parinaz Sobahni, Director of Machine Learning at Georgian Partners, moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat. 310 full Sam Charrington
Live from TWIMLcon! Scaling ML in the Traditional Enterprise - #309 Live from TWIMLcon! Scaling ML in the Traditional Enterprise Fri, 18 Oct 2019 14:58:20 +0000 In this episode from a stellar TWIMLcon panel, the state and future of larger, more established brands are analyzed and discussed. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss & Co., and Jürgen Weichenberger, Data Science Senior Principal & Global AI Lead at Accenture, moderated by Josh Bloom, Professor at UC Berkeley. In this episode, we discuss:

  • Why, for an ML/AI initiative to be successful, a conscious and noticeable shift is now required in how things are managed, along with educating cross-functional teams in data science terms and methodologies
  • Why it can be tempting and exciting to constantly try out the latest technologies, but brand consistency and sustainability are imperative to success
  • How the real business value - the money - can be found by putting your big ML/AI goals and projects in the core competencies of the company
  • Whether traditional enterprises are fundamentally changing their business through ML/AI, and if so, why
  • Real-world examples and thought-provoking ideas for scaling ML/AI in the traditional enterprise

The complete show notes can be found at twimlai.com/talk/309.

]]>
In this episode from a stellar TWIMLcon panel, the state and future of larger, more established brands are analyzed and discussed. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss & Co., and Jürgen Weichenberger, Data Science Senior Principal & Global AI Lead at Accenture, moderated by Josh Bloom, Professor at UC Berkeley. In this episode, we discuss:

  • Why, for an ML/AI initiative to be successful, a conscious and noticeable shift is now required in how things are managed, along with educating cross-functional teams in data science terms and methodologies
  • Why it can be tempting and exciting to constantly try out the latest technologies, but brand consistency and sustainability are imperative to success
  • How the real business value - the money - can be found by putting your big ML/AI goals and projects in the core competencies of the company
  • Whether traditional enterprises are fundamentally changing their business through ML/AI, and if so, why
  • Real-world examples and thought-provoking ideas for scaling ML/AI in the traditional enterprise

The complete show notes can be found at twimlai.com/talk/309.

]]>
33:37 clean podcast,science,build,technology,tech,data,enterprise,intelligence,learning,sustainability,artificial,privacy,v,machine,ai,buy,traditional,ml,twiml Machine learning and AI is finding a place in the traditional enterprise - although the path to get there is different. In this episode, our panel analyzes the state and future of larger, more established brands. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss & Co., and Jürgen Weichenberger, Data Science Senior Principal & Global AI Lead at Accenture, moderated by Josh Bloom, Professor at UC Berkeley. 309 full Sam Charrington
Live from TWIMLcon! Culture & Organization for Effective ML at Scale (Panel) - #308 Live from TWIMLcon! Culture & Organization for Effective ML at Scale (Panel) Tue, 15 Oct 2019 18:51:40 +0000 TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges of building and scaling machine learning platforms. In this episode, hear from a diverse set of panelists, including Pardis Noorzad, Data Science Manager at Twitter, Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix, and Jennifer Prendki, Founder & CEO at Alectio, moderated by Maribel Lopez, Founder & Principal Analyst at Lopez Research. Topics include:

  • How to approach changing the way companies think about machine learning
  • Engaging different groups to work together effectively - e.g. the C-suite, marketing, sales, engineering, etc. 
  • The importance of clear communication about ML lifecycle management
  • How full stack roles can provide immense value
  • Tips and tricks to work faster, more efficiently, and create an org-wide culture that holds machine learning as a valued priority

The complete show notes can be found at twimlai.com/talk/308.

]]>
TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges of building and scaling machine learning platforms. In this episode, hear from a diverse set of panelists, including Pardis Noorzad, Data Science Manager at Twitter, Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix, and Jennifer Prendki, Founder & CEO at Alectio, moderated by Maribel Lopez, Founder & Principal Analyst at Lopez Research. Topics include:

  • How to approach changing the way companies think about machine learning
  • Engaging different groups to work together effectively - e.g. the C-suite, marketing, sales, engineering, etc. 
  • The importance of clear communication about ML lifecycle management
  • How full stack roles can provide immense value
  • Tips and tricks to work faster, more efficiently, and create an org-wide culture that holds machine learning as a valued priority

The complete show notes can be found at twimlai.com/talk/308.

]]>
27:59 clean podcast,science,technology,tech,data,culture,management,intelligence,learning,product,collaboration,artificial,machine,lifecycle,ai,ml,twiml TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges to building and scaling machine learning platforms. In this episode, hear about changing the way companies think about machine learning from a diverse set of panelists including Pardis Noorzad, Data Science Manager at Twitter, Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix, and Jennifer Prendki, Founder & CEO at Alectio, moderated by Maribel Lopez, Founder & Principal Analyst at Lopez Research. 308 full Sam Charrington
Live from TWIMLcon! Use-Case Driven ML Platforms with Franziska Bell - #307 Live from TWIMLcon! Use-Case Driven ML Platforms with Franziska Bell Thu, 10 Oct 2019 17:47:43 +0000 Franziska Bell, Ph.D., is the Director of Data Science Platforms at Uber, and joined Sam on stage at TWIMLcon last week to discuss all things platform at Uber. With the goal of providing cutting edge data science company-wide at the push of a button, Fran has developed a portfolio of platforms, ranging from forecasting to anomaly detection to conversational AI. In this episode, we discuss:

  • How, through strategic use cases, Fran’s team of data scientists works closely with teams across the organization at every stage to solve problems and build infrastructure
  • The evolving working relationship between her team and Michelangelo (Uber’s ML platform), including the challenges and benefits that such a platform provides
  • Insight into Uber’s development methodology, and how the data science team is organized from start to finish to create a culture of learning and expertise that delivers fast results with reduced risk
  • Fran’s take on the future of ML platforms, and more!

Check out the complete show notes at twimlai.com/talk/307

]]>
Franziska Bell, Ph.D., is the Director of Data Science Platforms at Uber, and joined Sam on stage at TWIMLcon last week to discuss all things platform at Uber. With the goal of providing cutting edge data science company-wide at the push of a button, Fran has developed a portfolio of platforms, ranging from forecasting to anomaly detection to conversational AI. In this episode, we discuss:

  • How, through strategic use cases, Fran’s team of data scientists works closely with teams across the organization at every stage to solve problems and build infrastructure
  • The evolving working relationship between her team and Michelangelo (Uber’s ML platform), including the challenges and benefits that such a platform provides
  • Insight into Uber’s development methodology, and how the data science team is organized from start to finish to create a culture of learning and expertise that delivers fast results with reduced risk
  • Fran’s take on the future of ML platforms, and more!

Check out the complete show notes at twimlai.com/talk/307

]]>
32:14 clean podcast,science,technology,tech,data,intelligence,learning,use,cases,artificial,anomaly,forecasting,machine,ai,detection,conversational,michelangelo,platforms,ml,twiml Today we're joined by Franziska Bell, Ph.D., the Director of Data Science Platforms at Uber, who joined Sam on stage at TWIMLcon last week. Fran provided a look into the cutting edge data science available company-wide at the push of a button. Since joining Uber, Fran has developed a portfolio of platforms, ranging from forecasting to conversational AI. Hear how use cases can strategically guide platform development, the evolving relationship between her team and Michelangelo (Uber’s ML Platform) and much more!
Live from TWIMLcon! Operationalizing ML at Scale with Hussein Mehanna - #306 Live from TWIMLcon! Operationalizing ML at Scale with Hussein Mehanna Tue, 08 Oct 2019 15:56:33 +0000 The live interviews from TWIMLcon continue with Hussein Mehanna, Head of Machine Learning and Artificial Intelligence at Cruise. From his start at Facebook and then Google and now to Cruise, leading the trend of autonomous vehicles, Hussein has seen first hand what it takes to scale and sustain machine learning programs. In this episode, hear him and Sam discuss:

  • How, at Facebook, a few early wins in infrastructure building set the stage for scaling via faster algorithms, eventually allowing the entire organization to reach a new level of ML scale, with all workflows shareable, reusable, and discoverable through a search interface
  • Cruise’s unique focus on the interplay between applied research problems and the underlying tools and platforms
  • The immense capacity that the autonomous vehicle industry has to push ML and AI to new limits of depth and scale
  • The challenges (and joys) of working in the industry, and his insight into analyzing scale when innovation is happening in parallel with development
  • Hussein’s experiences at Facebook, Google, and Cruise, along with his thoughts on productivity being a "usability" vs "modeling" challenge, and his prediction for the future of ML platforms!

The complete show notes can be found at twimlai.com/talk/306.

]]>
The live interviews from TWIMLcon continue with Hussein Mehanna, Head of Machine Learning and Artificial Intelligence at Cruise. From his start at Facebook and then Google and now to Cruise, leading the trend of autonomous vehicles, Hussein has seen first hand what it takes to scale and sustain machine learning programs. In this episode, hear him and Sam discuss:

  • How, at Facebook, a few early wins in infrastructure building set the stage for scaling via faster algorithms, eventually allowing the entire organization to reach a new level of ML scale, with all workflows shareable, reusable, and discoverable through a search interface
  • Cruise’s unique focus on the interplay between applied research problems and the underlying tools and platforms
  • The immense capacity that the autonomous vehicle industry has to push ML and AI to new limits of depth and scale
  • The challenges (and joys) of working in the industry, and his insight into analyzing scale when innovation is happening in parallel with development
  • Hussein’s experiences at Facebook, Google, and Cruise, along with his thoughts on productivity being a "usability" vs "modeling" challenge, and his prediction for the future of ML platforms!

The complete show notes can be found at twimlai.com/talk/306.

]]>
33:39 clean podcast,science,technology,tech,data,intelligence,learning,cruise,artificial,infrastructure,vehicles,machine,ai,scaling,platforms,ml,autonomous,twiml,twimlcon The live interviews from TWIMLcon continue with Hussein Mehanna, Head of ML and AI at Cruise. From his start at Facebook to his current work at Cruise, Hussein has seen first hand what it takes to scale and sustain machine learning programs. Hear him discuss the challenges (and joys) of working in the industry, his insight into analyzing scale when innovation is happening in parallel with development, his experiences at Facebook, Google, and Cruise, and his predictions for the future of ML platforms! 306 full Sam Charrington
Live from TWIMLcon! Encoding Company Culture in Applied AI Systems - #305 Live from TWIMLcon! Encoding Company Culture in Applied AI Systems Fri, 04 Oct 2019 09:00:00 +0000 In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. In this episode, Deepak shares:

  • The incredible impact that standardizing processes and tools have on a company’s culture and overall productivity levels
  • Insight into the best way to increase ML ROI and how to sell ML programs to the C-Suite (two things that often go hand in hand)
  • The Pro-ML initiative for delivering machine learning systems at scale, specifically looking at aligning improvement of tooling and infrastructure with the pace of innovation and more!

Check out the complete show notes at twimlai.com/talk/305.

]]>
In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. In this episode, Deepak shares:

  • The incredible impact that standardizing processes and tools have on a company’s culture and overall productivity levels
  • Insight into the best way to increase ML ROI and how to sell ML programs to the C-Suite (two things that often go hand in hand)
  • The Pro-ML initiative for delivering machine learning systems at scale, specifically looking at aligning improvement of tooling and infrastructure with the pace of innovation and more!

Check out the complete show notes at twimlai.com/talk/305.

]]>
32:22 clean podcast,science,technology,linkedin,tech,first,data,intelligence,learning,experimental,process,artificial,machine,ai,industrial,velocity,deepak,ro,ml,agarwal,standardization,twiml,proml In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. Deepak shares the impact that standardizing processes and tools have on a company’s culture and productivity levels, and best practices to increasing ML ROI. He also details the Pro-ML initiative for delivering machine learning systems at scale, specifically looking at aligning improvement of tooling and infrastructure with the pace of innovation and more 305 full Sam Charrington
Live from TWIMLcon! Overcoming the Barriers to Deep Learning in Production with Andrew Ng - #304 Live from TWIMLcon! Overcoming the Barriers to Deep Learning in Production with Andrew Ng Tue, 01 Oct 2019 18:55:44 +0000 Earlier today, Andrew Ng joined us onstage at TWIMLcon to share some of his immense knowledge. As the Founder and CEO of Landing AI, Co-Chairman and Co-Founder of Coursera, and founding lead of Google Brain, Andrew is no stranger to what it takes for AI and machine learning to be successful.

In this episode, hear about:

  • The work that Landing AI is doing to help organizations adopt modern AI
  • His experiences in overcoming the challenges that large companies face
  • Insight into how enterprises can get the most value for their ML investment
  • The ‘essential complexity’ of software engineering and more! 

The complete show notes can be found at twimlai.com/talk/304.

]]>
Earlier today, Andrew Ng joined us onstage at TWIMLcon to share some of his immense knowledge. As the Founder and CEO of Landing AI, Co-Chairman and Co-Founder of Coursera, and founding lead of Google Brain, Andrew is no stranger to what it takes for AI and machine learning to be successful.

In this episode, hear about:

  • The work that Landing AI is doing to help organizations adopt modern AI
  • His experiences in overcoming the challenges that large companies face
  • Insight into how enterprises can get the most value for their ML investment
  • The ‘essential complexity’ of software engineering and more! 

The complete show notes can be found at twimlai.com/talk/304.

]]>
33:59 clean podcast,science,technology,tech,data,deep,intelligence,learning,andrew,artificial,machine,ai,landing,deployment,ng,ml,systematic,twiml Earlier today, Andrew Ng joined us onstage at TWIMLcon - as the Founder and CEO of Landing AI and founding lead of Google Brain, Andrew is no stranger to knowing what it takes for AI and machine learning to be successful. Hear about the work that Landing AI is doing to help organizations adopt modern AI, his experience in overcoming challenges for large companies, how enterprises can get the most value for their ML investment as well as addressing the ‘essential complexity’ of software engineering. 304 full Sam Charrington
The Future of Mixed-Autonomy Traffic with Alexandre Bayen - #303 The Future of Mixed-Autonomy Traffic with Alexandre Bayen Fri, 27 Sep 2019 18:29:28 +0000 Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. In this episode, we discuss Alex’s background in machine learning, his current research in mixed-autonomy traffic, and the idea of swarming in terms of the impact just a few self-driving cars can have on traffic mobility. At the AWS re:Invent conference last year, Alex presented on the future of mixed-autonomy traffic and the two major revolutions he predicts will take place in the next 10-15 years: model-free deep reinforcement learning techniques and end-to-end pixel learning. Looking ahead, Alex shares his take on the future of transportation systems and the potential for varying levels of automation in sub-communities.

The complete show notes can be found at twimlai.com/talk/303.

]]>
Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. In this episode, we discuss Alex’s background in machine learning, his current research in mixed-autonomy traffic, and the idea of swarming in terms of the impact just a few self-driving cars can have on traffic mobility. At the AWS re:Invent conference last year, Alex presented on the future of mixed-autonomy traffic and the two major revolutions he predicts will take place in the next 10-15 years: model-free deep reinforcement learning techniques and end-to-end pixel learning. Looking ahead, Alex shares his take on the future of transportation systems and the potential for varying levels of automation in sub-communities.

The complete show notes can be found at twimlai.com/talk/303.

]]>
43:44 clean podcast,science,technology,tech,data,traffic,deep,intelligence,learning,mobility,artificial,machine,ai,simulation,reinforcement,ml,swarming,twiml,mixedautonomy Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. Alex's current research is in mixed-autonomy traffic to understand how the growing automation in self-driving vehicles can be used to improve mobility and flow of traffic. At the AWS re:Invent conference last year, Alex presented on the future of mixed-autonomy traffic and the two major revolutions he predicts will take place in the next 10-15 years. 303 full Sam Charrington
Deep Reinforcement Learning for Logistics at Instadeep with Karim Beguir - #302 Deep Reinforcement Learning for Logistics at Instadeep with Karim Beguir Wed, 25 Sep 2019 12:54:54 +0000 Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company in Tunisia, Africa, focusing on building advanced decision-making systems for the enterprise. In this episode, we discuss where his and InstaDeep’s journey began in Tunisia (00:27), the challenges that enterprise companies are seeing in logistics that can be solved by deep learning and machine learning (05:43), how InstaDeep is applying DL and RL to real-world problems (09:45), and the data sets used to train these models and the application of transfer learning between similar data sets (13:00). Additionally, we go over ‘Rank Rewards’, a paper Karim published last year, in which adversarial self-play in two-player games delivered impressive results when used with reinforcement learning algorithms (22:40), the overall efficiency of RL for logistical problems (23:05), and details on the InstaDeep process (35:37).

The complete show notes for this episode can be found at twimlai.com/talk/302. 

]]>
Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company in Tunisia, Africa, focusing on building advanced decision-making systems for the enterprise. In this episode, we discuss where his and InstaDeep’s journey began in Tunisia (00:27), the challenges that enterprise companies are seeing in logistics that can be solved by deep learning and machine learning (05:43), how InstaDeep is applying DL and RL to real-world problems (09:45), and the data sets used to train these models and the application of transfer learning between similar data sets (13:00). Additionally, we go over ‘Rank Rewards’, a paper Karim published last year, in which adversarial self-play in two-player games delivered impressive results when used with reinforcement learning algorithms (22:40), the overall efficiency of RL for logistical problems (23:05), and details on the InstaDeep process (35:37).

The complete show notes for this episode can be found at twimlai.com/talk/302. 

]]>
43:45 clean podcast,science,technology,tech,data,enterprise,deep,intelligence,learning,artificial,decisionmaking,machine,ai,transfer,reinforcement,logistics,ml,twiml Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company focusing on building advanced decision-making systems for the enterprise. In this episode, we focus on logistical problems that require decision-making in complex environments using deep learning and reinforcement learning. Karim explains the InstaDeep process and mindset, where they get their data sets, the efficiency of RL, heuristic vs learnability approaches and how explainability fits into the model. 302 full Sam Charrington
Deep Learning with Structured Data w/ Mark Ryan - #301 Deep Learning with Structured Data w/ Mark Ryan Thu, 19 Sep 2019 01:43:40 +0000 Today we're joined by Mark Ryan, author of Deep Learning with Structured Data, currently in the Manning Early Access Program (MEAP), due for publication in Spring 2020. While working on the Support team at IBM Data and AI, he saw that there was a lack of general structured data sets that people could apply their models to. Using the streetcar network in his hometown of Toronto, Mark created a deep learning model to predict delays, but more importantly, gathered an open data set that was the perfect size and variety, and jump-started the research for his latest book. In this episode, Mark shares the benefits of applying deep learning to structured data (and recent reduced barriers to entry), details of his experience with a range of data sets, the everlasting appreciation he and Sam share for the Fast.ai course by Jeremy Howard, and the contents of his new book, aimed at helping readers set up and maintain deep learning models with structured data.

With just two weeks left, time is running out for you to register for TWIMLcon: AI Platforms. Don't be left out! Register NOW at twimlcon.com/register

]]>
Today we're joined by Mark Ryan, author of Deep Learning with Structured Data, currently in the Manning Early Access Program (MEAP), due for publication in Spring 2020. While working on the Support team at IBM Data and AI, he saw that there was a lack of general structured data sets that people could apply their models to. Using the streetcar network in his hometown of Toronto, Mark created a deep learning model to predict delays, but more importantly, gathered an open data set that was the perfect size and variety, and jump-started the research for his latest book. In this episode, Mark shares the benefits of applying deep learning to structured data (and recent reduced barriers to entry), details of his experience with a range of data sets, the everlasting appreciation he and Sam share for the Fast.ai course by Jeremy Howard, and the contents of his new book, aimed at helping readers set up and maintain deep learning models with structured data.

With just two weeks left, time is running out for you to register for TWIMLcon: AI Platforms. Don't be left out! Register NOW at twimlcon.com/register

]]>
39:30 clean podcast,science,technology,tech,data,deep,intelligence,learning,artificial,machine,ai,sets,structured,ml,twiml,embeddings,fastai Today we're joined by Mark Ryan, author of the upcoming book Deep Learning with Structured Data. Working on the support team at IBM Data and AI, he saw a lack of general structured data sets people could apply their models to. Using the streetcar network in Toronto, Mark gathered an open data set that started the research for his latest book. In this episode, Mark shares the benefits of applying deep learning to structured data, details of his experience with a range of data sets, and details his new book. 301 full Sam Charrington
Time Series Clustering for Monitoring Fueling Infrastructure Performance with Kalai Ramea - #300 Time Series Clustering for Monitoring Fueling Infrastructure Performance with Kalai Ramea Wed, 18 Sep 2019 02:04:53 +0000 Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. With a background in transportation, energy efficiency, art, and machine learning, Kalai has been fortunate enough to follow her passions through her work. In this episode we discuss:

  • Her environmentally conscious pursuit that led to the purchase of a hydrogen car, and the subsequent journey and paper that followed assessing fueling stations
  • Kalai’s next paper, which looks at fuel consumption at hydrogen stations, using temporal clustering to identify signatures of usage over time and group the stations into categories
  • Why, with the construction of fueling stations planned to increase dramatically in the next 5 years, building confidence in their performance is crucial
  • A sneak peek into how Kalai incorporates her love of art into her work!

Check out the show notes, and the refresh, at twimlai.com

]]>
Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. With a background in transportation, energy efficiency, art, and machine learning, Kalai has been fortunate enough to follow her passions through her work. In this episode we discuss:

  • Her environmentally conscious pursuit that led to the purchase of a hydrogen car, and the subsequent journey and paper that followed assessing fueling stations
  • Kalai’s next paper, which looks at fuel consumption at hydrogen stations, using temporal clustering to identify signatures of usage over time and group the stations into categories
  • Why, with the construction of fueling stations planned to increase dramatically in the next 5 years, building confidence in their performance is crucial
  • A sneak peek into how Kalai incorporates her love of art into her work!

Check out the show notes, and the refresh, at twimlai.com

]]>
30:04 clean podcast,science,design,energy,technology,tech,data,deep,intelligence,learning,car,artificial,efficient,hydrogen,machine,ai,transfer,clusters,temporal,ml,twiml Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. In this episode we discuss her journey buying a hydrogen car and the subsequent journey and paper that followed assessing fueling stations. In her next paper, Kalai looked at fuel consumption at hydrogen stations and used temporal clustering to identify signatures of usage over time. As the number of fueling stations is planned to increase dramatically in the future, building reliability on their performance is crucial. 300 full Sam Charrington
Swarm AI for Event Outcome Prediction with Gregg Willcox - TWIML Talk #299 Swarm AI for Event Outcome Prediction with Gregg Willcox Fri, 13 Sep 2019 16:58:09 +0000 Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. Inspired by the natural phenomenon called 'swarming', which uses the collective intelligence of a group to produce more accurate results than an individual alone, ‘Swarm AI’ was born: a game-like platform that channels the convictions of individuals to reach a consensus, then further amplifies the results using ‘Conviction’, a behavioral neural network trained on people’s behavior. 

The complete show notes for this episode can be found at twimlai.com/talk/299.

We're just over two weeks out from TWIMLcon: AI Platforms! You definitely want to be there. Visit twimlcon.com for more info, or to register. 

]]>
Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. Inspired by the natural phenomenon called 'swarming', which uses the collective intelligence of a group to produce more accurate results than an individual alone, ‘Swarm AI’ was born: a game-like platform that channels the convictions of individuals to reach a consensus, then further amplifies the results using ‘Conviction’, a behavioral neural network trained on people’s behavior. 

The complete show notes for this episode can be found at twimlai.com/talk/299.

We're just over two weeks out from TWIMLcon: AI Platforms! You definitely want to be there. Visit twimlcon.com for more info, or to register. 

]]>
42:35 clean podcast,science,network,technology,tech,data,intelligence,learning,behavioral,collaboration,artificial,robotics,neural,machine,ai,ml,swarming,humanintheloop,twiml Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. Inspired by the natural phenomenon called 'swarming', which uses the collective intelligence of a group to produce more accurate results than an individual alone, ‘Swarm AI’ was born: a game-like platform that channels the convictions of individuals to reach a consensus, then further amplifies the results using ‘Conviction’, a behavioral neural network trained on people’s behavior. 299 full Sam Charrington
Rebooting AI: What's Missing, What's Next with Gary Marcus - TWIML Talk #298 Rebooting AI: What's Missing, What's Next with Gary Marcus Tue, 10 Sep 2019 14:21:35 +0000 Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, former CEO and Founder of Geometric Intelligence (acquired by Uber) and well-known scientist, bestselling author, professor and entrepreneur. In this episode hear Gary discuss:

  • His latest book, ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, an extensive look into the current gaps, pitfalls and areas for improvement in the field of machine learning and AI 
  • A breakdown of the difference between reinforcement learning and real learning 
  • Why we need machines with both automation and autonomy to be truly usable in the world today 
  • Examples from his book, including Teslas driving into tow trucks and Microsoft’s SQuAD reading test results
  • Insight into what we should be talking and thinking about to make even greater (and safer) strides in AI

The complete show notes for this episode can be found at twimlai.com/talk/298.

Only 3 weeks left to register for TWIMLcon: AI Platforms! Visit twimlcon.com/register now!

 

]]>
Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, former CEO and Founder of Geometric Intelligence (acquired by Uber) and well-known scientist, bestselling author, professor and entrepreneur. In this episode hear Gary discuss:

  • His latest book, ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, an extensive look into the current gaps, pitfalls and areas for improvement in the field of machine learning and AI 
  • A breakdown of the difference between reinforcement learning and real learning 
  • Why we need machines with both automation and autonomy to be truly usable in the world today 
  • Examples from his book, including Teslas driving into tow trucks and Microsoft’s SQuAD reading test results
  • Insight into what we should be talking and thinking about to make even greater (and safer) strides in AI

The complete show notes for this episode can be found at twimlai.com/talk/298.

Only 3 weeks left to register for TWIMLcon: AI Platforms! Visit twimlcon.com/register now!

 

]]>
47:49 clean podcast,science,gary,technology,tech,data,intelligence,learning,cognitive,artificial,machine,ai,marcus,reinforcement,autonomy,ml,twiml Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, well-known scientist, bestselling author, professor and entrepreneur. Hear Gary discuss his latest book, ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, an extensive look into the current gaps, pitfalls and areas for improvement in the field of machine learning and AI. In this episode, Gary provides insight into what we should be talking and thinking about to make even greater (and safer) strides in AI. 298 full Sam Charrington
DeepQB: Deep Learning to Quantify Quarterback Decision-Making with Brian Burke - TWIML Talk #297 DeepQB: Deep Learning to Quantify Quarterback Decision-Making with Brian Burke Thu, 05 Sep 2019 18:11:17 +0000 Today we're joined by Brian Burke, Analytics Specialist with the Stats & Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw the correlation between fighter pilots and quarterbacks in the quick, pressure-filled decisions both roles have to make on a regular basis. In this episode, we discuss:

  • Brian’s self-taught modeling techniques, and his journey finding and handling vast amounts of sports data
  • His findings in the paper, “DeepQB: Deep Learning with Player Tracking to Quantify Quarterback Decision-Making & Performance”
  • The making of his model, combining geometry, algebra, and a self-proclaimed ‘vanilla’ neural network
  • His excitement for the future of machine learning in sports, and more!

The complete show notes for this episode can be found at twimlai.com/talk/297.

]]>
Today we're joined by Brian Burke, Analytics Specialist with the Stats & Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw the correlation between fighter pilots and quarterbacks in the quick, pressure-filled decisions both roles have to make on a regular basis. In this episode, we discuss:

  • Brian’s self-taught modeling techniques and his journey finding and handling vast amounts of sports data 
  • His findings in the paper, “DeepQB: Deep Learning with Player Tracking to Quantify Quarterback Decision-Making & Performance”
  • The making of his model, using geometry, algebra, and a self-proclaimed ‘vanilla’ neural network
  • His excitement for the future of machine learning in sports and more!

The complete show notes for this episode can be found at twimlai.com/talk/297.

]]>
51:15 clean podcast,science,technology,tech,data,deep,intelligence,brian,espn,learning,artificial,nfl,machine,ai,burke,qb,quarterback,ml,twiml,deepqb Today we're joined by Brian Burke, Analytics Specialist with the Stats & Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw the correlation between fighter pilots and quarterbacks in the quick decisions both roles make on a regular basis. In this episode, we discuss his paper: “DeepQB: Deep Learning with Player Tracking to Quantify Quarterback Decision-Making & Performance”, what it means for football, and his excitement for machine learning in sports. 297 full Sam Charrington
Measuring Performance Under Pressure Using ML with Lotte Bransen - TWIML Talk #296 Measuring Performance Under Pressure Using ML with Lotte Bransen Tue, 03 Sep 2019 17:30:13 +0000 Today we're joined by Lotte Bransen, Scientific Researcher at SciSports. With a background in mathematics, econometrics and soccer, Lotte has honed her research on analytics of the game and its players. More specifically, using trained models to understand the impact of mental pressure on a player’s performance. In this episode, Lotte discusses:

  • Her latest paper, ‘Choke or Shine? Quantifying Soccer Players' Abilities to Perform Under Mental Pressure’
  • The basis of the models through two aspects of mental pressure: pre-game and in-game, and three performance metrics: the chance of a goal with every action a player takes (contribution), the quality of that decision and the quality of the execution
  • The implications of her research in the world of sports
  • Just a few of the many potential applications of her work - check it out!

Check out the full show notes at twimlai.com/talk/296.

]]>
Today we're joined by Lotte Bransen, Scientific Researcher at SciSports. With a background in mathematics, econometrics and soccer, Lotte has honed her research on analytics of the game and its players. More specifically, using trained models to understand the impact of mental pressure on a player’s performance. In this episode, Lotte discusses:

  • Her latest paper, ‘Choke or Shine? Quantifying Soccer Players' Abilities to Perform Under Mental Pressure’
  • The basis of the models through two aspects of mental pressure: pre-game and in-game, and three performance metrics: the chance of a goal with every action a player takes (contribution), the quality of that decision and the quality of the execution
  • The implications of her research in the world of sports
  • Just a few of the many potential applications of her work - check it out!

Check out the full show notes at twimlai.com/talk/296.

]]>
34:57 clean podcast,science,technology,tech,data,intelligence,soccer,learning,mental,pressure,artificial,inference,binary,machine,ai,differentiation,gradient,automatic,classification,ml,boosting,variational,twiml Today we're joined by Lotte Bransen, a Scientific Researcher at SciSports. With a background in mathematics, econometrics, and soccer, Lotte has honed her research on analytics of the game and its players, using trained models to understand the impact of mental pressure on a player’s performance. In this episode, Lotte discusses her paper, ‘Choke or Shine? Quantifying Soccer Players' Abilities to Perform Under Mental Pressure’ and the implications of her research in the world of sports. 296 full Sam Charrington
Managing Deep Learning Experiments with Lukas Biewald - TWIML Talk #295 Managing Deep Learning Experiments with Lukas Biewald Thu, 29 Aug 2019 18:09:23 +0000 Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights & Biases. Lukas, previously CEO and Founder of Figure Eight (CrowdFlower), has a straightforward goal: provide researchers with SaaS that is easy to install, simple to operate, and always accessible. Seeing a need for reproducibility in deep learning experiments, Lukas founded Weights & Biases. In this episode we discuss:

  • The experiment tracking tool, how it works, and the components that make it unique in the ML marketplace (see the logging sketch below)
  • The open, collaborative culture that Lukas promotes
  • How Lukas got his start in deep learning experiments and what his experiment tracking used to look like
  • The current Weights & Biases business success strategy and what his team is working on today
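
For listeners curious what experiment tracking looks like in practice, here is a minimal, hedged sketch using the wandb Python client; the project name, hyperparameters, and metrics below are invented for illustration and are not from the episode:

    import wandb

    # Start a run and record the hyperparameters we want to compare across experiments
    wandb.init(project="demo-experiments", config={"lr": 1e-3, "batch_size": 32})

    for epoch in range(10):
        train_loss = 1.0 / (epoch + 1)   # stand-in for a real training loop
        val_acc = 0.5 + 0.04 * epoch     # stand-in validation metric
        # Each call appends a step to the run history, viewable in the W&B dashboard
        wandb.log({"epoch": epoch, "train_loss": train_loss, "val_acc": val_acc})

    wandb.finish()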

The complete show notes for this episode can be found at twimlai.com/talk/295

Thanks to our friends at Weights & Biases for their support of the show, their sponsorship of this episode, and our upcoming event, TWIMLcon: AI Platforms. 

Registration for TWIMLcon is still open! Visit twimlcon.com/register today! 

]]>
Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights & Biases. Lukas, previously CEO and Founder of Figure Eight (CrowdFlower), has a straightforward goal: provide researchers with SaaS that is easy to install, simple to operate, and always accessible. Seeing a need for reproducibility in deep learning experiments, Lukas founded Weights & Biases. In this episode we discuss:

  • The experiment tracking tool, how it works, and the components that make it unique in the ML marketplace
  • The open, collaborative culture that Lukas promotes
  • How Lukas got his start in deep learning experiments and what his experiment tracking used to look like
  • The current Weights & Biases business success strategy and what his team is working on today

The complete show notes for this episode can be found at twimlai.com/talk/295

Thanks to our friends at Weights & Biases for their support of the show, their sponsorship of this episode, and our upcoming event, TWIMLcon: AI Platforms. 

Registration for TWIMLcon is still open! Visit twimlcon.com/register today! 

]]>
43:39 clean podcast,and,science,technology,tech,data,deep,intelligence,learning,artificial,machine,weights,biases,ai,platform,tracking,lukas,saas,experiments,ml,twiml,biewald Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights & Biases. Lukas founded the company after seeing a need for reproducibility in deep learning experiments. In this episode, we discuss his experiment tracking tool, how it works, the components that make it unique, and the collaborative culture that Lukas promotes. Listen in to how he got his start in deep learning and experiment tracking, the current Weights & Biases success strategy, and what his team is working on today. 295 full Sam Charrington
Re-Architecting Data Science at iRobot with Angela Bassa - TWIML Talk #294 Re-Architecting Data Science at iRobot with Angela Bassa Mon, 26 Aug 2019 18:54:24 +0000 Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss:

• iRobot's re-architecture, and a look at the evolution of iRobot.

• Where iRobot gets its data from and how they taxonomize data science.

• The platforms and processes that have been put into place to support delivering models in production.

• The role of DevOps in bringing these various platforms together, and much more!

The complete show notes can be found at twimlai.com/talk/294.

Check out the recently released speaker list for TWIMLcon: AI Platforms now! twimlcon.com/speakers.

]]>
Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss:

• iRobot's re-architecture, and a look at the evolution of iRobot.

• Where iRobot gets its data from and how they taxonomize data science.

• The platforms and processes that have been put into place to support delivering models in production.

• The role of DevOps in bringing these various platforms together, and much more!

The complete show notes can be found at twimlai.com/talk/294.

Check out the recently released speaker list for TWIMLcon: AI Platforms now! twimlcon.com/speakers.

]]>
49:27 clean podcast,science,technology,production,tech,data,intelligence,models,learning,architecture,angela,artificial,machine,ai,platform,ml,devops,roomba,irobot,twiml,bassa Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss: • iRobot's re-architecture, and a look at the evolution of iRobot. • Where iRobot gets its data from and how they taxonomize data science. • The platforms and processes that have been put into place to support delivering models in production. •The role of DevOps in bringing these various platforms together, and much more! 294 full Sam Charrington
Disentangled Representations & Google Research Football with Olivier Bachem - TWIML Talk #293 Disentangled Representations & Google Research Football with Olivier Bachem Thu, 22 Aug 2019 17:00:45 +0000 Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team.

Initially, Olivier joined us to discuss his work on Google’s research football project, their foray into building a novel reinforcement learning environment, but we spent a fair amount of time exploring his research in disentangled representations. Olivier and Sam also discuss what makes the football environment different than other available reinforcement learning environments like OpenAI Gym and PyGame, what other techniques they explored while using this environment, and what’s on the horizon for their team and Football RLE.
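
As background for the environment comparison above, reinforcement learning environments generally expose the same reset/step loop; the sketch below uses the classic OpenAI Gym API (pre-0.26 return signature) with CartPole as a stand-in task, since the Football RLE specifics are beyond these notes:

    import gym

    env = gym.make("CartPole-v1")  # placeholder task; football environments expose a similar interface
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()          # random policy, purely for illustration
        obs, reward, done, info = env.step(action)  # observation, reward, episode-end flag, metadata
        total_reward += reward
    print("episode return:", total_reward)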

Check out the full show notes at twimlai.com/talk/293

]]>
Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team.

Initially, Olivier joined us to discuss his work on Google’s research football project, their foray into building a novel reinforcement learning environment, but we spent a fair amount of time exploring his research in disentangled representations. Olivier and Sam also discuss what makes the football environment different than other available reinforcement learning environments like OpenAI Gym and PyGame, what other techniques they explored while using this environment, and what’s on the horizon for their team and Football RLE.

Check out the full show notes at twimlai.com/talk/293

]]>
43:29 clean podcast,science,technology,tech,brain,google,football,data,intelligence,soccer,learning,research,artificial,machine,ai,reinforcement,olivier,ml,representations,bachem,twiml,disentangled Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team. Olivier joins us to discuss his work on Google’s research football project, their foray into building a novel reinforcement learning environment. Olivier and Sam discuss what makes this environment different than other available RL environments, such as OpenAI Gym and PyGame, what other techniques they explored while using this environment, and what’s on the horizon for their team and Football RLE. 293 full Sam Charrington
Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292 Neural Network Quantization and Compression with Tijmen Blankevoort Mon, 19 Aug 2019 18:07:03 +0000 Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. Tijmen is also co-founder of ML startup Scyfer, along with Qualcomm colleague Max Welling, who we spoke with back on episode 267. In our conversation with Tijmen we discuss: 

• The ins and outs of compression and quantization of ML models, specifically NNs,

• How much models can actually be compressed, and the best way to achieve compression, 

• We also look at a few recent papers, including “The Lottery Ticket Hypothesis.”
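
As a rough illustration of what quantization means here (a generic sketch, not Qualcomm's actual toolchain), the snippet below applies symmetric 8-bit post-training quantization to a weight tensor with plain NumPy:

    import numpy as np

    def quantize_int8(w):
        # One symmetric scale per tensor: map floats onto the int8 grid
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)   # stand-in weight matrix
    q, scale = quantize_int8(w)
    print("mean absolute error:", np.abs(w - dequantize(q, scale)).mean())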

Check out the full show notes at twimlai.com/talk/292.

 

]]>
Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. Tijmen is also co-founder of ML startup Scyfer, along with Qualcomm colleague Max Welling, who we spoke with back on episode 267. In our conversation with Tijmen we discuss: 

• The ins and outs of compression and quantization of ML models, specifically NNs,

• How much models can actually be compressed, and the best way to achieve compression, 

• We also look at a few recent papers, including “The Lottery Ticket Hypothesis.”

Check out the full show notes at twimlai.com/talk/292.

 

]]>
51:09 clean podcast,science,technology,networks,tech,data,intelligence,learning,artificial,neural,machine,compression,ai,qualcomm,ml,quantization,twiml,tijmen,blankevoort Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. In our conversation with Tijmen we discuss:  • The ins and outs of compression and quantization of ML models, specifically NNs, • How much models can actually be compressed, and the best way to achieve compression,  • We also look at a few recent papers including “Lottery Hypothesis." 292 full Sam Charrington
Identifying New Materials with NLP with Anubhav Jain - TWIML Talk #291 Identifying New Materials with NLP with Anubhav Jain Thu, 15 Aug 2019 18:58:01 +0000 Today we are joined by Anubhav Jain, Staff Scientist & Chemist at Lawrence Berkeley National Lab. Anubhav leads the Hacker Materials Research Group, where his research focuses on applying computing to accelerate the process of finding new materials for functional applications. With the immense amount of published scientific research out there, it can be difficult to understand how that information can be applied to future studies, let alone find a way to read it all. In this episode we discuss:

- His latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’

- The design of a system that takes the literature and uses natural language processing to analyze, synthesize and then conceptualize complex material science concepts

- How the method is shown to recommend materials for functional applications in the future - scientific literature mining at its best.
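
To make the embedding idea concrete, here is a toy sketch in the spirit of the paper using gensim's Word2Vec (4.x API); the three "abstracts" and the query term are invented, whereas the real work trains on millions of materials-science abstracts:

    from gensim.models import Word2Vec

    # Each "sentence" is a tokenized abstract; the real corpus is millions of abstracts
    corpus = [
        ["thermoelectric", "materials", "convert", "heat", "into", "electricity"],
        ["bi2te3", "is", "a", "well", "known", "thermoelectric", "material"],
        ["perovskite", "solar", "cells", "show", "high", "efficiency"],
    ]

    model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)

    # Nearest neighbours in embedding space suggest candidate materials for an application
    print(model.wv.most_similar("thermoelectric", topn=3))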

Check out the complete show notes at twimlai.com/talk/291.

]]>
Today we are joined by Anubhav Jain, Staff Scientist & Chemist at Lawrence Berkeley National Lab. Anubhav leads the Hacker Materials Research Group, where his research focuses on applying computing to accelerate the process of finding new materials for functional applications. With the immense amount of published scientific research out there, it can be difficult to understand how that information can be applied to future studies, let alone find a way to read it all. In this episode we discuss:

- His latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’

- The design of a system that takes the literature and uses natural language processing to analyze, synthesize and then conceptualize complex material science concepts

- How the method is shown to recommend materials for functional applications in the future - scientific literature mining at its best.

Check out the complete show notes at twimlai.com/talk/291.

]]>
39:54 clean podcast,science,technology,tech,literature,data,mining,intelligence,learning,artificial,machine,ai,material,nlp,ml,twiml,word2vec Today we are joined by Anubhav Jain, Staff Scientist & Chemist at Lawrence Berkeley National Lab. We discuss his latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’. Anubhav explains the design of a system that takes the literature and uses natural language processing to conceptualize complex material science concepts. He also discusses scientific literature mining and how the method can recommend materials for functional applications in the future. 291 full Sam Charrington
The Problem with Black Boxes with Cynthia Rudin - TWIML Talk #290 The Problem with Black Boxes with Cynthia Rudin Wed, 14 Aug 2019 13:38:00 +0000 You asked, we listened! Today, by listener request, we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. Cynthia is passionate about machine learning and social justice, with extensive work and leadership in both areas. In this episode we discuss:

  • Her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’
  • How interpretable models make for less error-prone and more comprehensible decisions - and why we should care
  • A breakdown of black box and interpretable models, including their development, sample use cases, and more!
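
As a small, generic illustration of the interpretable-versus-black-box contrast (not a reproduction of any model from Cynthia's paper), compare a depth-limited decision tree, whose full rule set can be printed and audited, with an opaque ensemble:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)         # interpretable
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)  # black box

    print("tree accuracy:  ", tree.score(X_te, y_te))
    print("forest accuracy:", forest.score(X_te, y_te))
    # The tree's entire decision logic fits on one screen and can be reviewed by a domain expert
    print(export_text(tree, feature_names=list(data.feature_names)))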

Check out the complete show notes at https://twimlai.com/talk/290

]]>
You asked, we listened! Today, by listener request, we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. Cynthia is passionate about machine learning and social justice, with extensive work and leadership in both areas. In this episode we discuss:

  • Her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’
  • How interpretable models make for less error-prone and more comprehensible decisions - and why we should care
  • A breakdown of black box and interpretable models, including their development, sample use cases, and more!

Check out the complete show notes at https://twimlai.com/talk/290

]]>
48:25 clean podcast,science,black,box,technology,tech,model,data,intelligence,learning,justice,artificial,criminal,machine,ai,ml,compas,explainability,twiml,interpretability Today we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. In this episode we discuss her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’, and how interpretable models make for more comprehensible decisions - extremely important when dealing with human lives. Cynthia explains black box and interpretable models, their development, use cases, and her future plans in the field. 290 full Sam Charrington
Human-Robot Interaction and Empathy with Kate Darling - TWIML Talk #289 Human-Robot Interaction and Empathy with Kate Darling Thu, 08 Aug 2019 16:42:24 +0000 Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics and interaction, namely the social implication of how people treat robots and the purposeful design of robots in our daily lives. This episode is a fascinating look into the intersection of psychology and how we are using technology. We cover topics like:

  • How to measure empathy
  • The impact of robot treatment on kids’ behavior
  • The correlation between animals and robots 
  • Why ‘successful’ robots aren’t always humanoid and so much more!
]]>
Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics and interaction, namely the social implication of how people treat robots and the purposeful design of robots in our daily lives. This episode is a fascinating look into the intersection of psychology and how we are using technology. We cover topics like:

  • How to measure empathy
  • The impact of robot treatment on kids’ behavior
  • The correlation between animals and robots 
  • Why ‘successful’ robots aren’t always humanoid and so much more!
]]>
43:56 clean podcast,science,technology,tech,data,robots,intelligence,learning,ethics,robot,artificial,empathy,machine,bias,ai,automatic,ml,anthropomorphism,humanoid,twiml Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics, the social implication of how people treat robots and the purposeful design of robots in our daily lives. We discuss measuring empathy, the impact of robot treatment on kids’ behavior, the correlation between animals and robots, and why 'effective' robots aren’t always humanoid. Kate combines a wealth of knowledge with an analytical mind that questions the why and how of human-robot interaction. full Sam Charrington
Automated ML for RNA Design with Danny Stoll - TWIML Talk #288 Automated ML for RNA Design with Danny Stoll Mon, 05 Aug 2019 17:31:43 +0000 Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Since high school, Danny has been fascinated by Deep Learning, a fascination that has grown into a desire to make machine learning available to anyone with interest. Danny’s current research can be encapsulated in his latest paper, ‘Learning to Design RNA’. Designing RNA molecules has become increasingly popular as RNA is responsible for regulating biological processes and is even connected to diseases like Alzheimer’s and epilepsy. In this episode, Danny discusses:

  • The RNA design process through reverse engineering
  • How his team’s deep learning algorithm is applied to train and design sequences
  • Transfer learning & multitask learning
  • Ablation studies, hyperparameter optimization, the difference between chemical and statistical based approaches and more!
]]>
Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Since high school, Danny has been fascinated by Deep Learning, a fascination that has grown into a desire to make machine learning available to anyone with interest. Danny’s current research can be encapsulated in his latest paper, ‘Learning to Design RNA’. Designing RNA molecules has become increasingly popular as RNA is responsible for regulating biological processes and is even connected to diseases like Alzheimer’s and epilepsy. In this episode, Danny discusses:

  • The RNA design process through reverse engineering
  • How his team’s deep learning algorithm is applied to train and design sequences
  • Transfer learning & multitask learning
  • Ablation studies, hyperparameter optimization, the difference between chemical and statistical based approaches and more!
]]>
36:29 clean podcast,science,technology,tech,data,deep,intelligence,learning,artificial,machine,ai,transfer,optimization,rna,ml,twiml,hyperparameter Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Danny’s current research can be encapsulated in his latest paper, ‘Learning to Design RNA’. In this episode, Danny explains the design process through reverse engineering and how his team’s deep learning algorithm is applied to train and design sequences. We discuss transfer learning, multitask learning, ablation studies, hyperparameter optimization and the difference between chemical and statistical based approaches. full Sam Charrington
Developing a brain atlas using deep learning with Theofanis Karayannis - TWIML Talk #287 Developing a brain atlas using deep learning with Theofanis Karayannis Thu, 01 Aug 2019 16:33:26 +0000 Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is currently focused on understanding how circuits in the brain are formed during development and modified by experiences. Working with animal models, Theo segments and classifies the brain regions, then detects cellular signals that make connections throughout and between each region. How? The answer is (relatively) simple: Deep Learning. In this episode we discuss:

  • Adapting DL methods to fit the biological scope of work
  • The distribution of connections that makes neurological decisions in both animals and humans every day
  • The way images of the brain are collected
  • Genetic trackability, and more!
]]>
Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is currently focused on understanding how circuits in the brain are formed during development and modified by experiences. Working with animal models, Theo segments and classifies the brain regions, then detects cellular signals that make connections throughout and between each region. How? The answer is (relatively) simple: Deep Learning. In this episode we discuss:

  • Adapting DL methods to fit the biological scope of work
  • The distribution of connections that makes neurological decisions in both animals and humans every day
  • The way images of the brain are collected
  • Genetic trackability, and more!
]]>
38:37 clean podcast,science,mask,technology,tech,brain,data,genetic,deep,intelligence,learning,research,neurology,mapping,artificial,institute,machine,ai,ml,twiml,trackability,rcnn Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is focused on brain circuit development and uses Deep Learning methods to segment the brain regions, then detect the connections around each region. He then looks at the distribution of connections that make neurological decisions in both animals and humans every day. From the way images of the brain are collected to genetic trackability, this episode has it all. full Sam Charrington
Environmental Impact of Large-Scale NLP Model Training with Emma Strubell - TWIML Talk #286 Environmental Impact of Large-Scale NLP Model Training with Emma Strubell Mon, 29 Jul 2019 18:26:08 +0000 Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is on NLP and bringing state of the art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, Energy and Policy Considerations for Deep Learning in NLP, hones in on one of the biggest topics of the generation: environmental impact. In this episode we discuss:

  • How training neural networks has resulted in an increase in accuracy, but the computational resources required to train these models are staggering - and carbon footprints are only getting bigger (see the rough estimate sketched below)
  • Emma’s research methods for determining carbon emissions
  • How companies are reacting to environmental concerns
  • What we, as an industry, can be doing better
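
To give a sense of scale, here is a back-of-the-envelope estimate in the spirit of the paper; the GPU count, power draw, PUE, and grid carbon intensity are assumed values for illustration, not figures from Emma's study:

    gpus = 8                  # number of accelerators (assumed)
    power_per_gpu_kw = 0.3    # average draw per GPU in kW (assumed)
    hours = 24 * 14           # two weeks of training (assumed)
    pue = 1.5                 # data-center power usage effectiveness (assumed)
    kg_co2_per_kwh = 0.45     # grid carbon intensity (assumed)

    energy_kwh = gpus * power_per_gpu_kw * hours * pue
    emissions_kg = energy_kwh * kg_co2_per_kwh
    print(f"~{energy_kwh:.0f} kWh consumed, ~{emissions_kg:.0f} kg CO2e emitted")
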
]]>
Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is on NLP and bringing state of the art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, Energy and Policy Considerations for Deep Learning in NLP, hones in on one of the biggest topics of the generation: environmental impact. In this episode we discuss:

  • How training neural networks has resulted in an increase in accuracy, but the computational resources required to train these models are staggering - and carbon footprints are only getting bigger
  • Emma’s research methods for determining carbon emissions
  • How companies are reacting to environmental concerns
  • What we, as an industry, can be doing better
]]>
38:36 clean podcast,science,energy,technology,tech,data,intelligence,policy,learning,computational,artificial,resources,carbon,footprint,machine,ai,nlp,efficiency,ml,twiml Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is bringing state of the art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, Energy and Policy Considerations for Deep Learning in NLP, reviews carbon emissions of training neural networks despite an increase in accuracy. In this episode, we discuss Emma’s research methods, how companies are reacting to environmental concerns, and how we can do better. full Sam Charrington
“Fairwashing” and the Folly of ML Solutionism with Zachary Lipton - TWIML Talk #285 “Fairwashing” and the Folly of ML Solutionism with Zachary Lipton Thu, 25 Jul 2019 15:47:19 +0000 Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With an overarching theme of data quality and interpretation, Zachary's research and work are focused on machine learning in healthcare, with the goal not of replacing doctors, but of assisting them through an understanding of the diagnosis and treatment process. Zachary is also working on the broader question of fairness and ethics in machine learning systems across multiple industries. We delve into these topics today, discussing: 

  • Supervised learning in the medical field, 
  • Guaranteed robustness under distribution shifts, 
  • The concept of ‘fairwashing’,
  • How there is insufficient language in machine learning to encompass abstract ethical behavior, and much, much more
]]>
Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With an overarching theme of data quality and interpretation, Zachary's research and work are focused on machine learning in healthcare, with the goal not of replacing doctors, but of assisting them through an understanding of the diagnosis and treatment process. Zachary is also working on the broader question of fairness and ethics in machine learning systems across multiple industries. We delve into these topics today, discussing: 

  • Supervised learning in the medical field, 
  • Guaranteed robustness under distribution shifts, 
  • The concept of ‘fairwashing’,
  • How there is insufficient language in machine learning to encompass abstract ethical behavior, and much, much more
]]>
01:15:39 clean podcast,science,technology,tech,data,intelligence,learning,ethics,artificial,fairness,inference,machine,ai,distribution,ml,shifts,supervised,algorithmic,causal,twiml,fairwashing Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With a theme of data interpretation, Zachary’s research is focused on machine learning in healthcare, with the goal of assisting physicians through the diagnosis and treatment process. We discuss supervised learning in the medical field, robustness under distribution shifts, ethics in machine learning systems across industries, the concept of ‘fairwashing, and more. full Sam Charrington
Retinal Image Generation for Disease Discovery with Stephen Odaibo - TWIML Talk #284 Retinal Image Generation for Disease Discovery with Stephen Odaibo Mon, 22 Jul 2019 16:05:26 +0000 Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s unique journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before taking on the ultimate challenge as an entrepreneur. In this episode we discuss:

  • How RETINA-AI Health harnesses the power of machine learning to build autonomous systems that diagnose and treat retinal diseases 
  • The importance of domain experience, and how Stephen’s expertise in ophthalmology and engineering, along with the current state of both industries, led to the founding of his company
  • His work with GANs to create artificial retinal images and why more data isn’t always better!
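
As a generic sketch of the GAN idea only (not RETINA-AI's actual architecture or training code), the PyTorch snippet below defines a tiny generator that maps random noise to image-shaped tensors; a real retinal-image GAN would pair this with a discriminator and adversarial training:

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps a latent noise vector to a 64x64 single-channel image."""
        def __init__(self, latent_dim=100):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, 64 * 64), nn.Tanh(),
            )

        def forward(self, z):
            return self.net(z).view(-1, 1, 64, 64)

    g = Generator()
    fake_images = g(torch.randn(16, 100))   # a batch of 16 synthetic images
    print(fake_images.shape)                # torch.Size([16, 1, 64, 64])
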
]]>
Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s unique journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before taking on the ultimate challenge as an entrepreneur. In this episode we discuss:

  • How RETINA-AI Health harnesses the power of machine learning to build autonomous systems that diagnose and treat retinal diseases 
  • The importance of domain experience, and how Stephen’s expertise in ophthalmology and engineering, along with the current state of both industries, led to the founding of his company
  • His work with GANs to create artificial retinal images and why more data isn’t always better!
]]>
41:39 clean podcast,science,technology,images,tech,experience,data,intelligence,learning,ophthalmology,artificial,domain,machine,ai,ml,gan,retinal,twiml Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before becoming an entrepreneur. In this episode we discuss his expertise in ophthalmology and engineering along with the current state of both industries that lead him to build autonomous systems that diagnose and treat retinal diseases. full Sam Charrington
Real world model explainability with Rayid Ghani - TWiML Talk #283 Real world model explainability with Rayid Ghani Thu, 18 Jul 2019 16:00:00 +0000 Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Rayid’s goal is to combine his skills in machine learning and data with his desire to improve public policy and the social sector. Drawing on his range of experience from the corporate world to Chief Scientist for the 2012 Obama Campaign, we delve into the world of automated predictions and explainability methods. Here we discuss:

  • How automated predictions can be helpful, but they don’t always paint a full picture 
  • When dealing with public policy and the social sector, the key to an effective explainability method is the correct context
  • Machine feedback loops that help humans override the wrong predictions and reinforce the right ones
  • Supporting proactive intervention through complex explainability tools
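
One concrete example of the explainability tooling alluded to here is LIME (which appears in the episode tags); a hedged usage sketch on tabular data, with an arbitrary classifier and the Iris dataset standing in for a real policy model and its data, might look like this:

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        mode="classification",
    )
    # Explain one prediction as a weighted list of locally important features
    explanation = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
    print(explanation.as_list())
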
]]>
Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Rayid’s goal is to combine his skills in machine learning and data with his desire to improve public policy and the social sector. Drawing on his range of experience from the corporate world to Chief Scientist for the 2012 Obama Campaign, we delve into the world of automated predictions and explainability methods. Here we discuss:

  • How automated predictions can be helpful, but they don’t always paint a full picture 
  • When dealing with public policy and the social sector, the key to an effective explainability method is the correct context
  • Machine feedback loops that help humans override the wrong predictions and reinforce the right ones
  • Supporting proactive intervention through complex explainability tools
]]>
50:58 clean podcast,of,science,technology,tech,data,public,intelligence,policy,learning,university,chicago,feedback,artificial,machine,ai,loop,lime,ml,explainability,twiml Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Drawing on his range of experience, Rayid saw that while automated predictions can be helpful, they don’t always paint a full picture. The key is the relevant context when making tough decisions involving humans and their lives. We delve into the world of explainability methods, necessary human involvement, machine feedback loop and more. full Sam Charrington
Inspiring New Machine Learning Platforms w/ Bioelectric Computation with Michael Levin - TWiML Talk #282 Inspiring New Machine Learning Platforms with Bioelectric Computation with Michael Levin Mon, 15 Jul 2019 16:38:01 +0000 Today we’re joined by Michael Levin, Director of the Allen Discovery Institute at Tufts University. Michael joined us back at NeurIPS to discuss his invited talk “What Bodies Think About: Bioelectric Computation Beyond the Nervous System as Inspiration for New Machine Learning Platforms.” In our conversation, we talk about:

  • Synthetic living machines, novel AI architectures and brain-body plasticity
  • How our DNA doesn’t control everything like we thought and how the behavior of cells in living organisms can be modified and adapted
  • Biological systems dynamic remodeling in the future of developmental biology and regenerative medicine...and more!

The complete show notes for this episode can be found at twimlai.com/talk/282

Register for TWIMLcon: AI Platforms now at twimlcon.com!

]]>
Today we’re joined by Michael Levin, Director of the Allen Discovery Institute at Tufts University. Michael joined us back at NeurIPS to discuss his invited talk “What Bodies Think About: Bioelectric Computation Beyond the Nervous System as Inspiration for New Machine Learning Platforms.” In our conversation, we talk about:

  • Synthetic living machines, novel AI architectures and brain-body plasticity
  • How our DNA doesn’t control everything like we thought and how the behavior of cells in living organisms can be modified and adapted
  • Biological systems dynamic remodeling in the future of developmental biology and regenerative medicine...and more!

The complete show notes for this episode can be found at twimlai.com/talk/282

Register for TWIMLcon: AI Platforms now at twimlcon.com!

]]>
25:55 clean podcast,science,technology,system,tech,brain,michael,data,living,intelligence,biology,learning,university,neurology,levin,artificial,synthetic,machine,ai,machines,nervous,tufts,computation,ml,bioelectricity,twiml,neurips Today we’re joined by Michael Levin, Director of the Allen Discovery Institute at Tufts University. In our conversation, we talk about synthetic living machines, novel AI architectures and brain-body plasticity. Michael explains how our DNA doesn’t control everything and how the behavior of cells in living organisms can be modified and adapted. Using research on biological systems dynamic remodeling, Michael discusses the future of developmental biology and regenerative medicine. 282 full Sam Charrington
Simulation and Synthetic Data for Computer Vision with Batu Arisoy - TWiML Talk #281 Simulation and Synthetic Data for Computer Vision with Batu Arisoy Tue, 09 Jul 2019 17:38:51 +0000 Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies & Solutions team at Siemens Corporate Technology. Currently, Batu’s research focus is solving limited data computer vision problems, providing R&D for many of the business units throughout the company. In our conversation we discuss:

  • An emulation of a teacher teaching students information without the use of memorization
  • Discerning which parts of our neural network are required to make decisions
  • An activity recognition project with the Office of Naval Research that keeps ‘humans in the loop’ and more.
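
The "teacher teaching students without memorization" framing is reminiscent of knowledge distillation; as a generic illustration only (not the specific Siemens method), a standard distillation loss in PyTorch looks roughly like this:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        """Blend soft teacher targets with the ordinary hard-label loss."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    student_logits = torch.randn(8, 10)   # stand-in model outputs
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print(distillation_loss(student_logits, teacher_logits, labels))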

 The complete show notes for this episode can be found at twimlai.com/talk/281

Register for TWIMLcon: AI Platforms now at twimlcon.com!

Thanks to Siemens for their sponsorship of today's episode! Check out what they’re up to, visit twimlai.com/siemens.

]]>
Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies & Solutions team at Siemens Corporate Technology. Currently, Batu’s research focus is solving limited data computer vision problems, providing R&D for many of the business units throughout the company. In our conversation we discuss:

  • An emulation of a teacher teaching students information without the use of memorization
  • Discerning which parts of our neural network are required to make decisions
  • An activity recognition project with the Office of Naval Research that keeps ‘humans in the loop’ and more.

 The complete show notes for this episode can be found at twimlai.com/talk/281

Register for TWIMLcon: AI Platforms now at twimlcon.com!

Thanks to Siemens for their sponsorship of today's episode! Check out what they’re up to, visit twimlai.com/siemens.

]]>
41:36 clean podcast,science,technology,tech,data,intelligence,learning,university,novelty,artificial,cornell,darpa,siemens,machine,ai,recognition,detection,pattern,ml,biased,cvpr,twiml,nomaly Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies & Solutions team at Siemens Corporate Technology. Batu’s research focus is solving limited-data computer vision problems, providing R&D for business units throughout the company. In our conversation, Batu details his group's ongoing projects, like an activity recognition project with the ONR, and their many CVPR submissions, which include an emulation of a teacher teaching students information without the use of memorization. full Sam Charrington
Spiking Neural Nets and ML as a Systems Challenge with Jeff Gehlhaar - TWIML Talk #280 Spiking Neural Nets and ML as a Systems Challenge with Jeff Gehlhaar Mon, 08 Jul 2019 19:07:07 +0000 Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. As we’ve explored in our conversations with both Gary Brotman and Max Welling, Qualcomm has a hand in tons of machine learning research and hardware, and our conversation with Jeff is no different. We discuss:

• How the various training frameworks fit into the developer experience when working with their chipsets.

• Examples of federated learning in the wild.

• The role inference will play in data center devices and more.
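
For readers unfamiliar with federated learning, the heart of federated averaging can be sketched in a few lines; this is a simplified illustration, not Qualcomm's implementation, and the parameter vectors are stand-ins for real model weights:

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """Weight each client's parameters by its local dataset size, then average."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Three clients that trained locally; only their weights, never their data, reach the server
    client_weights = [np.random.randn(5) for _ in range(3)]
    client_sizes = [1000, 250, 4000]
    print(federated_average(client_weights, client_sizes))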

The complete show notes for this episode can be found at twimlai.com/talk/280

Register for TWIMLcon now at twimlcon.com.

Thanks to Qualcomm for their sponsorship of today's episode! Check out what they're up to at twimlai.com/qualcomm.

]]>
Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. As we’ve explored in our conversations with both Gary Brotman and Max Welling, Qualcomm has a hand in tons of machine learning research and hardware, and our conversation with Jeff is no different. We discuss:

• How the various training frameworks fit into the developer experience when working with their chipsets.

• Examples of federated learning in the wild.

• The role inference will play in data center devices and more.

The complete show notes for this episode can be found at twimlai.com/talk/280

Register for TWIMLcon now at twimlcon.com.

Thanks to Qualcomm for their sponsorship of today's episode! Check out what they're up to at twimlai.com/qualcomm.

]]>
54:08 clean podcast,science,training,technology,networks,tech,data,intelligence,jeff,learning,artificial,developer,darpa,neural,framework,machine,ai,federated,qualcomm,ml,twiml,gehlhaar,tinyml Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. Qualcomm has a hand in tons of machine learning research and hardware, and in our conversation with Jeff we discuss: • How the various training frameworks fit into the developer experience when working with their chipsets. • Examples of federated learning in the wild. • The role inference will play in data center devices and much more. 280 full Sam Charrington
Transforming Oil & Gas with AI with Adi Bhashyam and Daniel Jeavons - TWIML Talk #279 Transforming Oil & Gas with AI with Adi Bhashyam and Daniel Jeavons Mon, 01 Jul 2019 18:33:09 +0000 Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this years C3 Transform Conference. In our conversation, we discuss:

• The progress that Dan and his team have made since our last conversation, including an overview of their data platform.

• We explore the various types of users of the platform, and how those users informed the decision to use C3’s out-of-the-box platform solution instead of building their own internal platform.

• Adi gives us an overview of the evolution of C3 and their platform, along with a breakdown of a few Shell-specific use cases. 

The complete show notes can be found at twimlai.com/talk/279.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration has been extended until this Wednesday, 7/3, register today for the lowest possible price!!

]]>
Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this years C3 Transform Conference. In our conversation, we discuss:

• The progress that Dan and his team have made since our last conversation, including an overview of their data platform.

• We explore the various types of users of the platform, and how those users informed the decision to use C3’s out-of-the-box platform solution instead of building their own internal platform.

• Adi gives us an overview of the evolution of C3 and their platform, along with a breakdown of a few Shell-specific use cases. 

The complete show notes can be found at twimlai.com/talk/279.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration has been extended until this Wednesday, 7/3, register today for the lowest possible price!!

]]>
46:27 clean podcast,dan,science,technology,production,tech,data,automation,intelligence,learning,artificial,machine,ai,platform,solution,optimization,shell,ady,ml,c3,jeavons,twiml,bhashyam Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this years C3 Transform Conference. In our conversation, we discuss: • The progress that Dan and his team has made since our last conversation, including an overview of their data platform. • Adi gives us an overview of the evolution of C3 and their platform, along with a breakdown of a few Shell-specific use cases. 279 full Sam Charrington
Fast Radio Burst Pulse Detection with Gerry Zhang - TWIML Talk #278 Fast Radio Burst Pulse Detection with Gerry Zhang Thu, 27 Jun 2019 18:18:20 +0000 Today we’re joined by Yunfan Gerry Zhang, a PhD student in the Department of Astrophysics at UC Berkeley, and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss: 

• Gerry's research on applying machine learning techniques to astrophysics and astronomy.

• His paper “Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach”.

• We explore the types of data sources used for this project, challenges Gerry encountered along the way, the role of GANs and much more.

The complete show notes can be found at twimlai.com/talk/278.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration ends TOMORROW 6/28! Register now!

]]>
Today we’re joined by Yunfan Gerry Zhang, a PhD student in the Department of Astrophysics at UC Berkeley, and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss: 

• Gerry's research on applying machine learning techniques to astrophysics and astronomy.

• His paper “Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach”.

• We explore the types of data sources used for this project, challenges Gerry encountered along the way, the role of GANs and much more.

The complete show notes can be found at twimlai.com/talk/278.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration ends TOMORROW 6/28! Register now!

]]>
38:04 clean podcast,science,radio,technology,tech,data,intelligence,astronomy,learning,berkeley,fast,artificial,domain,adaptation,burst,machine,ai,detection,gerry,gans,uc,astrophysics,zhang,ml,twiml,yunfan Today we’re joined by Yunfan Gerry Zhang, a PhD student at UC Berkely, and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss:  • Gerry's research on applying machine learning techniques to astrophysics and astronomy. • His paper “Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach”. • We explore the types of data sources used for this project, challenges Gerry encountered along the way, the role of GANs and much more. 278 full Sam Charrington
Tracking CO2 Emissions with Machine Learning with Laurence Watson - TWIML Talk #277 Tracking CO2 Emissions with Machine Learning with Laurence Watson Mon, 24 Jun 2019 19:29:08 +0000 Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss:

• Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to estimate the utilisation of fossil fuel power plants”.

• How they're using computer vision to process satellite images of coal plants, including how the images are labeled

• Various challenges with the scope and scale of this project, including dealing with varied time zones and imbalanced training classes.
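
On the imbalanced-classes point, a common mitigation is to reweight the loss by inverse class frequency; the scikit-learn sketch below uses made-up labels (assume, say, far fewer images showing an active plant than not) purely for illustration:

    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    # Hypothetical labels: 1 = plant visibly active, 0 = not (950 vs 50 examples)
    y = np.array([0] * 950 + [1] * 50)
    weights = compute_class_weight(class_weight="balanced", classes=np.array([0, 1]), y=y)
    print(dict(zip([0, 1], weights)))   # the rare class receives a proportionally larger weight
    # These weights can then be passed to most classifiers, e.g. class_weight={0: w0, 1: w1}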

The complete show notes can be found at twimlai.com/talk/277.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration ends on 6/28!

]]>
Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss:

• Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to estimate the utilisation of fossil fuel power plants”.

• How they're using computer vision to process satellite images of coal plants, including how the images are labeled

• Various challenges with the scope and scale of this project, including dealing with varied time zones and imbalanced training classes.

The complete show notes can be found at twimlai.com/talk/277.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration ends on 6/28!

]]>
41:08 clean podcast,science,technology,tracker,image,tech,data,intelligence,vision,learning,computer,laurence,watson,artificial,carbon,machine,co2,ai,emissions,labeling,ml,dataset,classifier,twiml Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss: • Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to estimate the utilisation of fossil fuel power plants”. • How they are using computer vision to process satellite images of coal plants, including how the images are labeled. •Various challenges with the scope and scale of this project. 277 full Sam Charrington
Topic Modeling for Customer Insights at USAA with William Fehlman - TWIML Talk #276 Topic Modeling for Customer Insights at USAA with William Fehlman Thu, 20 Jun 2019 19:26:52 +0000 Today we’re joined by William Fehlman, director of data science at USAA. We caught up with William a while back to discuss:

  • His work on topic modeling, which USAA uses in various scenarios, including chat channels with members via mobile and desktop interfaces.
  • How their datasets are generated.
  • Explored methodologies of topic modeling, including latent semantic indexing, latent Dirichlet allocation, and non-negative matrix factorization.
  • We also explore how terms are represented via a document-term matrix, and how they are scored based on coherence.
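
To make the methodology concrete, here is a minimal scikit-learn (1.x) sketch of NMF topic modeling over a document-term matrix; the toy documents are invented, and USAA's chat data and pipeline are, of course, far larger and more sophisticated:

    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "I need to reset my online banking password",
        "How do I file an auto insurance claim after an accident",
        "My credit card payment did not go through",
        "What does my auto policy cover for windshield damage",
    ]

    # Build the document-term matrix, then factor it into topic and term matrices
    vectorizer = TfidfVectorizer(stop_words="english")
    dtm = vectorizer.fit_transform(docs)
    nmf = NMF(n_components=2, random_state=0).fit(dtm)

    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(nmf.components_):
        top_terms = [terms[j] for j in topic.argsort()[-4:][::-1]]
        print(f"topic {i}: {top_terms}")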

The complete show notes can be found at twimlai.com/talk/276.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration ends on 6/28!

]]>
Today we’re joined by William Fehlman, director of data science at USAA. We caught up with William a while back to discuss:

  • His work on topic modeling, which USAA uses in various scenarios, including chat channels with members via mobile and desktop interfaces.
  • How their datasets are generated.
  • Explored methodologies of topic modeling, including latent semantic indexing, latent Dirichlet allocation, and non-negative matrix factorization.
  • We also explore how terms are represented via a document-term matrix, and how they are scored based on coherence.

The complete show notes can be found at twimlai.com/talk/276.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! Early-bird registration ends on 6/28!

]]>
44:27 clean podcast,science,technology,tech,data,intelligence,modeling,learning,bill,customer,insight,matrix,artificial,machine,ai,semantic,topic,indexing,ml,usaa,factorization,twiml,fehlman,nonnegative Today we’re joined by William Fehlman, director of data science at USAA, to discuss: • His work on topic modeling, which USAA uses in various scenarios, including member chat channels. • How their datasets are generated. • Explored methodologies of topic modeling, including latent semantic indexing, latent Dirichlet allocation, and non-negative matrix factorization. • We also explore how terms are represented via a document-term matrix, and how they are scored based on coherence. 276 full Sam Charrington
Phronesis of AI in Radiology with Judy Gichoya - TWIML Talk #275 Phronesis of AI in Radiology with Judy Gichoya Tue, 18 Jun 2019 20:46:53 +0000 Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss:

• Judy's research in “Phronesis of AI in Radiology: Superhuman meets Natural Stupidity,” reviewing the claims of “superhuman” AI performance in radiology.

• We explore potential roles in which AI can have success in radiology, along with some of the different types of biases that can manifest themselves across multiple use cases.

• We look at the CheXNet paper, which details how human and AI performance can complement and improve each other's performance for detecting pneumonia in chest X-rays.
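
For reference, CheXNet is built on a DenseNet-121 backbone; the hedged sketch below (using the older torchvision pretrained flag, and not the authors' code) shows how a similar multi-label classifier head over the 14 ChestX-ray14 findings can be set up:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained DenseNet-121 and swap in a 14-way classifier head
    model = models.densenet121(pretrained=True)
    model.classifier = nn.Linear(model.classifier.in_features, 14)

    x = torch.randn(2, 3, 224, 224)    # stand-in batch of two chest X-ray images
    probs = torch.sigmoid(model(x))    # independent probabilities, one per finding
    print(probs.shape)                 # torch.Size([2, 14])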

The complete show notes can be found at twimlai.com/talk/275.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! 

]]>
Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss:

• Judy's research in “Phronesis of AI in Radiology: Superhuman meets Natural Stupidity,” reviewing the claims of “superhuman” AI performance in radiology.

• We explore potential roles in which AI can have success in radiology, along with some of the different types of biases that can manifest themselves across multiple use cases.

• We look at the CheXNet paper, which details how human and AI performance can complement and improve each other's performance for detecting pneumonia in chest X-rays.

The complete show notes can be found at twimlai.com/talk/275.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! 

]]>
43:04 clean podcast,and,science,technology,tech,data,intelligence,health,learning,oregon,radiology,artificial,institute,machine,bias,ai,judy,ml,twiml,gichoya Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss: • Judy's research on the paper “Phronesis of AI in Radiology: Superhuman meets Natural Stupidity,” reviewing the claims of “superhuman” AI performance in radiology. • Potential roles in which AI can have success in radiology, along with some of the different types of biases that can manifest themselves across multiple use cases. 275 full Sam Charrington
The Ethics of AI-Enabled Surveillance with Karen Levy - TWIML Talk #274 The Ethics of AI-Enabled Surveillance with Karen Levy Fri, 14 Jun 2019 19:31:37 +0000 Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring. In our conversation we discuss:

• Examples of how data tracking and surveillance can be used in ways that can be abusive to various marginalized groups, including detailing her extensive research into truck driver surveillance.

• Her thoughts on how the broader society will react to the increase in surveillance,

• The unintended consequences of surveillant systems, questions surrounding hybridization of jobs and systems, and more!

The complete show notes can be found at twimlai.com/talk/274.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! 

]]>
Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring. In our conversation we discuss:

• Examples of how data tracking and surveillance can be used in ways that can be abusive to various marginalized groups, including detailing her extensive research into truck driver surveillance.

• Her thoughts on how the broader society will react to the increase in surveillance.

• The unintended consequences of surveillant systems, questions surrounding hybridization of jobs and systems, and more!

The complete show notes can be found at twimlai.com/talk/274.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! 

]]>
42:34 clean podcast,science,tools,technology,tech,data,systems,intelligence,karen,learning,drivers,artificial,levy,cornell,truck,machine,predictive,ai,monitoring,surveillance,policing,ml,twiml,policiing Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring. In our conversation, we discuss how data tracking and surveillance can be used in ways that can be abusive to various marginalized groups, including detailing her extensive research into truck driver surveillance. 274 full Sam Charrington
Supporting Rapid Model Development at Two Sigma with Matt Adereth & Scott Clark - TWIML Talk #273 Supporting Rapid Model Development at Two Sigma with Matt Adereth & Scott Clark Tue, 11 Jun 2019 17:16:47 +0000 Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss:

• The end-to-end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling.

• How Two Sigma has attacked the experimentation challenge with their platform.

• The relationship between the optimization and infrastructure teams at SigOpt.

• What motivates companies that aren’t already heavily invested in platforms, optimization, or automation to do so.

The complete show notes can be found at twimlai.com/talk/273.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST!

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

]]>
Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss:

• The end-to-end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling.

• How Two Sigma has attacked the experimentation challenge with their platform.

• The relationship between the optimization and infrastructure teams at SigOpt.

• What motivates companies that aren’t already heavily invested in platforms, optimization, or automation to do so.

The complete show notes can be found at twimlai.com/talk/273.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST!

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

]]>
49:09 clean podcast,science,matt,technology,production,tech,data,automation,intelligence,scott,learning,sigma,two,clark,artificial,machine,ai,platform,optimization,opt,sig,ml,twiml,adereth Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss: • The end to end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling. • How Two Sigma has attacked the experimentation challenge with their platform. • What motivates companies that aren’t already heavily invested in platforms, optimization or automation, to do so, and much more! 273 full Sam Charrington
Scaling Model Training with Kubernetes at Stripe with Kelley Rivoire - TWIML Talk #272 Scaling Model Training with Kubernetes at Stripe with Kelley Rivoire Thu, 06 Jun 2019 16:34:42 +0000 Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss:

• Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes."

• Stripe’s machine learning infrastructure journey, including their start from a production focus.

• Internal tools used at Stripe, including Railyard, an API built to manage model training at scale & more!

The complete show notes can be found at twimlai.com/talk/272.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST!

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

]]>
Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss:

• Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes."

• Stripe’s machine learning infrastructure journey, including their start from a production focus.

• Internal tools used at Stripe, including Railyard, an API built to manage model training at scale & more!

The complete show notes can be found at twimlai.com/talk/272.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST!

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

]]>
45:07 clean podcast,science,technology,tech,data,intelligence,learning,engineering,artificial,infrastructure,machine,ai,strata,kelley,engineer,platforms,stripe,ml,kubernetes,twiml,rivoire Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss: • Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes." • Stripe’s machine learning infrastructure journey, including their start from a production focus. • Internal tools used at Stripe, including Railyard, an API built to manage model training at scale & more! 272 full Sam Charrington
Productizing ML at Scale at Twitter with Yi Zhuang - TWIML Talk #271 Productizing ML at Scale at Twitter with Yi Zhuang Mon, 03 Jun 2019 18:05:58 +0000 Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter & Tech Lead for Machine Learning Core Environment at Twitter Cortex. In our conversation, we cover: 

• The machine learning landscape at Twitter, including the history of the Cortex team.

• Deepbird v2, which is used for model training and evaluation, and its integration with TensorFlow 2.0.

• The newly assembled “Meta” team, which is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more!

The complete show notes can be found at twimlai.com/talk/271.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST!

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

]]>
Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter & Tech Lead for Machine Learning Core Environment at Twitter Cortex. In our conversation, we cover: 

• The machine learning landscape at Twitter, including the history of the Cortex team.

• Deepbird v2, which is used for model training and evaluation, and its integration with TensorFlow 2.0.

• The newly assembled “Meta” team, which is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more!

The complete show notes can be found at twimlai.com/talk/271.

Visit twimlcon.com to learn more about the TWIMLcon: AI Platforms conference! The first 10 listeners who register get their ticket for 75% off using the discount code TWIMLFIRST!

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

]]>
49:20 clean podcast,science,training,twitter,tools,technology,tech,model,data,intelligence,20,learning,artificial,framework,machine,ai,platform,evaluation,yi,cortex,ml,tensorflow,twiml,embeddings,zhaung,deepbird Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter. In our conversation, we cover:  • The machine learning landscape at Twitter, including with the history of the Cortex team • Deepbird v2, which is used for model training and evaluation solutions, and it's integration with Tensorflow 2.0. • The newly assembled “Meta” team, that is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more! 271 full Sam Charrington
Snorkel: A System for Fast Training Data Creation with Alex Ratner - TWiML Talk #270 Snorkel: A System for Fast Training Data Creation with Alex Ratner - TWiML Talk #270 Thu, 30 May 2019 18:35:21 +0000 Today we’re joined by Alex Ratner, Ph.D. student at Stanford. In our conversation, we discuss:

• Snorkel, the open-source framework that is the successor to Stanford's DeepDive project.

• How Snorkel is used as a framework for creating training data with weak supervision techniques (see the sketch below).

• Multiple use cases for Snorkel, including how it is used by large companies like Google. 
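
For readers who want a concrete feel for the weak supervision pattern mentioned above, here is a minimal, hypothetical sketch using the open-source Snorkel labeling API; the spam/ham heuristics, example texts, and parameter choices are illustrative assumptions rather than anything discussed in the episode.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, HAM, SPAM = -1, 0, 1

@labeling_function()
def lf_contains_link(x):
    # Heuristic: messages containing URLs are often spam
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Heuristic: very short messages tend to be legitimate
    return HAM if len(x.text.split()) < 4 else ABSTAIN

# Tiny illustrative "unlabeled" dataset
df_train = pd.DataFrame({"text": [
    "check out http://example.com for free prizes",
    "ok thanks",
    "meeting moved to 3pm",
    "win cash now http://example.org",
]})

# Apply the labeling functions to produce a noisy label matrix
applier = PandasLFApplier([lf_contains_link, lf_short_message])
L_train = applier.apply(df_train)

# The label model denoises and combines the labeling functions into
# probabilistic training labels for a downstream classifier
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=500, seed=123)
probs = label_model.predict_proba(L_train)
print(probs.round(2))
```

The probabilistic labels produced by the label model can then be used to train any downstream classifier, which is the sense in which Snorkel serves as a system for creating training data.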

The complete show notes can be found at twimlai.com/talk/270.

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

]]>
Today we’re joined by Alex Ratner, Ph.D. student at Stanford. In our conversation, we discuss:

• Snorkel, the open-source framework that is the successor to Stanford's DeepDive project.

• How Snorkel is used as a framework for creating training data with weak supervision techniques.

• Multiple use cases for Snorkel, including how it is used by large companies like Google. 

The complete show notes can be found at twimlai.com/talk/270.

Follow along with the entire AI Platforms Vol 2 series at twimlai.com/aiplatforms2.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

]]>
45:42 clean podcast,science,training,alex,technology,tech,data,intelligence,learning,label,programming,artificial,ratner,machine,snorkel,ai,platforms,ml,supervised,twiml,neurips,imagenet Today we’re joined by Alex Ratner, Ph.D. student at Stanford, to discuss: • Snorkel, the open source framework that is the successor to Stanford's Deep Dive project. • How Snorkel is used as a framework for creating training data with weak supervised learning techniques. • Multiple use cases for Snorkel, including how it is used by companies like Google.  The complete show notes can be found at twimlai.com/talk/270. Follow along with AI Platforms Vol. 2 at twimlai.com/aiplatforms2. 270 full Sam Charrington
Advancing Autonomous Vehicle Development Using Distributed Deep Learning with Adrien Gaidon - TWiML Talk #269 Advancing Autonomous Vehicle Development Using Distributed Deep Learning with Adrien Gaidon Tue, 28 May 2019 18:26:49 +0000 In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss: 

• The beginning and gradual scaling up of TRI's platform.

• Their distributed deep learning methods, including their use of stock PyTorch.

• Applying DevOps practices to their research infrastructure, and much more!

The complete show notes for this episode can be found at twimlai.com/talk/269.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

]]>
In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss: 

• The beginning and gradual scaling up of TRI's platform.

• Their distributed deep learning methods, including their use of stock PyTorch.

• Applying DevOps practices to their research infrastructure, and much more!

The complete show notes for this episode can be found at twimlai.com/talk/269.

Thanks to SigOpt for their continued support of the podcast, and their sponsorship of this episode! Check out their machine learning experimentation and optimization suite, and get a free trial at twimlai.com/sigopt.

Finally, visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

]]>
50:04 clean podcast,science,2,technology,tech,cloud,data,deep,intelligence,learning,research,artificial,distributed,institute,toyota,machine,ai,adrien,scale,platforms,vol,ml,twiml,pytorch,gaidon In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss:  • The beginning and gradual scaling up of TRI's platform. • Their distributed deep learning methods, including their use of stock Pytorch, and much more! 269 full Sam Charrington
Are We Being Honest About How Difficult AI Really Is? w/ David Ferrucci - TWiML Talk #268 Are We Being Honest About How Difficult AI Really Is? w/ David Ferrucci Thu, 23 May 2019 22:31:08 +0000 Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do. In our conversation, we discuss: 

• His experience leading the team that built the IBM Watson system that won on Jeopardy.


• The role of “understanding” in the context of AI systems, and the types of commitments and investments needed to achieve even modest levels of understanding in these systems.

• His thoughts on the power of deep learning, what the path to AGI looks like, and the need for hybrid systems to get there.

The complete show notes for this episode can be found at twimlai.com/talk/268.

Visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

 

]]>
Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do. In our conversation, we discuss: 

• His experience leading the team that built the IBM Watson system that won on Jeopardy.

• The role of “understanding” in the context of AI systems, and the types of commitments and investments needed to achieve even modest levels of understanding in these systems.

• His thoughts on the power of deep learning, what the path to AGI looks like, and the need for hybrid systems to get there.

The complete show notes for this episode can be found at twimlai.com/talk/268.

Visit twimlai.com/3bday to help us celebrate TWiML's 3rd Birthday!

 

]]>
52:36 clean podcast,science,technology,tech,david,data,systems,intelligence,learning,general,bridgewater,architecture,watson,artificial,ibm,cognition,machine,hybrid,ai,jeopardy,elemental,ml,ferrucci,twiml Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do, to discuss: • The role of “understanding” in the context of AI systems, and the types of commitments and investments needed to achieve even modest levels of understanding. • His thoughts on the power of deep learning, what the path to AGI looks like, and the need for hybrid systems to get there. 568 full Sam Charrington
Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling - TWiML Talk #267 Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling Mon, 20 May 2019 19:58:52 +0000 Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, as well as VP of technologies at Qualcomm, and Fellow at the Canadian Institute for Advanced Research, or CIFAR. In our conversation, we discuss: 

• Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, Graph CNNs and Gauge Equivariant CNNs, as well as on power efficiency for AI via compression, quantization, and compilation.

• Max’s thoughts on the future of the AI industry, in particular, the relative importance of models, data and compute.

The complete show notes for this episode can be found at twimlai.com/talk/267.

Thanks to Qualcomm for sponsoring today's episode! Check out what they're up to at twimlai.com/qualcomm.

 

]]>
Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, as well as VP of technologies at Qualcomm, and Fellow at the Canadian Institute for Advanced Research, or CIFAR. In our conversation, we discuss: 

• Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, Graph CNNs and Gauge Equivariant CNNs, as well as on power efficiency for AI via compression, quantization, and compilation.

• Max’s thoughts on the future of the AI industry, in particular, the relative importance of models, data and compute.

The complete show notes for this episode can be found at twimlai.com/talk/267.

Thanks to Qualcomm for sponsoring today's episode! Check out what they're up to at twimlai.com/qualcomm.

 

]]>
01:04:26 clean podcast,science,technology,tech,data,deep,intelligence,max,learning,research,artificial,machine,gauge,ai,graph,qualcomm,ml,welling,cnns,cifar,twiml,bayensian,equivariant Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, and VP of Technologies at Qualcomm, to discuss:  • Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, Graph CNNs and Gauge Equivariant CNNs, power efficiency for AI via compression, quantization, and compilation. • Max’s thoughts on the future of the AI industry, in particular, the relative importance of models, data and com 267 full Sam Charrington
Can We Trust Scientific Discoveries Made Using Machine Learning? with Genevera Allen - TWiML Talk #266 Can We Trust Scientific Discoveries Made Using Machine Learning? with Genevera Allen Thu, 16 May 2019 16:48:55 +0000 Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University, Founder and Director of the Rice Center for Transforming Data to Knowledge, and Investigator with the Neurological Research Institute at Baylor College of Medicine.

Genevera caused quite the stir at the American Association for the Advancement of Science meeting earlier this year with her presentation “Can We Trust Data-Driven Discoveries?" In our conversation we cover:

• The goal of Genevera's talk, and what was lost in translation.

• Use cases outlining the shortcomings of current machine learning techniques.

• Reproducibility, including inference vs discovery, and the lack of terminology for many of the various reproducibility issues, & much more!

The complete show notes for this episode can be found at twimlai.com/talk/266.

 

]]>
Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University, Founder and Director of the Rice Center for Transforming Data to Knowledge, and Investigator with the Neurological Research Institute at Baylor College of Medicine.

Genevera caused quite the stir at the American Association for the Advancement of Science meeting earlier this year with her presentation “Can We Trust Data-Driven Discoveries?" In our conversation we cover:

• The goal of Genevera's talk, and what was lost in translation.

• Use cases outlining the shortcomings of current machine learning techniques.

• Reproducibility, including inference vs discovery, and the lack of terminology for many of the various reproducibility issues, & much more!

The complete show notes for this episode can be found at twimlai.com/talk/266.

 

]]>
41:54 clean podcast,science,technology,tech,data,intelligence,allen,learning,discovery,healthcare,artificial,aaas,inference,machine,ai,ml,reproducibility,twiml,genevera,eecs Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University. Genevera caused quite the stir at the American Association for the Advancement of Science meeting earlier this year with her presentation “Can We Trust Data-Driven Discoveries?" In our conversation, we discuss the goal of Genevera's talk, the issues surrounding reproducibility in Machine Learning, and much more! 266 full Sam Charrington
Creative Adversarial Networks for Art Generation with Ahmed Elgammal - TWiML Talk #265 Creative Adversarial Networks for Art Generation with Ahmed Elgammal Mon, 13 May 2019 18:25:12 +0000 Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. In my conversation with Ahmed, we discuss:

• His work on AICAN, a creative adversarial network that produces original portraits, trained on over 500 years of European canonical art.

• How complex the computational representations of the art actually are, and how he simplifies them.

• Specifics of the training process, including the various types of artwork used, and the constraints applied to the model.

The complete show notes for this episode can be found at twimlai.com/talk/265.

]]>
Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. In my conversation with Ahmed, we discuss:

• His work on AICAN, a creative adversarial network that produces original portraits, trained on over 500 years of European canonical art.

• How complex the computational representations of the art actually are, and how he simplifies them.

• Specifics of the training process, including the various types of artwork used, and the constraints applied to the model.

The complete show notes for this episode can be found at twimlai.com/talk/265.

]]>
37:13 clean podcast,science,art,creative,technology,networks,tech,data,intelligence,learning,cans,artificial,machine,ai,gans,rutgers,ahmed,ml,gan,adversarial,twiml,elgammal Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. We discuss his work on AICAN, a creative adversarial network that produces original portraits, trained with over 500 years of European canonical art. The complete show notes for this episode can be found at twimlai.com/talk/265. 265 full Sam Charrington
Diagnostic Visualization for Machine Learning with YellowBrick w/ Rebecca Bilbro - TWiML Talk #264 Diagnostic Visualization for Machine Learning with YellowBrick Fri, 10 May 2019 16:22:40 +0000 Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick.

In our conversation, Rebecca details:

• Her relationship with toolmaking, which led to the eventual creation of YellowBrick.

• Popular tools within YellowBrick, including a summary of their unit testing approach (a brief usage sketch follows below).

• Interesting use cases that she’s seen over time.

• The growth she’s seen in the community of contributors and examples of their contributions as they approach the release of YellowBrick 1.0.
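
As a rough illustration of the kind of model diagnostic visualizer YellowBrick provides, here is a minimal sketch assuming a YellowBrick 1.0-style API and a scikit-learn estimator; the dataset and model are placeholder assumptions, not examples from the conversation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport

# Any scikit-learn classifier and dataset will do; these are placeholders
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)

# The visualizer wraps the estimator and renders per-class precision,
# recall, and F1 as a heatmap, following the fit/score/show pattern
viz = ClassificationReport(model, support=True)
viz.fit(X_train, y_train)
viz.score(X_test, y_test)
viz.show()
```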

The complete show notes for this episode can be found at twimlai.com/talk/264. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

We want to better understand your views on the importance of open source and the projects and players in this space. To access the survey visit twimlai.com/pythonsurvey.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

]]>
Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick.

In our conversation, Rebecca details:

• Her relationship with toolmaking, which led to the eventual creation of YellowBrick.

• Popular tools within YellowBrick, including a summary of their unit testing approach.

• Interesting use cases that she’s seen over time.

• The growth she’s seen in the community of contributors and examples of their contributions as they approach the release of YellowBrick 1.0.

The complete show notes for this episode can be found at twimlai.com/talk/264. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

We want to better understand your views on the importance of open source and the projects and players in this space. To access the survey visit twimlai.com/pythonsurvey.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

]]>
42:34 clean podcast,science,open,technology,tech,data,intelligence,learning,community,source,python,visualization,conversation,artificial,ibm,developer,rebecca,machine,ai,platform,ml,bilbro,twiml,yellowbrick Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick. In our conversation, Rebecca details: • Her relationship with toolmaking, which led to the eventual creation of YellowBrick. • Popular tools within YellowBrick, including a summary of their unit testing approach. • Interesting use cases that she’s seen over time. 264 full Sam Charrington
Librosa: Audio and Music Processing in Python with Brian McFee - TWiML Talk #263 Librosa: Audio and Music Processing in Python with Brian McFee Thu, 09 May 2019 18:13:39 +0000 Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a python package for music and audio analysis.

Brian walks us through his experience building LibROSA, including:

• The core functions provided in the library.

• His experience working within Jupyter Notebook.

• A typical LibROSA workflow (sketched briefly below), & more!
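
To make the workflow bullet concrete, here is a minimal sketch of a typical LibROSA session, assuming a recent release that bundles example audio clips; the specific features computed are illustrative choices, not a recap of Brian's walkthrough.

```python
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load bundled example audio (mono, resampled to 22050 Hz by default)
y, sr = librosa.load(librosa.example("trumpet"))

# A few of the library's core feature extractors
mel = librosa.feature.melspectrogram(y=y, sr=sr)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
print("MFCC matrix shape:", mfcc.shape)

# Visualize the mel spectrogram on a log-power (dB) scale
fig, ax = plt.subplots()
img = librosa.display.specshow(librosa.power_to_db(mel, ref=1.0),
                               sr=sr, x_axis="time", y_axis="mel", ax=ax)
fig.colorbar(img, ax=ax, format="%+2.0f dB")
ax.set(title="Mel spectrogram")
plt.show()
```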

The complete show notes for this episode can be found at twimlai.com/talk/263.

Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

We want to better understand your views on the importance of open source and the projects and players in this space. To access the survey visit twimlai.com/pythonsurvey.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

]]>
Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a python package for music and audio analysis.

Brian walks us through his experience building LibROSA, including:

• The core functions provided in the library.

• His experience working within Jupyter Notebook.

• A typical LibROSA workflow & more!

The complete show notes for this episode can be found at twimlai.com/talk/263.

Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

We want to better understand your views on the importance of open source and the projects and players in this space. To access the survey visit twimlai.com/pythonsurvey.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

]]>
39:10 clean Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a python package for music and audio analysis. Brian walks us through his experience building LibROSA, including: • Detailing the core functions provided in the library  • His experience working in Jupyter Notebook • We explore a typical LibROSA workflow & more! The complete show notes for this episode can be found at twimlai.com/talk/26 263 full Sam Charrington
Practical Natural Language Processing with spaCy and Prodigy w/ Ines Montani - TWiML Talk #262 Practical Natural Language Processing with spaCy and Prodigy Tue, 07 May 2019 19:48:32 +0000 In this episode of PyDataSci, we’re joined by Ines Montani, co-founder of Explosion, co-developer of spaCy, and lead developer of Prodigy.

Ines and I caught up to discuss her various projects, including the aforementioned spaCy, an open-source NLP library built with a focus on industry and production use cases.
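
As a small, hypothetical illustration of the production-oriented design discussed here, the following sketch assumes the en_core_web_sm pipeline has been downloaded; the example sentence and printed attributes are illustrative, not drawn from the episode.

```python
import spacy

# Assumes the small English pipeline has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Explosion builds spaCy, an NLP library designed for production use in Berlin.")

# One pipeline call yields tokens, part-of-speech tags, and dependencies
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Named entities are available on the same Doc object
for ent in doc.ents:
    print(ent.text, ent.label_)
```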

The complete show notes for this episode can be found at twimlai.com/talk/262. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

We want to better understand your views on the importance of open source and the projects and players in this space. To access the survey visit twimlai.com/pythonsurvey.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

]]>
In this episode of PyDataSci, we’re joined by Ines Montani, co-founder of Explosion, co-developer of spaCy, and lead developer of Prodigy.

Ines and I caught up to discuss her various projects, including the aforementioned spaCy, an open-source NLP library built with a focus on industry and production use cases.

The complete show notes for this episode can be found at twimlai.com/talk/262. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

We want to better understand your views on the importance of open source and the projects and players in this space. To access the survey visit twimlai.com/pythonsurvey.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

]]>
49:39 clean podcast,science,open,technology,tech,data,language,intelligence,learning,processing,natural,community,source,python,artificial,ibm,developer,machine,ai,platform,prodigy,ecosystem,nlp,ines,ml,annotation,kubernetes,spacy,twiml,montani In this episode of PyDataSci, we’re joined by Ines Montani, Cofounder of Explosion, Co-developer of SpaCy and lead developer of Prodigy. Ines and I caught up to discuss her various projects, including the aforementioned SpaCy, an open-source NLP library built with a focus on industry and production use cases. The complete show notes for this episode can be found at twimlai.com/talk/262. Check out the rest of the PyDataSci series at twimlai.com/pydatasci. 262 full Sam Charrington
Scaling Jupyter Notebooks with Luciano Resende - TWiML Talk #261 Scaling Jupyter Notebooks with Luciano Resende Mon, 06 May 2019 17:11:44 +0000 Today we kick off PyDataSci with Luciano Resende, an Open Source AI Platform Architect at IBM and part of the Center for Open Source Data and AI Technology.

Luciano and I caught up to discuss his work on Jupyter Enterprise Gateway, a scalable way to share Jupyter notebooks and other resources in an enterprise environment. In our conversation, we discuss some of the challenges that arise using Jupyter Notebooks at scale, the role of open source projects like Jupyter Hub and Enterprise Gateway, and some potential reasons for investing in and building custom notebooks. We also explore some common requests from the community, such as tighter integration with git repositories, as well as the python-centricity of the vast Jupyter ecosystem.

The complete show notes for this episode can be found at twimlai.com/talk/261. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

 

]]>
Today we kick off PyDataSci with Luciano Resende, an Open Source AI Platform Architect at IBM and part of the Center for Open Source Data and AI Technology.

Luciano and I caught up to discuss his work on Jupyter Enterprise Gateway, a scalable way to share Jupyter notebooks and other resources in an enterprise environment. In our conversation, we discuss some of the challenges that arise using Jupyter Notebooks at scale, the role of open source projects like Jupyter Hub and Enterprise Gateway, and some potential reasons for investing in and building custom notebooks. We also explore some common requests from the community, such as tighter integration with git repositories, as well as the python-centricity of the vast Jupyter ecosystem.

The complete show notes for this episode can be found at twimlai.com/talk/261. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.

Thanks to this week's sponsor, IBM, for their support of the podcast! Visit twimlai.com/ibm to learn more about the IBM Data Science Community.

 

]]>
34:28 clean podcast,science,open,technology,tech,data,enterprise,intelligence,learning,community,source,artificial,ibm,developer,notebook,machine,ai,platform,ecosystem,luciano,ml,kubernetes,jupyter,twiml,resende Today we're joined by Luciano Resende, an Open Source AI Platform Architect at IBM, to discuss his work on Jupyter Enterprise Gateway. In our conversation, we address challenges that arise while using Jupyter Notebooks at scale and the role of open source projects like Jupyter Hub and Enterprise Gateway. We also explore some common requests like tighter integration with git repositories, as well as the python-centricity of the vast Jupyter ecosystem. 261 full Sam Charrington
Fighting Fake News and Deep Fakes with Machine Learning w/ Delip Rao - TWiML Talk #260 Fighting Fake News and Deep Fakes with Machine Learning w/ Delip Rao Fri, 03 May 2019 18:47:29 +0000 Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge.

Our conversation begins with the origin story of the Fake News Challenge, including Delip’s initial motivations for the project, and what some of his key takeaways were from that experience. We then dive into a discussion about the generation and detection of artificial content, including “fake news” and “deep fakes.” We discuss the state of generation and detection for text, video, and audio, the key challenges in each of these modalities, the role of GANs on both sides of the equation, and other potential solutions. Finally, we discuss Delip’s new book, Natural Language Processing with PyTorch and his philosophy behind writing it.

The complete show notes for this episode can be found at https://twimlai.com/talk/260.

For more from the AI Conference NY series, visit twimlai.com/nyai19.

Thanks to our friends at HPE for sponsoring this week's series of shows from the O’Reilly AI Conference in New York City! For more information on HPE InfoSight, visit twimlai.com/hpe.

 

]]>
Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge.

Our conversation begins with the origin story of the Fake News Challenge, including Delip’s initial motivations for the project, and what some of his key takeaways were from that experience. We then dive into a discussion about the generation and detection of artificial content, including “fake news” and “deep fakes.” We discuss the state of generation and detection for text, video, and audio, the key challenges in each of these modalities, the role of GANs on both sides of the equation, and other potential solutions. Finally, we discuss Delip’s new book, Natural Language Processing with PyTorch and his philosophy behind writing it.

The complete show notes for this episode can be found at https://twimlai.com/talk/260.

For more from the AI Conference NY series, visit twimlai.com/nyai19.

Thanks to our friends at HPE for sponsoring this week's series of shows from the O’Reilly AI Conference in New York City! For more information on HPE InfoSight, visit twimlai.com/hpe.

 

]]>
58:40 clean podcast,science,news,technology,tech,data,language,intelligence,learning,processing,natural,foundation,generation,artificial,challenge,fake,machine,ai,gans,ml,rao,twiml,pytorch,delip Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge. In our conversation, we discuss the generation and detection of artificial content, including “fake news” and “deep fakes,” the state of generation and detection for text, video, and audio, the key challenges in each of these modalities, the role of GANs on both sides of the equation, and other potential solutio 260 full Sam Charrington
Maintaining Human Control of Artificial Intelligence with Joanna Bryson - TWiML Talk #259 Maintaining Human Control of Artificial Intelligence with Joanna Bryson Wed, 01 May 2019 19:25:50 +0000 Today we’re joined by Joanna Bryson, Reader at the University of Bath.

I was fortunate to catch up with Joanna at the AI Conference, where she presented on “Maintaining Human Control of Artificial Intelligence,” focusing on technological and policy mechanisms that could be used to achieve that goal. In our conversation, we explore our current understanding of “natural intelligence” and how it can inform the development of AI, the context in which she uses the term “human control” and its implications, and the meaning of and need to apply “DevOps” principles when developing AI systems. This was a fun one!

The complete show notes for this episode can be found at https://twimlai.com/talk/259.

For more from the AI Conference NY series, visit twimlai.com/nyai19.

Thanks to our friends at HPE for sponsoring this week's series of shows from the O’Reilly AI Conference in New York City! For more information on HPE InfoSight, visit twimlai.com/hpe.

]]>
Today we’re joined by Joanna Bryson, Reader at the University of Bath.

I was fortunate to catch up with Joanna at the AI Conference, where she presented on “Maintaining Human Control of Artificial Intelligence,” focusing on technological and policy mechanisms that could be used to achieve that goal. In our conversation, we explore our current understanding of “natural intelligence” and how it can inform the development of AI, the context in which she uses the term “human control” and its implications, and the meaning of and need to apply “DevOps” principles when developing AI systems. This was a fun one!

The complete show notes for this episode can be found at https://twimlai.com/talk/259.

For more from the AI Conference NY series, visit twimlai.com/nyai19.

Thanks to our friends at HPE for sponsoring this week's series of shows from the O’Reilly AI Conference in New York City! For more information on HPE InfoSight, visit twimlai.com/hpe.

]]>
38:11 clean podcast,science,conference,technology,tech,data,systems,enterprise,intelligence,policy,learning,control,human,natural,oreilly,artificial,hewlett,packard,machine,ai,bryson,joanna,ml,devops,hpe,twiml Today we’re joined by Joanna Bryson, Reader at the University of Bath. I was fortunate to catch up with Joanna at the conference, where she presented on “Maintaining Human Control of Artificial Intelligence." In our conversation, we explore our current understanding of “natural intelligence” and how it can inform the development of AI, the context in which she uses the term “human control” and its implications, and the meaning of and need to apply “DevOps” principles when developing AI sy 259 full Sam Charrington
Intelligent Infrastructure Management with Pankaj Goyal & Rochna Dhand - TWiML Talk #258 Intelligent Infrastructure Management with Pankaj Goyal & Rochna Dhand Mon, 29 Apr 2019 17:58:43 +0000 Today we kick off our AI conference NY series with Pankaj Goyal, VP for AI & HPC product management at HPE, and Rochna Dhand, director of product management for HPE InfoSight.


Today we get things kicked off with Pankaj Goyal, VP for AI & HPC product management at HPE, and Rochna Dhand, director of product management for HPE InfoSight. In our conversation, Pankaj shares some examples of the kind of AI projects HPE is working with customers on And Rochna details hows HPE’s Infosight helps IT organizations better manage and ensure the health of an enterprise’s IT infrastructure using machine learning. We discuss the key use cases addressed by InfoSight, the types of models it uses for its analysis and some of the results seen in real-world deployments.

The complete show notes for this episode can be found at https://twimlai.com/talk/258.

For more from the AI Conference NY series, visit twimlai.com/nyai19.

Thanks to our friends at HPE for sponsoring this week's series of shows from the O’Reilly AI Conference in New York City! For more information on HPE InfoSight, visit twimlai.com/hpe.

]]>
Today we kick off our AI conference NY series with Pankaj Goyal, VP for AI & HPC product management at HPE, and Rochna Dhand, director of product management for HPE InfoSight.

In our conversation, Pankaj shares some examples of the kinds of AI projects HPE is working on with customers, and Rochna details how HPE’s InfoSight helps IT organizations better manage and ensure the health of an enterprise’s IT infrastructure using machine learning. We discuss the key use cases addressed by InfoSight, the types of models it uses for its analysis, and some of the results seen in real-world deployments.

The complete show notes for this episode can be found at https://twimlai.com/talk/258.

For more from the AI Conference NY series, visit twimlai.com/nyai19.

Thanks to our friends at HPE for sponsoring this week's series of shows from the O’Reilly AI Conference in New York City! For more information on HPE InfoSight, visit twimlai.com/hpe.

]]>
44:49 clean podcast,science,conference,technology,it,tech,data,enterprise,intelligence,learning,oreilly,operations,artificial,pankaj,hewlett,packard,machine,ai,ml,goyal,hpe,twiml,rochna,dhand Today we're joined by Pankaj Goyal and Rochna Dhand, to discuss HPE InfoSight. In our conversation, Pankaj gives a look into how HPE as a company views AI, from their customers to the future of AI at HPE through investment. Rocha details the role of HPE’s Infosight in deploying AI operations at an enterprise level, including a look at where it fits into the infrastructure for their current customer base, along with a walkthrough of how InfoSight is deployed in a real-world use case. 258 full Sam Charrington
Organizing for Successful Data Science at Stitch Fix with Eric Colson - TWiML Talk #257 Organizing for Successful Data Science at Stitch Fix with Eric Colson Fri, 26 Apr 2019 16:26:18 +0000 For the final episode of our Strata Data series, we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the conference explored “How to make fewer bad decisions.”

Our discussion focuses on the three key organizational principles for data science teams that he’s developed at Stitch Fix. Along the way, we also talk through the various roles data science plays at the company and explore a few of the 800+ algorithms in use there, spanning recommendations, inventory management, demand forecasting, and clothing design. We discuss the role of Stitch Fix’s platforms team in supporting the data science organization, and his unique perspective on how to identify platform features.

The complete show notes for this episode can be found at https://twimlai.com/talk/257.

For more from the Strata Data conference series, visit twimlai.com/stratasf19.

I want to send a quick thanks to our friends at Cloudera for their sponsorship of this series of podcasts from the Strata Data Conference, which they present along with O’Reilly Media. Cloudera’s long been a supporter of the podcast; in fact, they sponsored the very first episode of TWiML Talk, recorded back in 2016. Since that time Cloudera has continued to invest in and build out its platform, which already securely hosts huge volumes of enterprise data, to provide enterprise customers with a modern environment for machine learning and analytics that works both in the cloud as well as the data center. In addition, Cloudera Fast Forward Labs provides research and expert guidance that helps enterprises understand the realities of building with AI technologies without needing to hire an in-house research team. To learn more about what the company is up to and how they can help, visit Cloudera’s Machine Learning resource center at cloudera.com/ml.

]]>
For the final episode of our Strata Data series, we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the conference explored “How to make fewer bad decisions.”

Our discussion focuses on the three key organizational principles for data science teams that he’s developed at Stitch Fix. Along the way, we also talk through the various roles data science plays at the company and explore a few of the 800+ algorithms in use there, spanning recommendations, inventory management, demand forecasting, and clothing design. We discuss the role of Stitch Fix’s platforms team in supporting the data science organization, and his unique perspective on how to identify platform features.

The complete show notes for this episode can be found at https://twimlai.com/talk/257.

For more from the Strata Data conference series, visit twimlai.com/stratasf19.

I want to send a quick thanks to our friends at Cloudera for their sponsorship of this series of podcasts from the Strata Data Conference, which they present along with O’Reilly Media. Cloudera’s long been a supporter of the podcast; in fact, they sponsored the very first episode of TWiML Talk, recorded back in 2016. Since that time Cloudera has continued to invest in and build out its platform, which already securely hosts huge volumes of enterprise data, to provide enterprise customers with a modern environment for machine learning and analytics that works both in the cloud as well as the data center. In addition, Cloudera Fast Forward Labs provides research and expert guidance that helps enterprises understand the realities of building with AI technologies without needing to hire an in-house research team. To learn more about what the company is up to and how they can help, visit Cloudera’s Machine Learning resource center at cloudera.com/ml.

]]>
52:38 clean podcast,science,clothing,technology,to,production,tech,data,eric,business,intelligence,learning,end,artificial,decisions,machine,ai,retail,algorithms,platforms,ml,algorithm,colson,stitchfix,twiml Today we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the Strata Data conference explored “How to make fewer bad decisions.” Our discussion focuses in on the three key organizational principles for data science teams that he’s developed while at Stitch Fix. Along the way, we also talk through various roles data science plays, exploring a few of the 800+ algorithms in use at the company spanning recommendations, inventory management, demand forecasting, a 257 full Sam Charrington
End-to-End Data Science to Drive Business Decisions at LinkedIn with Burcu Baran - TWiML Talk #256 End-to-End Data Science to Drive Business Decisions at LinkedIn with Burcu Baran Wed, 24 Apr 2019 17:45:54 +0000 In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn.

At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to drive business decisions,” which outlines how LinkedIn manages their entire machine learning production process. In our conversation, Burcu details each phase of the process, including problem formulation, monitoring features, A/B testing and more. We also discuss how her “horizontal” team works with other more “vertical” teams within LinkedIn, various challenges that arise when training and modeling such as data leakage and interpretability, best practices when trying to deal with data partitioning at scale, and of course, the need for a platform that reduces the manual pieces of this process, promoting efficiency.

The complete show notes for this episode can be found at https://twimlai.com/talk/256.

For more from the Strata Data conference series, visit twimlai.com/stratasf19.

I want to send a quick thanks to our friends at Cloudera for their sponsorship of this series of podcasts from the Strata Data Conference, which they present along with O’Reilly Media. Cloudera’s long been a supporter of the podcast; in fact, they sponsored the very first episode of TWiML Talk, recorded back in 2016. Since that time Cloudera has continued to invest in and build out its platform, which already securely hosts huge volumes of enterprise data, to provide enterprise customers with a modern environment for machine learning and analytics that works both in the cloud as well as the data center. In addition, Cloudera Fast Forward Labs provides research and expert guidance that helps enterprises understand the realities of building with AI technologies without needing to hire an in-house research team. To learn more about what the company is up to and how they can help, visit Cloudera’s Machine Learning resource center at cloudera.com/ml.

I’d also like to send a huge thanks to LinkedIn for their continued support and sponsorship of the show! Now that I’ve had a chance to interview several of the folks on LinkedIn’s Data Science and Engineering teams, it’s really put into context the complexity and scale of the problems that they get to work on in their efforts to create enhanced economic opportunities for every member of the global workforce. AI and ML are integral aspects of almost every product LinkedIn builds for its members and customers and their massive, highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit engineering.linkedin.com/blog.

]]>
In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn.

At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to drive business decisions,” which outlines how LinkedIn manages their entire machine learning production process. In our conversation, Burcu details each phase of the process, including problem formulation, monitoring features, A/B testing and more. We also discuss how her “horizontal” team works with other more “vertical” teams within LinkedIn, various challenges that arise when training and modeling such as data leakage and interpretability, best practices when trying to deal with data partitioning at scale, and of course, the need for a platform that reduces the manual pieces of this process, promoting efficiency.

The complete show notes for this episode can be found at https://twimlai.com/talk/256.

For more from the Strata Data conference series, visit twimlai.com/stratasf19.

I want to send a quick thanks to our friends at Cloudera for their sponsorship of this series of podcasts from the Strata Data Conference, which they present along with O’Reilly Media. Cloudera’s long been a supporter of the podcast; in fact, they sponsored the very first episode of TWiML Talk, recorded back in 2016. Since that time Cloudera has continued to invest in and build out its platform, which already securely hosts huge volumes of enterprise data, to provide enterprise customers with a modern environment for machine learning and analytics that works both in the cloud as well as the data center. In addition, Cloudera Fast Forward Labs provides research and expert guidance that helps enterprises understand the realities of building with AI technologies without needing to hire an in-house research team. To learn more about what the company is up to and how they can help, visit Cloudera’s Machine Learning resource center at cloudera.com/ml.

I’d also like to send a huge thanks to LinkedIn for their continued support and sponsorship of the show! Now that I’ve had a chance to interview several of the folks on LinkedIn’s Data Science and Engineering teams, it’s really put into context the complexity and scale of the problems that they get to work on in their efforts to create enhanced economic opportunities for every member of the global workforce. AI and ML are integral aspects of almost every product LinkedIn builds for its members and customers and their massive, highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit engineering.linkedin.com/blog.

]]>
49:51 clean podcast,science,technology,to,production,tech,data,business,intelligence,modeling,learning,problem,end,testing,artificial,decisions,machine,ai,platform,ab,leakage,ml,formulation,twiml,interpretability In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn. At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to drive business decisions,” which outlines how LinkedIn manages their entire machine learning production process. In our conversation, Burcu details each phase of the process, including problem formulation, monitoring features, A/B testing and more. 256 full Sam Charrington
Learning with Limited Labeled Data with Shioulin Sam - TWiML Talk #255 Learning with Limited Labeled Data with Shioulin Sam Mon, 22 Apr 2019 22:11:47 +0000 Today, in the first episode of our Strata Data conference series, we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs.

Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited Labeled Data,” which explores active learning as a means to build applications requiring only a relatively small set of labeled data. We start our conversation with a review of active learning and some of the reasons why it’s recently become an interesting technology for folks building systems based on deep learning. We then discuss some of the differences between active learning approaches or implementations, and some of the common requirements of an active learning system. Finally, we touch on some packaged offerings in the marketplace that include active learning, including Amazon’s SageMaker Ground Truth, and review Shioulin’s tips for getting started with the technology.
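
To ground the idea, here is a minimal, hypothetical sketch of the uncertainty-sampling loop at the heart of many active learning systems, written with scikit-learn on synthetic data; it illustrates the general pattern only and is not code from the CFFL report.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a mostly unlabeled dataset
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = np.arange(20)               # small seed set we already have labels for
unlabeled = np.arange(20, len(X))     # pool we could pay to label

model = LogisticRegression(max_iter=1000)

for _ in range(10):
    model.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the points the model is least confident about
    probs = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)
    query = unlabeled[np.argsort(-uncertainty)[:10]]
    # In a real system these labels would come from a human annotator
    labeled = np.concatenate([labeled, query])
    unlabeled = np.setdiff1d(unlabeled, query)

print(f"Labeled {len(labeled)} of {len(X)} examples after 10 query rounds")
```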

The complete show notes for this episode can be found at https://twimlai.com/talk/255.

For more from the Strata Data conference series, visit twimlai.com/stratasf19.

I want to send a quick thanks to our friends at Cloudera for their sponsorship of this series of podcasts from the Strata Data Conference, which they present along with O’Reilly Media. Cloudera’s long been a supporter of the podcast; in fact, they sponsored the very first episode of TWiML Talk, recorded back in 2016. Since that time Cloudera has continued to invest in and build out its platform, which already securely hosts huge volumes of enterprise data, to provide enterprise customers with a modern environment for machine learning and analytics that works both in the cloud and in the data center. In addition, Cloudera Fast Forward Labs provides research and expert guidance that helps enterprises understand the realities of building with AI technologies without needing to hire an in-house research team. To learn more about what the company is up to and how they can help, visit Cloudera’s Machine Learning resource center at cloudera.com/ml.

]]>
Today, in the first episode of our Strata Data conference series, we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs.

Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited Label Data,” which explores active learning as a means to build applications requiring only a relatively small set of labeled data. We start our conversation with a review of active learning and some of the reasons why it’s recently become an interesting technology for folks building systems based on deep learning. We then discuss some of the differences between active learning approaches or implementations, and some of the common requirements of an active learning system. Finally, we touch on some packaged offerings in the marketplace that include active learning, including Amazon’s SageMaker Ground Truth, and review Shioulin’s tips for getting started with the technology.

The complete show notes for this episode can be found at https://twimlai.com/talk/255.

For more from the Strata Data conference series, visit twimlai.com/stratasf19.

I want to send a quick thanks to our friends at Cloudera for their sponsorship of this series of podcasts from the Strata Data Conference, which they present along with O’Reilly Media. Cloudera’s long been a supporter of the podcast; in fact, they sponsored the very first episode of TWiML Talk, recorded back in 2016. Since that time Cloudera has continued to invest in and build out its platform, which already securely hosts huge volumes of enterprise data, to provide enterprise customers with a modern environment for machine learning and analytics that works both in the cloud and in the data center. In addition, Cloudera Fast Forward Labs provides research and expert guidance that helps enterprises understand the realities of building with AI technologies without needing to hire an in-house research team. To learn more about what the company is up to and how they can help, visit Cloudera’s Machine Learning resource center at cloudera.com/ml.

]]>
44:36 clean podcast,science,technology,tech,data,intelligence,learning,sam,fast,artificial,machine,forward,ai,strata,active,limited,labs,ml,cloudera,labeled,twiml,shioulin Today we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs. Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited Label Data,” which explores active learning as a means to build applications requiring only a relatively small set of labeled data. We start our conversation with a review of active learning and some of the reasons why it’s recently become an interesting technology for folks building systems based on deep learning 255 full Sam Charrington
cuDF, cuML & RAPIDS: GPU Accelerated Data Science with Paul Mahler - TWiML Talk #254 cuDF, cuML & RAPIDS: GPU Accelerated Data Science with Paul Mahler Fri, 19 Apr 2019 17:33:30 +0000 Today we're joined by Paul Mahler, senior data scientist and technical product manager for machine learning at NVIDIA.

In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional data science workflows and machine learning tasks. We dig into the various subprojects like cuDF and cuML that make up the RAPIDS ecosystem, as well as the role of lower-level libraries like mlprims and the relationship to other open-source projects like Scikit-learn, XGBoost and Dask.
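For a rough sense of the “drop-in” experience Paul describes, here’s a small hypothetical sketch using cuDF and cuML. It assumes an NVIDIA GPU with the RAPIDS libraries installed, and the file name and column names are placeholders I made up.

import cudf
from cuml.cluster import KMeans

# Sketch of a cuDF + cuML workflow; requires an NVIDIA GPU with RAPIDS installed.
# "transactions.csv" and its column names are hypothetical placeholders.
df = cudf.read_csv("transactions.csv")            # pandas-like API; the data lives in GPU memory
df = df.dropna(subset=["amount", "age"])
features = df[["amount", "age"]].astype("float32")

km = KMeans(n_clusters=5, random_state=0)         # scikit-learn-like estimator, GPU-backed
km.fit(features)
df["cluster"] = km.labels_
print(df.groupby("cluster")["amount"].mean())

The appeal is that the code reads almost exactly like pandas and scikit-learn, while the heavy lifting happens on the GPU.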

The complete show notes for this episode can be found at https://twimlai.com/talk/254.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
Today we're joined by Paul Mahler, senior data scientist and technical product manager for machine learning at NVIDIA.

In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional data science workflows and machine learning tasks. We dig into the various subprojects like cuDF and cuML that make up the RAPIDS ecosystem, as well as the role of lower-level libraries like mlprims and the relationship to other open-source projects like Scikit-learn, XGBoost and Dask.

The complete show notes for this episode can be found at https://twimlai.com/talk/254.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
38:13 clean science,paul,technology,tech,data,intelligence,dell,learning,sam,rapids,artificial,machine,ai,nvidia,mahler,opensource,acceleration,gpu,ml,learining,charrington Today we're joined by Paul Mahler, senior data scientist and technical product manager for ML at NVIDIA. In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional data science workflows and ML tasks. We dig into the various subprojects like cuDF and cuML that make up the RAPIDS ecosystem, as well as the role of lower-level libraries like mlprims and the relationship to other open-source projects like Scikit-learn, XGBoost and Dask. 254 full Sam Charrington
Edge AI for Smart Manufacturing with Trista Chen - TWiML Talk #253 Edge AI for Smart Manufacturing with Trista Chen Thu, 18 Apr 2019 17:26:20 +0000 Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec.

At GTC, Trista spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond.” In our conversation, we discuss a few of the challenges that Industry 4.0 initiatives aim to address and dig into a few of the various use cases she’s worked on, such as the deployment of machine learning in an industrial setting to perform defect detection, safety improvement, demand forecasting, and more. We also dig into the role of edge, cloud, and what she calls hybrid AI, which is inference happening both in the cloud and on the edge concurrently. Finally, we discuss the challenges associated with estimating the ROI of industrial AI projects and the need that often arises to redefine the problem to understand the ultimate impact of the solution.

The complete show notes for this episode can be found at https://twimlai.com/talk/253.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec.

At GTC, Trista spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond.” In our conversation, we discuss a few of the challenges that Industry 4.0 initiatives aim to address and dig into a few of the various use cases she’s worked on, such as the deployment of machine learning in an industrial setting to perform defect detection, safety improvement, demand forecasting, and more. We also dig into the role of edge, cloud, and what she calls hybrid AI, which is inference happening both in the cloud and on the edge concurrently. Finally, we discuss the challenges associated with estimating the ROI of industrial AI projects and the need that often arises to redefine the problem to understand the ultimate impact of the solution.

The complete show notes for this episode can be found at https://twimlai.com/talk/253.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
38:39 clean podcast,edge,science,technology,tech,smart,cloud,data,intelligence,corp,dell,vision,learning,manufacturing,computer,technologies,artificial,machine,hybrid,ai,detection,federated,prediction,ml,2019,gtc,twiml,inventec Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec, who spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond” at GTC. In our conversation, we discuss the challenges that Industry 4.0 initiatives aim to address and dig into a few of the various use cases she’s worked on, such as the deployment of ML in an industrial setting to perform various tasks. We also discuss the challenges associated with estimating the ROI of industrial AI projects. 253 full Sam Charrington
Machine Learning for Security and Security for Machine Learning with Nicole Nichols - TWiML Talk #252 Machine Learning for Security and Security for Machine Learning with Nicole Nichols Tue, 16 Apr 2019 17:01:59 +0000 Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab.

Nicole joined me to discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine Learning.” Our conversation explores the two use cases she presented, insider threat detection, and software fuzz testing. We discuss the effectiveness of standard and bidirectional RNN language models for detecting malicious activity within the Los Alamos National Laboratory cybersecurity dataset, the augmentation of software fuzzing techniques using deep learning, and light-based adversarial attacks on image classification systems. I’d love to hear your thoughts on these use cases!

The complete show notes for this episode can be found at https://twimlai.com/talk/252.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab.

Nicole joined me to discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine Learning.” Our conversation explores the two use cases she presented, insider threat detection, and software fuzz testing. We discuss the effectiveness of standard and bidirectional RNN language models for detecting malicious activity within the Los Alamos National Laboratory cybersecurity dataset, the augmentation of software fuzzing techniques using deep learning, and light-based adversarial attacks on image classification systems. I’d love to hear your thoughts on these use cases!

The complete show notes for this episode can be found at https://twimlai.com/talk/252.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
41:56 clean podcast,science,technology,tech,data,language,security,intelligence,dell,modeling,pacific,learning,research,lab,insider,testing,fuzz,northwest,nicole,nichols,artificial,machine,ai,threat,detection,nlp,ml,2019,gtc,twiml Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab. We discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine Learning.” We explore two use cases, insider threat detection and software fuzz testing, discussing the effectiveness of standard and bidirectional RNN language models for detecting malicious activity, the augmentation of software fuzzing techniques using deep learning, and much more. 252 full Sam Charrington
Domain Adaptation and Generative Models for Single Cell Genomics with Gerald Quon - TWiML Talk #251 Domain Adaptation and Generative Models for Single Cell Genomics with Gerald Quon Mon, 15 Apr 2019 19:48:17 +0000 Today we’re joined by Gerald Quon, assistant professor in the Molecular and Cellular Biology department at UC Davis.

Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores single cell genomics as a means of disease identification for treatment. In our conversation, we discuss how Gerald and his team use deep learning to generate novel insights across diseases, the different types of data that was used, and the development of ‘nested’ Generative Models for single cell measurement.

The complete show notes for this episode can be found at https://twimlai.com/talk/251.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
Today we’re joined by Gerald Quon, assistant professor in the Molecular and Cellular Biology department at UC Davis.

Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores single cell genomics as a means of disease identification for treatment. In our conversation, we discuss how Gerald and his team use deep learning to generate novel insights across diseases, the different types of data that was used, and the development of ‘nested’ Generative Models for single cell measurement.

The complete show notes for this episode can be found at https://twimlai.com/talk/251.

Visit twimlai.com/gtc19 for more from our GTC 2019 series.

To learn more about Dell Precision workstations, and some of the ways they’re being used by customers in industries like Media and Entertainment, Engineering and Manufacturing, Healthcare and Life Sciences, Oil and Gas, and Financial services, visit Dellemc.com/Precision.

]]>
32:24 clean podcast,science,gerald,conference,technology,tech,data,deep,intelligence,biology,dell,learning,cell,artificial,domain,single,genomics,adaptation,machine,ai,nvidia,cellular,gpu,molecular,ml,2019,quon,gtc,twiml Today we’re joined by Gerald Quon, assistant professor at UC Davis. Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores single cell genomics as a means of disease identification for treatment. In our conversation, we discuss how he uses deep learning to generate novel insights across diseases, the different types of data that was used, and the development of ‘nested’ Generative Models for single cell measurement. 251 full Sam Charrington
Mapping Dark Matter with Bayesian Neural Networks w/ Yashar Hezaveh - TWiML Talk #250 Mapping Dark Matter with Bayesian Neural Networks w/ Yashar Hezaveh Thu, 11 Apr 2019 19:01:55 +0000 You might have seen the news yesterday that MIT researcher Katie Bouman produced the first image of a black hole. What’s been less reported is that the algorithm she developed to accomplish this is based on machine learning. Machine learning is having a huge impact in the fields of astronomy and astrophysics, and I’m excited to bring you interviews with some of the people innovating in this area.

Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal, and Research Fellow at the Center for Computational Astrophysics at Flatiron Institute.

Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, Yashar and I discuss how machine learning can be applied to undistort images, including some of the various techniques used and how the data is prepared to get the best results. We also discuss the intertwined roles of simulation and machine learning in generating images, incorporating other techniques such as domain transfer or GANs, and how he assesses the results of this project.

For even more on this topic, I’d also suggest checking out the following interviews: TWiML Talk #117 with Chris Shallue, where we discuss the discovery of exoplanets; TWiML Talk #184 with Viviana Acquaviva, where we explore dark energy and star formation; and, if you want to go way back, TWiML Talk #5 with Joshua Bloom, which provides a great overview of the application of ML in astronomy.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/250.

]]>
You might have seen the news yesterday that MIT researcher Katie Bouman produced the first image of a black hole. What’s been less reported is that the algorithm she developed to accomplish this is based on machine learning. Machine learning is having a huge impact in the fields of astronomy and astrophysics, and I’m excited to bring you interviews with some of the people innovating in this area.

Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal, and Research Fellow at the Center for Computational Astrophysics at Flatiron Institute.

Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, Yashar and I discuss how machine learning can be applied to undistort images, including some of the various techniques used and how the data is prepared to get the best results. We also discuss the intertwined roles of simulation and machine learning in generating images, incorporating other techniques such as domain transfer or GANs, and how he assesses the results of this project.

For even more on this topic, I’d also suggest checking out the following interviews: TWiML Talk #117 with Chris Shallue, where we discuss the discovery of exoplanets; TWiML Talk #184 with Viviana Acquaviva, where we explore dark energy and star formation; and, if you want to go way back, TWiML Talk #5 with Joshua Bloom, which provides a great overview of the application of ML in astronomy.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/250.

]]>
36:04 clean podcast,science,lens,technology,tech,data,deep,intelligence,learning,satellite,artificial,gravity,machine,ai,gans,hubble,gravitational,lensing,astrophysics,ml,yashar,cnns,twiml,hezaveh Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal. Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, Yashar and I discuss how ML can be applied to undistort images, the intertwined roles of simulation and ML in generating images, incorporating other techniques such as domain transfer or GANs, and how he assesses the results of this project. 250 full Sam Charrington
Deep Learning for Population Genetic Inference with Dan Schrider - TWiML Talk #249 Deep Learning for Population Genetic Inference with Dan Schrider Tue, 09 Apr 2019 03:39:27 +0000 Today we’re joined by Dan Schrider, assistant professor in the department of genetics at The University of North Carolina at Chapel Hill.

My discussion with Dan starts with an overview of population genomics and from there digs into his application of machine learning in the field, allowing us to, for example, better understand population size changes and gene flow from DNA sequences. We then dig into Dan’s paper “The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference,” published in the journal Molecular Biology and Evolution, which examines the idea that CNNs are capable of outperforming expert-derived statistical methods for some key problems in the field.
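To make the idea of pointing a CNN directly at population genetic data a bit more tangible, here’s a toy sketch with simulated stand-in data, not Dan’s actual data or architecture: a small network over a genotype matrix with one row per sampled chromosome and one column per segregating site, trained to label each simulation (say, sweep vs. neutral).

import numpy as np
import tensorflow as tf

# Toy CNN over a genotype matrix (rows = sampled chromosomes, columns = sites).
# The random data and labels below are placeholders, not the paper's data or architecture.
n_sims, n_haplotypes, n_sites = 1000, 40, 200
X = np.random.randint(0, 2, size=(n_sims, n_haplotypes, n_sites, 1)).astype("float32")
y = np.random.randint(0, 2, size=(n_sims,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (1, 5), activation="relu", input_shape=(n_haplotypes, n_sites, 1)),
    tf.keras.layers.MaxPooling2D((1, 2)),
    tf.keras.layers.Conv2D(64, (1, 5), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, validation_split=0.2)

In real work the training examples come from coalescent simulations under known demographic or selection scenarios rather than random placeholders like these.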

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/249.

]]>
Today we’re joined by Dan Schrider, assistant professor in the department of genetics at The University of North Carolina at Chapel Hill.

My discussion with Dan starts with an overview of population genomics and from there digs into his application of machine learning in the field, allowing us to, for example, better understand population size changes and gene flow from DNA sequences. We then dig into Dan’s paper “The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference,” published in the journal Molecular Biology and Evolution, which examines the idea that CNNs are capable of outperforming expert-derived statistical methods for some key problems in the field.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/249.

]]>
49:53 clean podcast,dan,of,science,technology,networks,tech,data,north,genetic,intelligence,biology,learning,university,carolina,artificial,selective,inference,genomics,neural,machine,ai,vector,dna,population,molecular,sweeps,sequences,ml,convolutional,cnns,schrider Today we’re joined by Dan Schrider, assistant professor in the department of genetics at UNC Chapel Hill. My discussion with Dan starts with an overview of population genomics, looking into his application of ML in the field. We then dig into Dan’s paper “The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference,” which examines the idea that CNNs are capable of outperforming expert-derived statistical methods for some key problems in the field. 249 full Sam Charrington
Empathy in AI with Rob Walker - TWiML Talk #248 Empathy in AI with Rob Walker Fri, 05 Apr 2019 18:31:23 +0000 Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems.

Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems. In our conversation, we dig into the role empathy plays in consumer-facing human-AI interactions, the differences between empathy and ethics, and a few examples of ways empathy should be considered when building enterprise AI systems.

What do you think? Should empathy be a consideration in AI systems? If so, do any examples jump out for you of where and how it should be applied? I’d love to hear your thoughts on the topic! Feel free to shoot me a tweet at @samcharrington or leave a comment on the show notes page with your thoughts.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/248.

]]>
Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems.

Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems. In our conversation, we dig into the role empathy plays in consumer-facing human-AI interactions, the differences between empathy and ethics, and a few examples of ways empathy should be considered when building enterprise AI systems.

What do you think? Should empathy be a consideration in AI systems? If so, do any examples jump out for you of where and how it should be applied? I’d love to hear your thoughts on the topic! Feel free to shoot me a tweet at @samcharrington or leave a comment on the show notes page with your thoughts.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/248.

]]>
41:26 clean podcast,science,rob,technology,tech,data,intelligence,learning,walker,ethics,artificial,empathy,machine,ai,ml,pegaworld,pegasystems,twiml Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems. Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems. In our conversation, we dig into the role empathy plays in consumer-facing human-AI interactions, the differences between empathy and ethics, and a few examples of ways empathy should be considered when building enterprise AI systems. 248 full Sam Charrington
Benchmarking Custom Computer Vision Services at Urban Outfitters with Tom Szumowski - TWiML Talk #247 Benchmarking Custom Computer Vision Services at Urban Outfitters with Tom Szumowski Wed, 03 Apr 2019 21:24:29 +0000 Today we’re joined by Tom Szumowski, Data Scientist at URBN, the parent company of Urban Outfitters, Anthropologie, and other consumer fashion brands.

Tom and I caught up recently to discuss his project “Exploring Custom Vision Services for Automated Fashion Product Attribution.” We start our discussion with a definition of the product attribution problem in retail and fashion, and a discussion of the challenges it offers to data scientists. We then look at the process Tom and his team took to build custom attribution models, and the results of their evaluation of various custom vision APIs for this purpose, with a focus on the various roadblocks and lessons he and his team encountered along the way.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/247.

]]>
Today we’re joined by Tom Szumowski, Data Scientist at URBN, the parent company of Urban Outfitters, Anthropologie, and other consumer fashion brands.

Tom and I caught up recently to discuss his project “Exploring Custom Vision Services for Automated Fashion Product Attribution.” We start our discussion with a definition of the product attribution problem in retail and fashion, and a discussion of the challenges it offers to data scientists. We then look at the process Tom and his team took to build custom attribution models, and the results of their evaluation of various custom vision APIs for this purpose, with a focus on the various roadblocks and lessons he and his team encountered along the way.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/247.

]]>
50:38 clean podcast,science,technology,tech,urban,data,intelligence,fashion,vision,tom,learning,computer,product,artificial,machine,attribution,ai,api,outfitters,szumowski,ml,apis,urbn,twiml,automl Today we’re joined by Tom Szumowski, Data Scientist at URBN, parent company of Urban Outfitters and other consumer fashion brands. Tom and I caught up to discuss his project “Exploring Custom Vision Services for Automated Fashion Product Attribution.” We look at the process Tom and his team took to build custom attribution models, and the results of their evaluation of various custom vision APIs for this purpose, with a focus on the various roadblocks and lessons he and his team encountered along the way. 247 full Sam Charrington
Pragmatic Quantum Machine Learning with Peter Wittek - TWiML Talk #245 Pragmatic Quantum Machine Learning with Peter Wittek Mon, 01 Apr 2019 21:27:12 +0000 Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms in quantum physics.

Peter and I caught up back in November to discuss a presentation he gave at re:Invent, “Pragmatic Quantum Machine Learning Today.” In our conversation, we start with a bit of background, including the current state of quantum computing, a look ahead to what the next 20 years of quantum computing might hold, and how current quantum computers are flawed. We then dive into our discussion on quantum machine learning, and Peter’s new course on the topic, which debuted in February. I’ll link to that in the show notes. Finally, we briefly discuss the work of Ewin Tang, a PhD student from the University of Washington, whose undergrad thesis, “A quantum-inspired classical algorithm for recommendation systems,” made quite a stir last summer. As a special treat for those interested, I’m also sharing my interview with Ewin as a bonus episode alongside this one. I’d love to hear your thoughts on how you think quantum computing will impact machine learning in the next 20 years! Send me a tweet or leave a comment on the show notes page.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/245.

]]>
Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms in quantum physics.

Peter and I caught up back in November to discuss a presentation he gave at re:Invent, “Pragmatic Quantum Machine Learning Today.” In our conversation, we start with a bit of background, including the current state of quantum computing, a look ahead to what the next 20 years of quantum computing might hold, and how current quantum computers are flawed. We then dive into our discussion on quantum machine learning, and Peter’s new course on the topic, which debuted in February. I’ll link to that in the show notes. Finally, we briefly discuss the work of Ewin Tang, a PhD student from the University of Washington, whose undergrad thesis, “A quantum-inspired classical algorithm for recommendation systems,” made quite a stir last summer. As a special treat for those interested, I’m also sharing my interview with Ewin as a bonus episode alongside this one. I’d love to hear your thoughts on how you think quantum computing will impact machine learning in the next 20 years! Send me a tweet or leave a comment on the show notes page.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/245.

]]>
01:06:59 clean podcast,of,science,technology,tech,computing,data,toronto,intelligence,learning,university,quantum,peter,artificial,machine,ai,ml,reinvent,twiml,wittek Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms. In our conversation, we discuss the current state of quantum computing, a look ahead to what the next 20 years of quantum computing might hold, and how current quantum computers are flawed. We then dive into our discussion on quantum machine learning, and Peter’s new course on the topic, which debuted in February. 245 full Sam Charrington
*Bonus Episode* A Quantum Machine Learning Algorithm Takedown with Ewin Tang - TWiML Talk #246 *Bonus Episode* A Quantum Machine Learning Algorithm Takedown with Ewin Tang Mon, 01 Apr 2019 18:40:41 +0000 In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington.

In our conversation, Ewin and I dig into her paper “A quantum-inspired classical algorithm for recommendation systems,” which took the quantum computing community by storm last summer. We haven’t called out a Nerd-Alert interview in a long time, but this interview inspired us to dust off that designation, so get your notepad ready!

The complete show notes for this episode can be found at https://twimlai.com/talk/246.

]]>
In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington.

In our conversation, Ewin and I dig into her paper “A quantum-inspired classical algorithm for recommendation systems,” which took the quantum computing community by storm last summer. We haven’t called out a Nerd-Alert interview in a long time, but this interview inspired us to dust off that designation, so get your notepad ready!

The complete show notes for this episode can be found at https://twimlai.com/talk/246.

]]>
40:03 clean podcast,of,science,technology,tech,computing,data,intelligence,learning,university,washington,classical,quantum,artificial,machine,ai,tang,algorithms,ml,algorithm,twiml,ewin In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington. In our conversation, Ewin and I dig into her paper “A quantum-inspired classical algorithm for recommendation systems,” which took the quantum computing community by storm last summer. We haven’t called out a Nerd-Alert interview in a long time, but this interview inspired us to dust off that designation, so get your notepad ready! 246 full Sam Charrington
Supporting TensorFlow at Airbnb with Alfredo Luque - TWiML Talk #244 Supporting TensorFlow at Airbnb with Alfredo Luque Thu, 28 Mar 2019 19:38:45 +0000 This interview features my conversation with Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb.

If you’re among the many TWiML fans interested in AI Platforms and ML infrastructure, you probably remember my interview with Airbnb’s Atul Kale, in which we discussed their Bighead platform. In my conversation with Alfredo, we dig a bit deeper into Bighead’s support for TensorFlow, discuss a recent image categorization challenge they solved with the framework, and explore what the new 2.0 release means for their users. The complete show notes for this episode can be found at https://twimlai.com/talk/244

I’d like to send a huge thanks to the TensorFlow team for helping us bring you this podcast series and giveaway. With all the great announcements coming out of the TensorFlow Dev Summit, including the 2.0 alpha, you should definitely check out the latest and greatest at https://tensorflow.org where you can also download and start building with the framework.

In conjunction with the TensorFlow 2.0 alpha release, and our TensorFlow Dev Summit series, we invite you to enter our TensorFlow Edge Kit Giveaway. Winners will receive a gift box from Google that includes some fun toys including the new Coral Edge TPU device and the SparkFun Edge development board powered by TensorFlow. Find out more at https://twimlai.com/tfgiveaway.

]]>
This interview features my conversation with Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb.

If you’re among the many TWiML fans interested in AI Platforms and ML infrastructure, you probably remember my interview with Airbnb’s Atul Kale, in which we discussed their Bighead platform. In my conversation with Alfredo, we dig a bit deeper into Bighead’s support for TensorFlow, discuss a recent image categorization challenge they solved with the framework, and explore what the new 2.0 release means for their users. The complete show notes for this episode can be found at https://twimlai.com/talk/244

I’d like to send a huge thanks to the TensorFlow team for helping us bring you this podcast series and giveaway. With all the great announcements coming out of the TensorFlow Dev Summit, including the 2.0 alpha, you should definitely check out the latest and greatest at https://tensorflow.org where you can also download and start building with the framework.

In conjunction with the TensorFlow 2.0 alpha release, and our TensorFlow Dev Summit series, we invite you to enter our TensorFlow Edge Kit Giveaway. Winners will receive a gift box from Google that includes some fun toys including the new Coral Edge TPU device and the SparkFun Edge development board powered by TensorFlow. Find out more at https://twimlai.com/tfgiveaway.

]]>
40:57 clean podcast,science,black,box,technology,image,tech,google,data,intelligence,20,learning,agnostic,artificial,infrastructure,framework,machine,ai,kale,atul,ml,airbnb,bighead,categorization,tensorflow,twiml Today we're joined by Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb. If you’re interested in AI Platforms and ML infrastructure, you probably remember my interview with Airbnb’s Atul Kale, in which we discussed their Bighead platform. In my conversation with Alfredo, we dig a bit deeper into Bighead’s support for TensorFlow, discuss a recent image categorization challenge they solved with the framework, and explore what the new 2.0 release means for their users. 244 full Sam Charrington
Mining the Vatican Secret Archives with TensorFlow w/ Elena Nieddu - TWiML Talk #243 Mining the Vatican Secret Archives with TensorFlow w/ Elena Nieddu Wed, 27 Mar 2019 16:20:32 +0000 Today we’re joined by Elena Nieddu, PhD Student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit.

In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe Vatican secret archive documents via machine learning. We discuss the many challenges associated with transcribing this vast archive of handwritten documents, including overcoming the high cost of data annotation. I think you’ll agree that her team’s approach to that challenge was particularly creative. The complete show notes for this episode can be found at https://twimlai.com/talk/243

I’d like to send a huge thanks to the TensorFlow team for helping us bring you this podcast series and giveaway. With all the great announcements coming out of the TensorFlow Dev Summit, including the 2.0 alpha, you should definitely check out the latest and greatest at https://tensorflow.org where you can also download and start building with the framework.

In conjunction with the TensorFlow 2.0 alpha release, and our TensorFlow Dev Summit series, we invite you to enter our TensorFlow Edge Kit Giveaway. Winners will receive a gift box from Google that includes some fun toys including the new Coral Edge TPU device and the SparkFun Edge development board powered by TensorFlow. Find out more at https://twimlai.com/tfgiveaway.

]]>
Today we’re joined by Elena Nieddu, PhD Student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit.

In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe Vatican secret archive documents via machine learning. We discuss the many challenges associated with transcribing this vast archive of handwritten documents, including overcoming the high cost of data annotation. I think you’ll agree that her team’s approach to that challenge was particularly creative. The complete show notes for this episode can be found at https://twimlai.com/talk/243

I’d like to send a huge thanks to the TensorFlow team for helping us bring you this podcast series and giveaway. With all the great announcements coming out of the TensorFlow Dev Summit, including the 2.0 alpha, you should definitely check out the latest and greatest at https://tensorflow.org where you can also download and start building with the framework.

In conjunction with the TensorFlow 2.0 alpha release, and our TensorFlow Dev Summit series, we invite you to enter our TensorFlow Edge Kit Giveaway. Winners will receive a gift box from Google that includes some fun toys including the new Coral Edge TPU device and the SparkFun Edge development board powered by TensorFlow. Find out more at https://twimlai.com/tfgiveaway.

]]>
44:06 clean podcast,science,technology,tech,in,google,data,secret,intelligence,20,learning,university,artificial,developer,machine,vatican,summit,ai,archives,elena,ratio,roma,tre,transcription,ml,annotation,tensorflow,nieddu,twiml,codice Today we’re joined by Elena Nieddu, Phd Student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit. In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe Vatican secret archive documents via machine learning. We discuss the many challenges associated with transcribing this vast archive of handwritten documents, including overcoming the high cost of data annotation. 243 full Sam Charrington
Exploring TensorFlow 2.0 with Paige Bailey - TWiML Talk #242 Exploring TensorFlow 2.0 with Paige Bailey Mon, 25 Mar 2019 21:01:27 +0000 Today we're joined by Paige Bailey, a TensorFlow developer advocate at Google to discuss the TensorFlow 2.0 alpha release.

Paige and I sat down to talk through the latest TensorFlow updates, and we cover a lot of ground, including the evolution of the TensorFlow APIs and the role of eager mode, tf.keras and tf.function, the evolution of TensorFlow for Swift and its inclusion in the new fast.ai course, new updates to TFX (or TensorFlow Extended), Google’s end-to-end machine learning platform, the emphasis on community collaboration with TF 2.0, and a bunch more. The complete show notes for this episode can be found at https://twimlai.com/talk/242
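If you haven’t played with the alpha yet, here’s a tiny, self-contained example of the style Paige describes, with eager execution by default, a tf.keras model, and tf.function used to compile the training step into a graph. The toy regression data is made up for illustration.

import tensorflow as tf

# TensorFlow 2.0-style training: eager by default, tf.function to trace a graph.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
optimizer = tf.keras.optimizers.Adam(0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((256, 3))
y = tf.reduce_sum(x, axis=1, keepdims=True)        # a simple target the model can learn

@tf.function                                       # traces the Python function into a graph
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step in range(100):
    loss = train_step(x, y)
print("final loss:", float(loss))                  # eager mode: tensors behave like ordinary values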

I’d like to send a huge thanks to the TensorFlow team for helping us bring you this podcast series and giveaway. With all the great announcements coming out of the TensorFlow Dev Summit, including the 2.0 alpha, you should definitely check out the latest and greatest at https://tensorflow.org where you can also download and start building with the framework.

In conjunction with the TensorFlow 2.0 alpha release, and our TensorFlow Dev Summit series, we invite you to enter our TensorFlow Edge Kit Giveaway. Winners will receive a gift box from Google that includes some fun toys including the new Coral Edge TPU device and the SparkFun Edge development board powered by TensorFlow. Find out more at https://twimlai.com/tfgiveaway.

 

 

]]>
Today we're joined by Paige Bailey, a TensorFlow developer advocate at Google to discuss the TensorFlow 2.0 alpha release.

Paige and I sat down to talk through the latest TensorFlow updates, and we cover a lot of ground, including the evolution of the TensorFlow APIs and the role of eager mode, tf.keras and tf.function, the evolution of TensorFlow for Swift and its inclusion in the new fast.ai course, new updates to TFX (or TensorFlow Extended), Google’s end-to-end machine learning platform, the emphasis on community collaboration with TF 2.0, and a bunch more. The complete show notes for this episode can be found at https://twimlai.com/talk/242

I’d like to send a huge thanks to the TensorFlow team for helping us bring you this podcast series and giveaway. With all the great announcements coming out of the TensorFlow Dev Summit, including the 2.0 alpha, you should definitely check out the latest and greatest at https://tensorflow.org where you can also download and start building with the framework.

In conjunction with the TensorFlow 2.0 alpha release, and our TensorFlow Dev Summit series, we invite you to enter our TensorFlow Edge Kit Giveaway. Winners will receive a gift box from Google that includes some fun toys including the new Coral Edge TPU device and the SparkFun Edge development board powered by TensorFlow. Find out more at https://twimlai.com/tfgiveaway.

 

 

]]>
41:17 clean podcast,science,technology,tech,google,data,intelligence,20,learning,bailey,swift,artificial,machine,ai,paige,ml,tf,tensorflow,twiml,cubeflow Today we're joined by Paige Bailey, TensorFlow developer advocate at Google, to discuss the TensorFlow 2.0 alpha release. Paige and I talk through the latest TensorFlow updates, including the evolution of the TensorFlow APIs and the role of eager mode, tf.keras and tf.function, the evolution of TensorFlow for Swift and its inclusion in the new fast.ai course, new updates to TFX (or TensorFlow Extended), Google’s end-to-end ML platform, the emphasis on community collaboration with TF 2.0, and more. 242 full Sam Charrington
Privacy-Preserving Decentralized Data Science with Andrew Trask - TWiML Talk #241 Privacy-Preserving Decentralized Data Science with Andrew Trask Thu, 21 Mar 2019 16:27:46 +0000 Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project.

OpenMined is an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. Andrew and I caught up back at NeurIPS to dig into why OpenMined is important and explore some of the basic research and technologies supporting Private, Decentralized Data Science. We touch on ideas such as Differential Privacy, and Secure Multi-Party Computation, and how these ideas come into play in, for example, federated learning.
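To ground the federated learning piece, here’s a bare-bones sketch of federated averaging in plain NumPy. It is deliberately framework-agnostic rather than OpenMined’s actual PySyft API, and the simulated client data is made up: each client fits a model on data that never leaves it, and the server only ever sees averaged weights.

import numpy as np

# Federated averaging sketch: clients train locally, the server averages weights.
# Plain NumPy for illustration; not PySyft, and with no privacy protections added.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n=100):                            # a simulated private dataset
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client() for _ in range(5)]
global_w = np.zeros(2)

for round_ in range(20):
    local_ws = []
    for X, y in clients:                           # each client trains on its own data
        w = global_w.copy()
        for _ in range(10):                        # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)
    global_w = np.mean(local_ws, axis=0)           # only the weight updates are shared

print("learned weights:", global_w)                # should approach [2, -1]

In a privacy-preserving setup like the ones OpenMined works on, those shared updates would additionally be protected, for example with secure aggregation or differential privacy noise, which is exactly where the ideas above come into play.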

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/241.

]]>
Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project.

OpenMined is an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. Andrew and I caught up back at NeurIPS to dig into why OpenMined is important and explore some of the basic research and technologies supporting Private, Decentralized Data Science. We touch on ideas such as Differential Privacy, and Secure Multi-Party Computation, and how these ideas come into play in, for example, federated learning.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/241.

]]>
32:40 clean podcast,of,science,technology,tech,data,intelligence,learning,university,andrew,artificial,trask,privacy,machine,oxford,ai,federated,ml,differential,decentralized,twiml,neurips,openmined Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project, an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. We dig into why OpenMined is important, exploring some of the basic research and technologies supporting Private, Decentralized Data Science, including ideas such as Differential Privacy,and Secure Multi-Party Computation. 241 full Sam Charrington
The Unreasonable Effectiveness of the Forget Gate with Jos Van Der Westhuizen - TWiML Talk #240 The Unreasonable Effectiveness of the Forget Gate with Jos Van Der Westhuizen Mon, 18 Mar 2019 19:31:31 +0000 Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University.

Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper The unreasonable effectiveness of the forget gate, in which he explores the various “gates” that make up an LSTM module and the general impact of getting rid of gates on the computational intensity of training the networks. Jos eventually determines that leaving only the forget-gate results in an unreasonably effective network, and we discuss why. Jos also gives us some great LSTM related resources, including references to Jurgen Schmidhuber, whose research group invented the LSTM, and who I spoke to back in Talk #44.
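For a rough feel for what “forget gate only” means, here’s a hypothetical NumPy sketch of a single recurrent step that keeps just the forget gate; the shapes and initialization are illustrative, and you should consult the paper for the authors’ exact formulation.

import numpy as np

# One step of a recurrent cell that keeps only the forget gate.
# Illustrative shapes and initialization; see the paper for the exact formulation.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden, features = 8, 4
rng = np.random.default_rng(0)
Wf, Uf, bf = rng.normal(size=(hidden, features)), rng.normal(size=(hidden, hidden)), np.ones(hidden)
Wc, Uc, bc = rng.normal(size=(hidden, features)), rng.normal(size=(hidden, hidden)), np.zeros(hidden)

def step(x_t, h_prev, c_prev):
    f = sigmoid(Wf @ x_t + Uf @ h_prev + bf)       # the forget gate
    c_tilde = np.tanh(Wc @ x_t + Uc @ h_prev + bc) # candidate cell state
    c = f * c_prev + (1.0 - f) * c_tilde           # the forget gate also does the input gate's job
    h = np.tanh(c)                                 # no separate input or output gates
    return h, c

h = c = np.zeros(hidden)
for x_t in rng.normal(size=(10, features)):        # run over a toy sequence
    h, c = step(x_t, h, c)
print(h)

Coupling the input to one minus the forget gate lets a single gate decide, per unit, how much old memory to keep versus how much new information to write.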

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/240.

]]>
Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University.

Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper The unreasonable effectiveness of the forget gate, in which he explores the various “gates” that make up an LSTM module and the general impact of getting rid of gates on the computational intensity of training the networks. Jos eventually determines that leaving only the forget-gate results in an unreasonably effective network, and we discuss why. Jos also gives us some great LSTM related resources, including references to Jurgen Schmidhuber, whose research group invented the LSTM, and who I spoke to back in Talk #44.

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there for $200 off.

The complete show notes for this episode can be found at https://twimlai.com/talk/240.

]]>
33:32 clean podcast,science,gate,technology,networks,tech,data,biological,intelligence,long,learning,memory,artificial,van,machine,forget,ai,der,jos,jurgen,ml,shortterm,lstm,pegasystems,twiml,westhuizen,schmidhuber Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University. Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper "The unreasonable effectiveness of the forget gate," in which he explores the various “gates” that make up an LSTM module and the general impact of getting rid of gates on the computational intensity of training the networks. 240 full Sam Charrington
Building a Recommendation Agent for The North Face with Andrew Guldman - TWiML Talk #239 Building a Recommendation Agent for The North Face with Andrew Guldman Thu, 14 Mar 2019 16:42:41 +0000 Today we’re joined by Andrew Guldman, VP of Product Engineering and Research and Development at Fluid.

Andrew and I caught up a while back to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices during online retail interactions. While XPS has expanded since we recorded this interview, we specifically discuss its origins as a product to assist outerwear retailer The North Face. In our conversation, we discuss their use of heat-sink algorithms and graph databases, their use of chat and other interfaces, and the challenges associated with staying on top of a constantly changing technology landscape. This was a fun interview!

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there.

The complete show notes for this episode can be found at https://twimlai.com/talk/239.

]]>
Today we’re joined by Andrew Guldman, VP of Product Engineering and Research and Development at Fluid.

Andrew and I caught up a while back to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices during online retail interactions. While XPS has expanded since we recorded this interview, we specifically discuss its origins as a product to assist outerwear retailer The North Face. In our conversation, we discuss their use of heat-sink algorithms and graph databases, their use of chat and other interfaces, and the challenges associated with staying on top of a constantly changing technology landscape. This was a fun interview!

Thanks to Pegasystems for sponsoring today's show! I'd like to invite you to join me at PegaWorld, the company’s annual digital transformation conference, which takes place this June in Las Vegas. To learn more about the conference or to register, visit pegaworld.com and use TWIML19 in the promo code field when you get there.

The complete show notes for this episode can be found at https://twimlai.com/talk/239.

]]>
48:42 clean podcast,the,science,technology,recommendation,system,tech,user,experience,data,north,intelligence,learning,andrew,databases,chat,artificial,machine,face,ai,fluid,graph,interfaces,ml,xps,twiml,guldman Today we’re joined by Andrew Guldman, VP of Product Engineering and R&D at Fluid to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices during online retail interactions. We specifically discuss its origins as a product to assist outerwear retailer The North Face. In our conversation, we discuss their use of heat-sink algorithms and graph databases, challenges associated with staying on top of a constantly changing landscape, and more! 239 full Sam Charrington
Active Learning for Materials Design with Kevin Tran - TWiML Talk #238 Active Learning for Materials Design with Kevin Tran Mon, 11 Mar 2019 18:28:33 +0000 Today we’re joined by Kevin Tran, PhD student in the department of chemical engineering at Carnegie Mellon University.

Kevin’s research focuses on creating and using automated active learning workflows to perform density functional theory, or DFT, simulations, which are used to screen for new catalysts for a myriad of materials applications. In our conversation, we explore the challenges surrounding one such application—the creation of renewable energy fuel cells, which is discussed in his recent Nature paper “Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution.” We dig into the role and need for good catalysts in this application, the role that quantum mechanics plays in finding them, and how Kevin uses machine learning and optimization to predict electrocatalyst performance.
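
For listeners who want a more concrete picture of the active learning pattern Kevin describes, here’s a minimal sketch of the loop, assuming a toy one-dimensional candidate pool, a scikit-learn Gaussian-process surrogate, and a stand-in expensive_simulation function in place of a real DFT calculation (none of this is Kevin’s actual pipeline):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):
    # Stand-in for a costly DFT run: returns a scalar property of interest.
    return np.sin(3 * x) + 0.1 * x ** 2

# Pool of candidate materials, described by a single feature for simplicity.
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)

# Start with a few labeled examples chosen at random.
rng = np.random.default_rng(0)
labeled_idx = list(rng.choice(len(candidates), size=3, replace=False))
X = candidates[labeled_idx]
y = np.array([expensive_simulation(x[0]) for x in X])

for step in range(10):
    # Fit a cheap surrogate to everything simulated so far.
    surrogate = GaussianProcessRegressor().fit(X, y)

    # Ask the surrogate where it is least certain across the whole pool.
    _, std = surrogate.predict(candidates, return_std=True)
    std[labeled_idx] = -np.inf          # don't re-run finished simulations

    # Run the expensive simulation only on the most uncertain candidate.
    next_idx = int(np.argmax(std))
    labeled_idx.append(next_idx)
    X = np.vstack([X, candidates[next_idx]])
    y = np.append(y, expensive_simulation(candidates[next_idx][0]))

print(f"Simulated {len(y)} of {len(candidates)} candidates")
```

The point is simply that the expensive calculation only runs where the cheap surrogate is most uncertain, which is what lets active learning screen a large candidate space with far fewer simulations.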

The complete show notes for this episode can be found at twimlai.com/talk/238.

The Artificial Intelligence Conference is returning to New York in April and we have one FREE conference pass for a lucky listener! Visit twimlai.com/ainygiveaway to enter!

]]>
Today we’re joined by Kevin Tran, PhD student in the department of chemical engineering at Carnegie Mellon University.

Kevin’s research focuses on creating and using automated active learning workflows to perform density functional theory, or DFT, simulations, which are used to screen for new catalysts for a myriad of materials applications. In our conversation, we explore the challenges surrounding one such application—the creation of renewable energy fuel cells, which is discussed in his recent Nature paper “Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution.” We dig into the role and need for good catalysts in this application, the role that quantum mechanics plays in finding them, and how Kevin uses machine learning and optimization to predict electrocatalyst performance.

The complete show notes for this episode can be found at twimlai.com/talk/238.

The Artificial Intelligence Conference is returning to New York in April and we have one FREE conference pass for a lucky listener! Visit twimlai.com/ainygiveaway to enter!

]]>
34:55 clean podcast,science,design,kevin,technology,tech,data,intelligence,chemistry,learning,ny,oreilly,artificial,chemical,materials,machine,ai,active,mellon,cmu,carnegie,ml,tran,twiml Today we’re joined by Kevin Tran, PhD student at Carnegie Mellon University. In our conversation, we explore the challenges surrounding the creation of renewable energy fuel cells, which is discussed in his recent Nature paper “Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution.” The AI Conference is returning to New York in April and we have one FREE conference pass for a lucky listener! Visit twimlai.com/ainygiveaway to enter! 238 full Sam Charrington
Deep Learning in Optics with Aydogan Ozcan - TWiML Talk #237 Deep Learning in Optics with Aydogan Ozcan Thu, 07 Mar 2019 19:08:13 +0000 Today, we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, where his research group focuses on photonics and its applications to nano- and biotechnology.

In our conversation, we explore his group's research into the intersection of deep learning and optics, holography and computational imaging. We specifically look at a really interesting project to create all-optical neural networks which work based on diffraction, where the printed pixels of the network are analogous to neurons. We also explore some of the practical applications for their research and other areas of interest for their group.

The complete show notes for this episode can be found at twimlai.com/talk/237

Be sure to subscribe to our weekly newsletter at twimlai.com/newsletter!

]]>
Today, we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, where his research group focuses on photonics and its applications to nano- and biotechnology.

In our conversation, we explore his group's research into the intersection of deep learning and optics, holography and computational imaging. We specifically look at a really interesting project to create all-optical neural networks which work based on diffraction, where the printed pixels of the network are analogous to neurons. We also explore some of the practical applications for their research and other areas of interest for their group.

The complete show notes for this episode can be found at twimlai.com/talk/237

Be sure to subscribe to our weekly newsletter at twimlai.com/newsletter!

]]>
42:07 clean podcast,science,technology,networks,tech,data,deep,intelligence,learning,biotechnology,imaging,computational,artificial,holography,neural,machine,neuron,ucla,ai,optics,optical,ml,ozcan,twiml,aydogan,mnist Today we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, exploring his group's research into the intersection of deep learning and optics, holography and computational imaging. We specifically look at a really interesting project to create all-optical neural networks which work based on diffraction, where the printed pixels of the network are analogous to neurons. We also explore practical applications for their research and other areas of interest. 237 full Sam Charrington
Scaling Machine Learning on Graphs at LinkedIn with Hema Raghavan and Scott Meyer - TWiML Talk #236 Scaling Machine Learning on Graphs at LinkedIn with Hema Raghavan and Scott Meyer Mon, 04 Mar 2019 17:00:00 +0000 Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn.

Hema is an Engineering Director responsible for AI for Growth and Notifications, while Scott serves as a Principal Software Engineer. In this conversation, Hema, Scott and I dig into the graph database and machine learning systems that power LinkedIn features such as “People You May Know” and second-degree connections. Hema shares her insight into the motivations for LinkedIn’s use of graph-based models and some of the challenges surrounding using graphical models at LinkedIn’s scale, while Scott details his work on the software used at the company to support its biggest graph databases.
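
To make the second-degree connection idea concrete for listeners newer to graph data, here’s a tiny hypothetical sketch (nothing like LinkedIn’s actual graph store or ranking models) showing how friends-of-friends can be surfaced and ranked by the number of shared connections:

```python
from collections import Counter

# A tiny undirected connection graph: member -> set of direct connections.
graph = {
    "ana": {"bo", "cat"},
    "bo":  {"ana", "dee", "eli"},
    "cat": {"ana", "dee"},
    "dee": {"bo", "cat", "eli"},
    "eli": {"bo", "dee"},
}

def people_you_may_know(member):
    """Rank second-degree connections by the number of shared first-degree ties."""
    first_degree = graph[member]
    counts = Counter()
    for friend in first_degree:
        for candidate in graph[friend]:
            if candidate != member and candidate not in first_degree:
                counts[candidate] += 1
    return counts.most_common()

print(people_you_may_know("ana"))   # [('dee', 2), ('eli', 1)]
```

The challenge Hema and Scott dig into is doing this kind of traversal and ranking over a member graph many orders of magnitude larger than this toy dictionary.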

We'd like to send a huge thanks to LinkedIn for sponsoring today’s show! LinkedIn Engineering solves complex problems at scale to create economic opportunity for every member of the global workforce. AI and ML are integral aspects of almost every product the company builds for its members and customers. LinkedIn’s highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit engineering.linkedin.com/blog.

For the complete show notes, visit https://twimlai.com/talk/236.

]]>
Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn.

Hema is an Engineering Director responsible for AI for Growth and Notifications, while Scott serves as a Principal Software Engineer. In this conversation, Hema, Scott and I dig into the graph database and machine learning systems that power LinkedIn features such as “People You May Know” and second-degree connections. Hema shares her insight into the motivations for LinkedIn’s use of graph-based models and some of the challenges surrounding using graphical models at LinkedIn’s scale, while Scott details his work on the software used at the company to support its biggest graph databases.

We'd like to send a huge thanks to LinkedIn for sponsoring today’s show! LinkedIn Engineering solves complex problems at scale to create economic opportunity for every member of the global workforce. AI and ML are integral aspects of almost every product the company builds for its members and customers. LinkedIn’s highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit engineering.linkedin.com/blog.

For the complete show notes, visit https://twimlai.com/talk/236.

]]>
47:01 clean podcast,database,science,technology,linkedin,tech,data,intelligence,scott,models,learning,meyer,artificial,machine,ai,graph,graphical,ml,raghavan,hema,twiml Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn to discuss the graph database and machine learning systems that power LinkedIn features such as “People You May Know” and second-degree connections. Hema shares her insight into the motivations for LinkedIn’s use of graph-based models and some of the challenges surrounding using graphical models at LinkedIn’s scale, while Scott details his work on the software used at the company to support its biggest graph databases. 236 full Sam Charrington
Safer Exploration in Deep Reinforcement Learning using Action Priors with Sicelukwanda Zwane - TWiML Talk #235 Safer Exploration in Deep Reinforcement Learning using Action Priors with Sicelukwanda Zwane Fri, 01 Mar 2019 17:00:00 +0000 Today we conclude our Black in AI series with Sicelukwanda Zwane, a master’s student at the University of the Witwatersrand and graduate research assistant at the CSIR.

At the workshop, he presented on “Safer Exploration in Deep Reinforcement Learning using Action Priors,” which explores transferring action priors between robotic tasks to reduce the exploration space in reinforcement learning, which in turn reduces sample complexity. In our conversation, we discuss what “safer exploration” means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of “lifelong learning.”
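
For a rough feel of what an action prior does during exploration, here’s a generic epsilon-greedy sketch under my own assumptions (not the method from Sicelukwanda’s paper), where exploratory actions are drawn from a prior learned on earlier tasks instead of uniformly at random:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 5

# Hypothetical action prior learned from source tasks: how often each action
# appeared in good policies. Uniform exploration would use equal weights.
action_prior = np.array([0.05, 0.40, 0.30, 0.20, 0.05])

q_values = np.zeros(n_actions)   # value estimates for the current task
epsilon = 0.2

def select_action():
    if rng.random() < epsilon:
        # Explore, but bias the draw toward actions the prior says are promising.
        return rng.choice(n_actions, p=action_prior)
    # Exploit the current value estimates.
    return int(np.argmax(q_values))

print([select_action() for _ in range(10)])
```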

The complete show notes for this episode can be found at https://twimlai.com/talk/235. To follow along with the Black in AI series, visit https://twimlai.com/blackinai19.

]]>
Today we conclude our Black in AI series with Sicelukwanda Zwane, a master’s student at the University of the Witwatersrand and graduate research assistant at the CSIR.

At the workshop, he presented on “Safer Exploration in Deep Reinforcement Learning using Action Priors,” which explores transferring action priors between robotic tasks to reduce the exploration space in reinforcement learning, which in turn reduces sample complexity. In our conversation, we discuss what “safer exploration” means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of “lifelong learning.”

The complete show notes for this episode can be found at https://twimlai.com/talk/235. To follow along with the Black in AI series, visit https://twimlai.com/blackinai19.

]]>
54:01 clean podcast,of,science,black,technology,tech,in,data,action,deep,intelligence,physics,learning,university,artificial,machine,ai,reinforcement,ml,priors,zwane,witwatersrand,twiml,sicelukwanda Today we conclude our Black in AI series with Sicelukwanda Zwane, a masters student at the University of Witwatersrand and graduate research assistant at the CSIR, who presented on “Safer Exploration in Deep Reinforcement Learning using Action Priors” at the workshop. In our conversation, we discuss what “safer exploration” means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of “lifelong learning.” 235 full Sam Charrington
Dissecting the Controversy around OpenAI's New Language Model - TWiML Talk #234 Dissecting the Controversy around OpenAI's New Language Model Mon, 25 Feb 2019 17:58:34 +0000 If you’re listening to this podcast, you’ve likely seen some of the press coverage and discussion surrounding the release, or lack thereof, of OpenAI’s new GPT-2 Language Model. The announcement caused quite a stir, with reactions spanning confusion, frustration, concern, and many points in between. Several days later, many open questions remained about the model and the way the release was handled.

Seeing the continued robust discourse, and wanting to offer the community a forum for exploring this topic with more nuance than Twitter’s 280 characters allow, we convened the inaugural “TWiML Live” panel. I was joined on the panel by Amanda Askell and Miles Brundage of OpenAI, Anima Anandkumar of NVIDIA and Caltech, Robert Munro of Lilt, and Stephen Merity, several of whom were among the most outspoken voices in the online discussion of this issue.

Our discussion thoroughly explored the many issues surrounding the GPT-2 release controversy. We cover the basics, like what language models are and why they’re important, explore why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many.
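
For anyone following the debate without an NLP background, a language model simply assigns probabilities to the next token given the tokens so far; GPT-2 does this with a very large Transformer trained on web text, but the core idea is visible even in a toy bigram counter like this sketch (illustrative only, and nothing to do with OpenAI’s implementation):

```python
from collections import Counter, defaultdict
import random

corpus = "the model writes text . the model predicts the next word .".split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(prev):
    counts = bigrams[prev]
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        dist = next_word_distribution(words[-1])
        if not dist:
            break
        words.append(random.choices(list(dist), weights=dist.values())[0])
    return " ".join(words)

print(next_word_distribution("the"))   # e.g. {'model': 0.67, 'next': 0.33}
print(generate("the"))
```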

The discussion initially aired via YouTube Live, and we’re happy to share it with you via the podcast as well. To be clear, both the panel discussion and live stream format were a bit of an experiment for us, and we’d love to hear your thoughts on them. Would you like to see, or hear, more of these TWiML Live conversations? If so, what issues would you like us to take on?

If you have feedback for us on the format or if you’d like to join the discussion around OpenAI’s GPT-2 model, head to the show notes page for this show at twimlai.com/talk/234 and leave us a comment.

]]>
If you’re listening to this podcast, you’ve likely seen some of the press coverage and discussion surrounding the release, or lack thereof, of OpenAI’s new GPT-2 Language Model. The announcement caused quite a stir, with reactions spanning confusion, frustration, concern, and many points in between. Several days later, many open questions remained about the model and the way the release was handled.

Seeing the continued robust discourse, and wanting to offer the community a forum for exploring this topic with more nuance than Twitter’s 280 characters allow, we convened the inaugural “TWiML Live” panel. I was joined on the panel by Amanda Askell and Miles Brundage of OpenAI, Anima Anandkumar of NVIDIA and Caltech, Robert Munro of Lilt, and Stephen Merity, several of whom were among the most outspoken voices in the online discussion of this issue.

Our discussion thoroughly explored the many issues surrounding the GPT-2 release controversy. We cover the basics, like what language models are and why they’re important, explore why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many.

The discussion initially aired via YouTube Live, and we’re happy to share it with you via the podcast as well. To be clear, both the panel discussion and live stream format were a bit of an experiment for us, and we’d love to hear your thoughts on them. Would you like to see, or hear, more of these TWiML Live conversations? If so, what issues would you like us to take on?

If you have feedback for us on the format or if you’d like to join the discussion around OpenAI’s GPT-2 model, head to the show notes page for this show at twimlai.com/talk/234 and leave us a comment.

]]>
01:06:22 clean podcast,science,twitter,technology,tech,model,live,data,language,intelligence,stephen,learning,processing,natural,robert,youtube,artificial,controversy,machine,ai,anima,nvidia,nlp,unsupervised,munro,ml,openai,twiml,anandkumar,merity In the inaugural TWiML Live, Sam Charrington is joined by Amanda Askell (OpenAI), Anima Anandkumar (NVIDIA/CalTech), Miles Brundage (OpenAI), Robert Munro (Lilt), and Stephen Merity to discuss the controversial recent release of the OpenAI GPT-2 Language Model. We cover the basics like what language models are and why they’re important, and why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many. 234 full Sam Charrington
Human-Centered Design with Mira Lane - TWiML Talk #233 Human-Centered Design with Mira Lane Fri, 22 Feb 2019 15:26:34 +0000 Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft.

Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai.

The complete show notes for this episode can be found at twimlai.com/talk/233. For more information on the AI for the Benefit of Society series, visit twimlai.com/ai4society.

]]>
Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft.

Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai.

The complete show notes for this episode can be found at twimlai.com/talk/233. For more information on the AI for the Benefit of Society series, visit twimlai.com/ai4society.

]]>
47:04 clean podcast,the,science,technology,tech,in,for,data,microsoft,intelligence,lane,learning,human,ethics,artificial,institute,disclosure,machine,now,ai,mira,loop,centered,ml,twiml,datasets,datasheets,timnit,gebru Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft. Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations. 233 full Sam Charrington
Fairness in Machine Learning with Hanna Wallach - TWiML Talk #232 Fairness in Machine Learning with Hanna Wallach Mon, 18 Feb 2019 23:06:39 +0000 Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even those that are inadvertent, play in tainting data, and whether deployment of “fair” ML models can actually be achieved in practice, and much more. Along the way, Hanna points us to a TON of papers and resources to further explore the topic of fairness in ML. You’ll definitely want to check out the notes page for this episode, which you’ll find at twimlai.com/talk/232.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai.

]]>
Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even those that are inadvertent, play in tainting data, and whether deployment of “fair” ML models can actually be achieved in practice, and much more. Along the way, Hanna points us to a TON of papers and resources to further explore the topic of fairness in ML. You’ll definitely want to check out the notes page for this episode, which you’ll find at twimlai.com/talk/232.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai.

]]>
49:04 clean podcast,science,technology,tech,women,in,data,microsoft,intelligence,learning,ethics,transparency,artificial,wallach,fairness,machine,bias,hanna,ai,ml,twiml,interpretability,neurips,wiml Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research. Hanna and I really dig into how bias and a lack of interpretability and transparency show up across ML. We discuss the role that human biases, even those that are inadvertent, play in tainting data, and whether deployment of “fair” ML models can actually be achieved in practice, and much more. Hanna points us to a TON of resources to further explore the topic of fairness in ML, which you’ll find at twimlai.com/talk 232 full Sam Charrington
AI for Healthcare with Peter Lee - TWiML Talk #231 AI in Healthcare with Peter Lee Mon, 18 Feb 2019 02:06:25 +0000 In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives.

Peter and I met a few months ago at the Microsoft Ignite conference, where he gave me some really interesting takes on AI development in China. You can find more on that topic in the show notes. This conversation centers on the three impact areas Peter sees for AI in healthcare, namely diagnostics and therapeutics, tools, and the future of precision medicine. We dig into some examples in each area, and Peter details the realities of applying machine learning and some of the impediments to rapid scale.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai.

The complete show notes for this episode can be found at twimlai.com/talk/231.

]]>
In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives.

Peter and I met a few months ago at the Microsoft Ignite conference, where he gave me some really interesting takes on AI development in China. You can find more on that topic in the show notes. This conversation centers on the three impact areas Peter sees for AI in healthcare, namely diagnostics and therapeutics, tools, and the future of precision medicine. We dig into some examples in each area, and Peter details the realities of applying machine learning and some of the impediments to rapid scale.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai.

The complete show notes for this episode can be found at twimlai.com/talk/231.

]]>
57:19 clean podcast,medical,science,technology,tech,doctor,data,microsoft,intelligence,china,vision,learning,lee,computer,healthcare,peter,medicine,technologies,artificial,machine,ai,adaptive,diagnostics,ml,precision,twiml In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives. Peter and I met back at Microsoft Ignite, where he gave me some really interesting takes on AI development in China, which is linked in the show notes. This conversation centers around impact areas Peter sees for AI in healthcare, namely diagnostics and therapeutics, tools, and the future of precision medicine. 231 full Sam Charrington
An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection with Justice Amoh Jr. - TWiML Talk #230 An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection with Justice Amoh Mon, 11 Feb 2019 21:43:35 +0000 Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering.

Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his goal of bringing low cost, high-efficiency wearables to market for monitoring asthma. We explore the many challenges of using classical machine learning models on microcontrollers, and how he went about developing models optimized for constrained hardware environments. We’d also like to wish Justice the best of luck as he should be defending his Ph.D. any day now!
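
As a back-of-the-envelope illustration of why trimming recurrent gates matters on microcontroller-class hardware, here’s my own toy example (not Justice’s optimized unit): a single-gate recurrent cell, plus a rough count of the parameters a three-gate GRU-style cell would need at the same sizes:

```python
import numpy as np

input_dim, hidden_dim = 40, 32          # e.g. 40 audio features per frame
rng = np.random.default_rng(0)

# A single-gate recurrent cell: one sigmoid gate blends the previous hidden
# state with a fresh candidate, instead of the three gate-style blocks in a GRU.
Ux, Uh, c = rng.normal(size=(hidden_dim, input_dim)), rng.normal(size=(hidden_dim, hidden_dim)), np.zeros(hidden_dim)
Wx, Wh, b = rng.normal(size=(hidden_dim, input_dim)), rng.normal(size=(hidden_dim, hidden_dim)), np.zeros(hidden_dim)

def step(x, h):
    gate = 1.0 / (1.0 + np.exp(-(Ux @ x + Uh @ h + c)))   # sigmoid "blend" gate
    candidate = np.tanh(Wx @ x + Wh @ h + b)               # proposed new state
    return gate * h + (1.0 - gate) * candidate

h = np.zeros(hidden_dim)
for frame in rng.normal(size=(10, input_dim)):              # ten frames of fake audio features
    h = step(frame, h)

# Rough parameter budget: each gate-style block needs an input matrix, a
# recurrent matrix, and a bias. Fewer blocks means less memory and fewer
# multiply-accumulates per timestep on the microcontroller.
per_block = hidden_dim * (input_dim + hidden_dim + 1)
print("this cell (2 blocks):", 2 * per_block, "params | GRU (3 blocks):", 3 * per_block, "params")
```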

The complete show notes for this episode can be found at https://twimlai.com/talk/230. To follow along with the Black in AI series, visit https://twimlai.com/blackinai19.

]]>
Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering.

Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his goal of bringing low cost, high-efficiency wearables to market for monitoring asthma. We explore the many challenges of using classical machine learning models on microcontrollers, and how he went about developing models optimized for constrained hardware environments. We’d also like to wish Justice the best of luck as he should be defending his Ph.D. any day now!

The complete show notes for this episode can be found at https://twimlai.com/talk/230. To follow along with the Black in AI series, visit https://twimlai.com/blackinai19.

]]>
45:51 clean podcast,science,black,technology,tech,in,data,power,intelligence,learning,acoustic,hardware,justice,low,artificial,asthma,event,machine,ai,detection,dartmouth,ml,wearables,twiml,amoh,microcontrollers Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering. Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his goal of bringing low cost, high-efficiency wearables to market for monitoring asthma. We explore the challenges of using classical machine learning models on microcontrollers, and how he went about developing models optimized for constrained hardware environm 230 full Sam Charrington
Pathologies of Neural Models and Interpretability with Alvin Grissom II - TWiML Talk #229 Pathologies of Neural Models and Interpretability with Alvin Grissom II Mon, 11 Feb 2019 17:49:21 +0000 Today, we're excited to continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College.

Alvin’s research is focused on computational linguistics, and we begin with a brief chat about some of his prior work on verb prediction using reinforcement learning. We then dive into the paper he presented at the workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization. We also touch on the parallel between his work and the work being done on adversarial examples by Ian Goodfellow and others.
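
For listeners curious what entropy regularization looks like mechanically, here’s a minimal sketch under my own assumptions (not the exact objective from Alvin’s paper): the usual cross-entropy loss gets a bonus for keeping the predicted distribution high-entropy, which pushes back against pathological overconfidence:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_regularized_loss(logits, true_label, beta=0.1):
    """Cross-entropy minus beta * predictive entropy: confident, low-entropy
    predictions receive no bonus, better-calibrated ones do."""
    probs = softmax(logits)
    cross_entropy = -np.log(probs[true_label] + 1e-12)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return cross_entropy - beta * entropy

overconfident = np.array([8.0, 0.0, 0.0])   # nearly all probability on class 0
calibrated = np.array([1.5, 0.5, 0.5])      # same prediction, softer distribution

print(entropy_regularized_loss(overconfident, true_label=0))
print(entropy_regularized_loss(calibrated, true_label=0))
```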

For the complete show notes, visit https://twimlai.com/talk/229. To follow along with our Black in AI series, visit https://twimlai.com/blackinai19.

]]>
Today, we're excited to continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College.

Alvin’s research is focused on computational linguistics, and we begin with a brief chat about some of his prior work on verb prediction using reinforcement learning. We then dive into the paper he presented at the workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization. We also touch on the parallel between his work and the work being done on adversarial examples by Ian Goodfellow and others.

For the complete show notes, visit https://twimlai.com/talk/229. To follow along with our Black in AI series, visit https://twimlai.com/blackinai19.

]]>
32:28 clean podcast,science,black,technology,tech,in,data,intelligence,models,learning,linguistics,computational,artificial,neural,machine,ai,behaviors,examples,alvin,ml,interpretations,pathological,grissom,adversarial,twiml,neurips,ursinus,pathologies Today, we continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College. In our conversation, we dive into the paper he presented at the workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regulariz 229 full Sam Charrington
AI for Earth with Lucas Joppa - TWiML Talk #228 AI for Earth with Lucas Joppa Fri, 08 Feb 2019 16:00:00 +0000 In this episode of our AI For the Benefit of Society with Microsoft series, we’re joined by Lucas Joppa and Zach Parisa.

Lucas is the Chief Environmental Officer at Microsoft, spearheading their five-year, $50 million AI for Earth commitment, which seeks to apply machine learning and AI across four key environmental areas: agriculture, water, biodiversity, and climate change. Zach is Co-founder and president of SilviaTerra, a Microsoft AI for Earth grantee whose mission is to help people use modern data sources to better manage forest habitats and ecosystems.

In our conversation we discuss the ways that machine learning and AI can be used to advance our understanding of forests and other ecosystems and support conservation efforts. We discuss how SilviaTerra uses computer vision and data from a wide array of sensors like LIDAR, combined with AI, to yield more detailed small-area estimates of the various species in our forests. We also briefly discuss another AI for Earth project, WildMe, a computer vision based wildlife conservation project we discussed with Jason Holmberg back on episode 166.

The complete show notes for this episode can be found at https://twimlai.com/talk/228. To follow along with the entire AI for the Benefit of Society series, visit https://twimlai.com/ai4society.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at https://Microsoft.ai.

]]>
In this episode of our AI For the Benefit of Society with Microsoft series, we’re joined by Lucas Joppa and Zach Parisa.

Lucas is the Chief Environmental Officer at Microsoft, spearheading their five-year, $50 million AI for Earth commitment, which seeks to apply machine learning and AI across four key environmental areas: agriculture, water, biodiversity, and climate change. Zach is Co-founder and president of SilviaTerra, a Microsoft AI for Earth grantee whose mission is to help people use modern data sources to better manage forest habitats and ecosystems.

In our conversation we discuss the ways that machine learning and AI can be used to advance our understanding of forests and other ecosystems and support conservation efforts. We discuss how SilviaTerra uses computer vision and data from a wide array of sensors like LIDAR, combined with AI, to yield more detailed small-area estimates of the various species in our forests. We also briefly discuss another AI for Earth project, WildMe, a computer vision based wildlife conservation project we discussed with Jason Holmberg back on episode 166.

The complete show notes for this episode can be found at https://twimlai.com/talk/228. To follow along with the entire AI for the Benefit of Society series, visit https://twimlai.com/ai4society.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at https://Microsoft.ai.

]]>
57:07 clean podcast,science,water,forest,silvia,technology,tech,agriculture,for,data,earth,environment,climate,change,microsoft,intelligence,learning,lucas,artificial,biodiversity,machine,ai,terra,zach,joppa,ml,parisa,lidar,wildme,twiml Today we’re joined by Lucas Joppa, Chief Environmental Officer at Microsoft and Zach Parisa, Co-founder and president of Silvia Terra, a Microsoft AI for Earth grantee. In our conversation, we explore the ways that ML & AI can be used to advance our understanding of forests and other ecosystems, supporting conservation efforts. We discuss how Silvia Terra uses computer vision and data from a wide array of sensors, combined with AI, to yield more detailed estimates of the various species in our forests. 228 full Sam Charrington
AI for Accessibility with Wendy Chisholm - TWiML Talk #227 AI for Accessibility with Wendy Chisholm Wed, 06 Feb 2019 16:00:00 +0000 Today we’re joined by Wendy Chisholm, Lois Brady, and Matthew Guggemos. Wendy is a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects the areas of Employment, Daily Life, and Communication & Connection. Lois and Matthew are Co-Founders and CEO and CTO, respectively, of iTherapy, an AI for Accessibility grantee and creator of the Inner Voice app, which utilizes visual language to strengthen communication in children on the autism scale.

In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of programs like AI for Accessibility in bringing projects in this area to fruition. 

For the complete show notes, visit https://twimlai.com/talk/227.

The transcript for this interview can be found at https://twimlai.com/talk/227/tx.

To follow along with the AI for the Benefit of Society series, visit https://twimlai.com/ai4society.

Thanks to Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at https://microsoft.ai.

]]>
Today we’re joined by Wendy Chisholm, Lois Brady, and Matthew Guggemos. Wendy is a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects in the areas of Employment, Daily Life, and Communication & Connection. Lois and Matthew are Co-Founders and CEO and CTO, respectively, of iTherapy, an AI for Accessibility grantee and creator of the Inner Voice app, which utilizes visual language to strengthen communication in children on the autism spectrum.

In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of programs like AI for Accessibility in bringing projects in this area to fruition. 

For the complete show notes, visit https://twimlai.com/talk/227.

The transcript for this interview can be found at https://twimlai.com/talk/227/tx.

To follow along with the AI for the Benefit of Society series, visit https://twimlai.com/ai4society.

Thanks to Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at https://microsoft.ai.

]]>
51:12 clean podcast,science,technology,tech,life,communication,voice,data,society,microsoft,intelligence,daily,learning,employment,sensory,autism,disability,accessibility,artificial,machine,ai,inner,wendy,disabled,empowered,ml,chisholm,accessible,twiml Today we’re joined by Wendy Chisholm, a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects the areas of Employment, Daily Life, and Communication & Connection. In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of projects in this area. 227 full Sam Charrington
AI for Humanitarian Action with Justin Spelhaug - TWiML Talk #226 AI for Humanitarian Action with Justin Spelhaug Mon, 04 Feb 2019 16:00:00 +0000 Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft.

In our conversation, Justin and I discuss the company’s efforts in AI for Humanitarian Action, a program which extends grants to fund AI-powered projects focused on disaster response, the needs of children, protecting refugees, and promoting respect for human rights. We cover Microsoft’s overall approach to technology for social impact, how his group helps mission-driven organizations best leverage technologies like AI, and how AI is being used at places like the World Bank, Operation Smile, and Mission Measurement to create greater impact.

The complete show notes for this episode can be found at https://twimlai.com/talk/226. Follow along with the entire AI for the Benefit of Society series, visit https://twimlai.com/ai4society.

We’d like to thank Microsoft for their support of the show, and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with this intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more about their plan at Microsoft.ai.

]]>
Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft.

In our conversation, Justin and I discuss the company’s efforts in AI for Humanitarian Action, a program which extends grants to fund AI-powered projects focused on disaster response, the needs of children, protecting refugees, and promoting respect for human rights. We cover Microsoft’s overall approach to technology for social impact, how his group helps mission-driven organizations best leverage technologies like AI, and how AI is being used at places like the World Bank, Operation Smile, and Mission Measurement to create greater impact.

The complete show notes for this episode can be found at https://twimlai.com/talk/226. Follow along with the entire AI for the Benefit of Society series, visit https://twimlai.com/ai4society.

We’d like to thank Microsoft for their support of the show, and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with this intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more about their plan at Microsoft.ai.

]]>
59:21 clean podcast,science,justin,social,technology,tech,for,data,action,microsoft,world,intelligence,learning,bank,human,philanthropy,smile,rights,azure,artificial,operation,disaster,relief,impact,machine,ai,humanitarian,ml,twiml,spelhaug Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft. In our conversation, we discuss the company’s efforts in AI for Humanitarian Action, covering Microsoft’s overall approach to technology for social impact, how his group helps mission-driven organizations best leverage technologies like AI, and how AI is being used at places like the World Bank, Operation Smile, and Mission Measurement to create greater impact. 226 full Sam Charrington
Teaching AI to Preschoolers with Randi Williams - TWiML Talk #225 Teaching AI to Preschoolers with Randi Williams Thu, 31 Jan 2019 05:58:09 +0000 Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, PhD student at the MIT Media Lab.

At the Black in AI workshop, Randi presented her research on Popbots: An Early Childhood AI Curriculum, which is geared towards teaching preschoolers the fundamentals of artificial intelligence. In our conversation, we discuss the origins of the project, the three AI concepts that are taught in the program, and the goals that Randi hopes to accomplish with her work. This was a fun conversation!

The complete show notes for this episode can be found at twimlai.com/talk/225.

Follow along with our Black in AI series at twimlai.com/blackinai19.

]]>
Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, PhD student at the MIT Media Lab.

At the Black in AI workshop, Randi presented her research on Popbots: An Early Childhood AI Curriculum, which is geared towards teaching preschoolers the fundamentals of artificial intelligence. In our conversation, we discuss the origins of the project, the three AI concepts that are taught in the program, and the goals that Randi hopes to accomplish with her work. This was a fun conversation!

The complete show notes for this episode can be found at twimlai.com/talk/225.

Follow along with our Black in AI series at twimlai.com/blackinai19.

]]>
44:32 clean podcast,science,black,technology,recommendation,tech,williams,in,data,systems,language,education,media,intelligence,learning,processing,natural,youtube,artificial,robotics,machine,ai,mit,labs,curriculum,randi,preschool,ml,supervised,generative,twiml Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, PhD student at the MIT Media Lab. At the Black in AI workshop Randi presented her research on Popbots: A Early Childhood AI Curriculum, which is geared towards teaching preschoolers the fundamentals of artificial intelligence. In our conversation, we discuss the origins of the project, the three AI concepts that are taught in the program, and the goals that Randi hopes to accomplish with her work. 225 full Sam Charrington
Holistic Optimization of the LinkedIn News Feed - TWiML Talk #224 Holistic Optimization of the LinkedIn News Feed Mon, 28 Jan 2019 16:28:15 +0000 Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn.

As you can imagine, Feed AI is responsible for curating all the content you see daily on the LinkedIn site. What’s less apparent to those who don’t work on this type of product is the wide variety of opposing factors that need to be considered in organizing the feed. As you’ll learn in our conversation, Tim calls this the holistic optimization of the feed, and we discuss some of the interesting technical and business challenges associated with trying to do this. We talk through some of the specific techniques used at LinkedIn, like multi-arm bandits and content embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale.
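
To give a flavor of one of the techniques Tim mentions, here’s a generic epsilon-greedy bandit sketch of my own (not LinkedIn’s ranking stack), balancing the content type with the best observed engagement against occasionally trying the alternatives:

```python
import numpy as np

rng = np.random.default_rng(0)
arms = ["article", "job_change", "video", "connection_update"]

# Hidden true click-through rates, unknown to the bandit and used only to simulate members.
true_ctr = {"article": 0.05, "job_change": 0.12, "video": 0.08, "connection_update": 0.03}

shows = {a: 0 for a in arms}
clicks = {a: 0 for a in arms}
epsilon = 0.1

def estimate(arm):
    return clicks[arm] / shows[arm] if shows[arm] else 0.0

def choose_arm():
    if rng.random() < epsilon:
        return rng.choice(arms)          # explore: occasionally try something else
    return max(arms, key=estimate)       # exploit: show the best-performing content type

for _ in range(5000):                    # simulate 5,000 feed impressions
    arm = choose_arm()
    shows[arm] += 1
    clicks[arm] += rng.random() < true_ctr[arm]

print({a: round(estimate(a), 3) for a in arms})   # learned engagement estimates per content type
```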

We’d like to send a huge thanks to LinkedIn for sponsoring today’s show! LinkedIn Engineering solves complex problems at scale to create economic opportunity for every member of the global workforce. AI and ML are integral aspects of almost every product the company builds for its members and customers. LinkedIn’s highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit https://engineering.linkedin.com/blog.

The complete show notes can be found at https://twimlai.com/talk/224.

]]>
Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn.

As you can imagine, Feed AI is responsible for curating all the content you see daily on the LinkedIn site. What’s less apparent to those who don’t work on this type of product is the wide variety of opposing factors that need to be considered in organizing the feed. As you’ll learn in our conversation, Tim calls this the holistic optimization of the feed, and we discuss some of the interesting technical and business challenges associated with trying to do this. We talk through some of the specific techniques used at LinkedIn, like multi-arm bandits and content embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale.

We’d like to send a huge thanks to LinkedIn for sponsoring today’s show! LinkedIn Engineering solves complex problems at scale to create economic opportunity for every member of the global workforce. AI and ML are integral aspects of almost every product the company builds for its members and customers. LinkedIn’s highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit https://engineering.linkedin.com/blog.

The complete show notes can be found at https://twimlai.com/talk/224.

]]>
48:24 clean podcast,tim,science,technology,linkedin,tech,data,intelligence,learning,content,feed,multi,artificial,arm,holistic,machine,ai,bandits,optimization,scale,ml,twiml,embeddings,jurka Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn. In our conversation, Tim describes the holistic optimization of the feed and we discuss some of the interesting technical and business challenges associated with trying to do this. We talk through some of the specific techniques used at LinkedIn like Multi-arm Bandits and Content Embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale. 224 full Sam Charrington
AI at the Edge at Qualcomm with Gary Brotman - TWiML Talk #223 AI at the Edge at Qualcomm with Gary Brotman Thu, 24 Jan 2019 16:50:22 +0000 Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc.

Gary, who got his start in AI through music, now leads strategy and product planning for the company’s Artificial Intelligence and Machine Learning technologies, including those that make up the Qualcomm Snapdragon mobile platforms. In our conversation, we discuss AI on mobile devices and at the edge, including popular use cases, and explore some of the various acceleration technologies offered by Qualcomm and others that enable them. We also dig into the state of AI on devices from the application developer’s perspective, and how various acceleration technologies fit together to help developers bring new products to market.

Thanks to our friends at Qualcomm for sponsoring today’s show! As you’ll hear in the conversation with Gary, Qualcomm has been in the AI space for well over a decade now, powering some of the latest and greatest Android devices with their Snapdragon chipset. With their strong footing in the mobile chipset space, Qualcomm now has the goal of making AI at the edge ubiquitous, beyond mobile devices. To find out more about what they’re up to, and how they plan to get there, visit twimlai.com/qualcomm.

The complete show notes for this episode can be found at twimlai.com/talk/223.

]]>
Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc.

Gary, who got his start in AI through music, now leads strategy and product planning for the company’s Artificial Intelligence and Machine Learning technologies, including those that make up the Qualcomm Snapdragon mobile platforms. In our conversation, we discuss AI on mobile devices and at the edge, including popular use cases, and explore some of the various acceleration technologies offered by Qualcomm and others that enable them. We also dig into the state of AI on devices from the application developer’s perspective, and how various acceleration technologies fit together to help developers bring new products to market.

Thanks to our friends at Qualcomm for sponsoring today’s show! As you’ll hear in the conversation with Gary, Qualcomm has been in the AI space for well over a decade now, powering some of the latest and greatest Android devices with their Snapdragon chipset. With their strong footing in the mobile chipset space, Qualcomm now has the goal of making AI at the edge ubiquitous, beyond mobile devices. To find out more about what they’re up to, and how they plan to get there, visit twimlai.com/qualcomm.

The complete show notes for this episode can be found at twimlai.com/talk/223.

]]>
51:54 clean podcast,science,gary,technology,networks,tech,processor,digital,mobile,google,data,intelligence,learning,artificial,developer,neural,signal,machine,ai,qualcomm,ml,dsp,snapdragon,hexagon,twiml,brotman Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc. Gary, who got his start in AI through music, now leads strategy and product planning for the company’s AI and ML technologies, including those that make up the Qualcomm Snapdragon mobile platforms. In our conversation, we discuss AI on mobile devices and at the edge, including popular use cases, and explore some of the various acceleration technologies offered by Qualcomm and others that enable th 223 full Sam Charrington
AI Innovation at CES - TWiML Talk #222 AI Innovation at CES - TWiML Talk #222 Mon, 21 Jan 2019 19:18:58 +0000 A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES.

CES is one of those things that’s hard to fully understand without having seen it, so I thought it’d be fun to give you a look at it from my vantage point. In this special visual episode, we’re going to check out some of the interesting examples of machine learning and AI that I found at the event. We cover a bunch of different categories, including several that don’t really target consumers at all, like John Deere’s gigantic combine harvester, a company building a drone that stops bullets, and a startup that wants to do away with something we all despise: traffic.

Check out the video at https://twimlai.com/ces2019, and be sure to hit the like and subscribe buttons and let us know how you like the show via a comment!

For the show notes, visit https://twimlai.com/talk/222.

]]>
A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES.

CES is one of those things that’s hard to fully understand without having seen it, so I thought it’d be fun to give you a look at it from my vantage point. In this special visual episode, we’re going to check out some of the interesting examples of machine learning and AI that I found at the event. We cover a bunch of different categories, including several that don’t really target consumers at all, like John Deere’s gigantic combine harvester, a company building a drone that stops bullets, and a startup that wants to do away with something we all despise: traffic.

Check out the video at https://twimlai.com/ces2019, and be sure to hit the like and subscribe buttons and let us know how you like the show via a comment!

For the show notes, visit https://twimlai.com/talk/222.

]]>
02:01 clean podcast,john,technology,tech,in,this,traffic,week,automation,intelligence,learning,electronics,ces,astral,intel,consumer,artificial,robotics,relief,machine,ai,bot,wearable,ar,boxer,omron,drones,ml,quell,deere,twiml,hoobox,ximantis A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES. In this special visual only episode, we’re going to check out some of the interesting examples of machine learning and AI that I found at the event. Check out the video at https://twimlai.com/ces2019, and be sure to hit the like and subscribe buttons and let us know how you like the show via a comment! For the show notes, visit https://twimlai.com/talk/222. 222 full Sam Charrington
Self-Tuning Services via Real-Time Machine Learning with Vladimir Bychkovsky - TWiML Talk #221 Self-Tuning Services via Real-Time Machine Learning with Vladimir Bychkovsky Thu, 17 Jan 2019 19:34:02 +0000 Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral.

Spiral is a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our conversation, we explore the ins and outs of Spiral, including how the system works, how it was developed, and how infrastructure teams at Facebook can use it to replace hand-tuned parameters set using heuristics with services that automatically optimize themselves in minutes rather than in weeks. We also discuss the challenges of implementing these kinds of systems, overcoming user skepticism, and achieving an appropriate level of explainability.
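
As a heavily simplified, hypothetical sketch of that pattern (not Spiral’s actual design), imagine replacing a hand-tuned cache-admission threshold with a parameter that keeps updating itself from live feedback:

```python
import random

random.seed(0)

# The hand-tuned version: admit an item to the cache when its request count
# exceeds a constant someone picked by hand weeks ago.
HAND_TUNED_THRESHOLD = 5

class SelfTuningThreshold:
    """Replace the constant with a parameter nudged by live feedback: lower it
    when admitted items keep getting reused, raise it when they turn out to be
    dead weight."""

    def __init__(self, lr=0.05):
        self.threshold = 1.0
        self.lr = lr

    def admit(self, request_count):
        return request_count >= self.threshold

    def feedback(self, was_reused):
        self.threshold += self.lr * (-1.0 if was_reused else 1.0)

tuner = SelfTuningThreshold()
for _ in range(2000):                                # simulated stream of cache decisions
    count = random.randint(1, 10)
    if tuner.admit(count):
        reused = random.random() < count / 12.0      # hotter items get reused more often
        tuner.feedback(reused)

print("hand-tuned:", HAND_TUNED_THRESHOLD, "| learned:", round(tuner.threshold, 2))
```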

The complete show notes for this episode can be found at twimlai.com/talk/221

]]>
Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral.

Spiral is a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our conversation, we explore the ins and outs of Spiral, including how the system works, how it was developed, and how infrastructure teams at Facebook can use it to replace hand-tuned parameters set using heuristics with services that automatically optimize themselves in minutes rather than in weeks. We also discuss the challenges of implementing these kinds of systems, overcoming user skepticism, and achieving an appropriate level of explainability.

The complete show notes for this episode can be found at twimlai.com/talk/221

]]>
46:06 clean podcast,science,facebook,technology,tech,data,intelligence,learning,self,performing,artificial,machine,ai,spiral,scale,vladimir,ml,twiml,bychkovsky Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral, a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our conversation, we explore how the system works, how it was developed, and how infrastructure teams at Facebook can use it to replace hand-tuned parameters set using heuristics with services that automatically optimize themselves in minutes rather than in weeks. 221 full Sam Charrington
Building a Recommender System from Scratch at 20th Century Fox with JJ Espinoza - TWiML Talk #220 Building a Recommender System from Scratch at 20th Century Fox with JJ Espinoza Mon, 14 Jan 2019 20:15:32 +0000 Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox.

In this talk, we start out with a discussion of JJ’s transition from econometrician to data scientist, and then dig into his and his team’s experience building and deploying a content recommendation system from the ground up. In our conversation, we explore the design of a couple of key components of their system, the first of which processes movie scripts to make recommendations about which movies the studio should make, and the second processes trailers to determine which should be recommended to users. We discuss the challenges they’ve encountered fielding these systems, some of the tools that were used along the way, and a few of the upcoming projects that could be layered on top of the platform they’ve built.

For the complete show notes for this episode, visit twimlai.com/talk/220.

If this talk piqued your interest, you should also check out Talk #201, where Leemay Nassery of Comcast breaks down how she led the rebuild of the Comcast Xfinity X1 recommender platform.

 

]]>
Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox.

In this talk, we start out with a discussion of JJ’s transition from econometrician to data scientist, and then dig into his and his team’s experience building and deploying a content recommendation system from the ground up. In our conversation, we explore the design of a couple of key components of their system, the first of which processes movie scripts to make recommendations about which movies the studio should make, and the second processes trailers to determine which should be recommended to users. We discuss the challenges they’ve encountered fielding these systems, some of the tools that were used along the way, and a few of the upcoming projects that could be layered on top of the platform they’ve built.

For the complete show notes for this episode, visit twimlai.com/talk/220.

If this talk piqued your interest, you should also check out Talk #201, where Leemay Nassery of Comcast breaks down how she led the rebuild of the Comcast Xfinity X1 recommender platform.

 

]]>
35:08 clean podcast,science,technology,movie,recommendations,system,tech,cloud,google,data,script,trailer,intelligence,fox,learning,century,youtube,jj,artificial,pipeline,machine,predictions,ai,amazon,platform,20th,ml,espinoza,aws,arxiv,twiml Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox. In this talk we dig into JJ and his team’s experience building and deploying a content recommendation system from the ground up. In our conversation, we explore the design of a couple of key components of their system, the first of which processes movie scripts to make recommendations about which movies the studio should make, and the second processes trailers to determine which should be recommended to users. 220 full Sam Charrington
Legal and Policy Implications of Model Interpretability with Solon Barocas - TWiML Talk #219 Legal and Policy Implications of Model Interpretability with Solon Barocas Thu, 10 Jan 2019 18:22:32 +0000 Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University.

Solon is also the co-founder of the Fairness, Accountability, and Transparency in Machine Learning workshop that is hosted annually at conferences like ICML. Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we discuss the gap between law, policy, and ML, and how to build the bridge between them, including formalizing ethical frameworks for machine learning. We also look at his paper “The Intuitive Appeal of Explainable Machines,” which proposes that explainability is really two problems, inscrutability and non-intuitiveness, and that disentangling the two allows us to better reason about the kind of explainability that’s really needed in any given situation.

The complete show notes for this episode can be found at https://twimlai.com/talk/219.

And be sure to sign up for our weekly newsletter at https://twimlai.com/newsletter! 

 

]]>
Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University.

Solon is also the co-founder of the Fairness, Accountability, and Transparency in Machine Learning workshop that is hosted annually at conferences like ICML. Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we discuss the gap between law, policy, and ML, and how to build the bridge between them, including formalizing ethical frameworks for machine learning. We also look at his paper “The Intuitive Appeal of Explainable Machines,” which proposes that explainability is really two problems, inscrutability and non-intuitiveness, and that disentangling the two allows us to better reason about the kind of explainability that’s really needed in any given situation.

The complete show notes for this episode can be found at https://twimlai.com/talk/219.

And be sure to sign up for our weekly newsletter at https://twimlai.com/newsletter! 

 

]]>
47:00 clean podcast,science,technology,tech,model,data,intelligence,law,policy,learning,ethics,transparency,artificial,fairness,cornell,machine,ai,accountability,ml,solon,twiml,barocas,interpretability Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University. Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we explore the gap between law, policy, and ML, and how to build the bridge between them, including formalizing ethical frameworks for machine learning. We also look at his paper ”The Intuitive Appeal of Explainable Machines.” 219 full Sam Charrington
Trends in Computer Vision with Siddha Ganju - TWiML Talk #218 Trends in Computer Vision with Siddha Ganju Mon, 07 Jan 2019 21:00:09 +0000 In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show.

Siddha, who is now an autonomous vehicles solutions architect at Nvidia, shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover her favorite CV papers of the year in areas such as neural architecture search, learning from simulation, application of CV to augmented reality, and more, as well as a bevy of tools and open source projects.

The complete show notes for this episode can be found at https://twimlai.com/talk/218

For more information on our AI Rewind series, visit https://twimlai.com/rewind18.

]]>
In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show.

Siddha, who is now an autonomous vehicles solutions architect at Nvidia, shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover her favorite CV papers of the year in areas such as neural architecture search, learning from simulation, application of CV to augmented reality, and more, as well as a bevy of tools and open source projects.

The complete show notes for this episode can be found at https://twimlai.com/talk/218

For more information on our AI Rewind series, visit https://twimlai.com/rewind18.

]]>
01:11:01 clean podcast,science,tools,technology,tech,in,data,intelligence,review,year,vision,learning,computer,trends,artificial,machine,predictions,ai,nvidia,cv,ml,arxiv,siddha,twiml,ganju In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show. Siddha, who is now an autonomous vehicles solutions architect at Nvidia shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover her favorite CV papers of the year in areas such as neural architecture search, learning from simulation, application of CV to augmented reality, and more, as well as a bevy of tools and open source projects. 218 full Sam Charrington
Trends in Reinforcement Learning with Simon Osindero - TWiML Talk #217 Trends in Reinforcement Learning with Simon Osindero Thu, 03 Jan 2019 18:26:57 +0000 In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind.

We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon walks us through many of the important papers and developments seen last year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more.

The complete show notes for this episode can be found at https://twimlai.com/talk/217.

For more information on our 2018 AI Rewind series, visit https://twimlai.com/rewind2018.

 

 

]]>
In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind.

We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon walks us through many of the important papers and developments seen last year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more.

The complete show notes for this episode can be found at https://twimlai.com/talk/217.

For more information on our 2018 AI Rewind series, visit https://twimlai.com/rewind2018.

 

 

]]>
52:46 clean podcast,science,tools,technology,tech,in,google,data,intelligence,review,year,learning,trends,simon,artificial,machine,predictions,ai,reinforcement,rl,ml,arxiv,deepmind,twiml,osindero In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind. We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon walks us through many of the important papers and developments seen this year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more. The complete show notes for this episode can be found at https://twimlai.com/talk/217. 217 full Sam Charrington
Trends in Natural Language Processing with Sebastian Ruder - TWiML Talk #216 Trends in Natural Language Processing with Sebastian Ruder Mon, 31 Dec 2018 16:53:28 +0000 In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond.

In our conversation, we cover a bunch of interesting papers spanning topics such as pre-trained language models, common sense inference datasets, and large document reasoning, and talk through Sebastian’s predictions for the new year.

The complete show notes for this episode can be found at twimlai.com/talk/216.

For more information on the AI Rewind 2018 series, visit twimlai.com/rewind18.

]]>
In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond.

In our conversation, we cover a bunch of interesting papers spanning topics such as pre-trained language models, common sense inference datasets, and large document reasoning, and talk through Sebastian’s predictions for the new year.

The complete show notes for this episode can be found at twimlai.com/talk/216.

For more information on the AI Rewind 2018 series, visit twimlai.com/rewind18.

]]>
53:32 clean podcast,science,tools,technology,tech,in,data,language,deep,intelligence,review,year,learning,processing,natural,trends,artificial,ruder,machine,predictions,papers,ai,sebastian,ml,arxiv,twiml In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond. In our conversation we cover a bunch of interesting papers spanning topics such as pre-trained language models, common sense inference datasets and large document reasoning and more, and talk through Sebastian’s predictions for the new year. 216 full Sam Charrington
Trends in Machine Learning with Anima Anandkumar - TWiML Talk #215 Trends in Machine Learning with Anima Anandkumar Thu, 27 Dec 2018 15:48:55 +0000 In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA.

Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity.

For this episode's complete show notes, visit twimlai.com/talk/215.

For more information on the AI Rewind series, visit twimlai.com/rewind18.

]]>
In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA.

Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity.

For this episode's complete show notes, visit twimlai.com/talk/215.

For more information on the AI Rewind series, visit twimlai.com/rewind18.

]]>
51:54 clean podcast,science,tools,technology,tech,in,data,intelligence,review,year,learning,trends,artificial,machine,predictions,papers,ai,anima,nvidia,ml,arxiv,twiml,anandkumar In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA. Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity. For this episode's complete show notes, visit twimlai.com/talk/215. 215 full Sam Charrington
Trends in Deep Learning with Jeremy Howard - TWiML Talk #214 Trends in Deep Learning with Jeremy Howard Mon, 24 Dec 2018 16:43:45 +0000 In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai.

Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists.

The complete show notes for this episode can be found at https://twimlai.com/talk/214.

Follow along with our AI Rewind 2018 series visit https://twimlai.com/rewind18

]]>
In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai.

Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists.

The complete show notes for this episode can be found at https://twimlai.com/talk/214.

Follow along with our AI Rewind 2018 series visit https://twimlai.com/rewind18

]]>
01:08:47 clean podcast,science,tools,technology,tech,in,data,deep,intelligence,review,year,jeremy,learning,trends,howard,artificial,machine,predictions,papers,ai,ml,arxiv,twiml In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai. Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists. 214 full Sam Charrington
Training Large-Scale Deep Nets with RL with Nando de Freitas - TWiML Talk #213 Training Large-Scale Deep Nets with RL with Nando de Freitas Thu, 20 Dec 2018 17:34:52 +0000 Today we close out both our NeurIPS series and our 2018 conference coverage with this interview with Nando de Freitas, Team Lead & Principal Scientist at Deepmind and Fellow at the Canadian Institute for Advanced Research.

In our conversation, we explore his interest in understanding the brain and working towards artificial general intelligence through techniques like meta-learning, few-shot learning and imitation learning. In particular, we dig into a couple of his team’s NeurIPS papers: “Playing hard exploration games by watching YouTube,” and “One-Shot high-fidelity imitation: Training large-scale deep nets with RL.”

The complete show notes for this episode can be found at https://twimlai.com/talk/213.

For more information on the NeurIPS series, visit https://twimlai.com/neurips2018.

 

]]>
Today we close out both our NeurIPS series and our 2018 conference coverage with this interview with Nando de Freitas, Team Lead & Principal Scientist at Deepmind and Fellow at the Canadian Institute for Advanced Research.

In our conversation, we explore his interest in understanding the brain and working towards artificial general intelligence through techniques like meta-learning, few-shot learning and imitation learning. In particular, we dig into a couple of his team’s NeurIPS papers: “Playing hard exploration games by watching YouTube,” and “One-Shot high-fidelity imitation: Training large-scale deep nets with RL.”

The complete show notes for this episode can be found at https://twimlai.com/talk/213.

For more information on the NeurIPS series, visit https://twimlai.com/neurips2018.

 

]]>
55:17 clean podcast,science,de,network,technology,tech,google,data,intelligence,learning,artificial,neural,imitation,machine,ai,yann,meta,freitas,nando,ml,deepmind,twiml,neurips,ciar,lecun Today we close out our NeurIPS series joined by Nando de Freitas, Team Lead & Principal Scientist at Deepmind. In our conversation, we explore his interest in understanding the brain and working towards artificial general intelligence. In particular, we dig into a couple of his team’s NeurIPS papers: “Playing hard exploration games by watching YouTube,” and “One-Shot high-fidelity imitation: Training large-scale deep nets with RL.” 213 full Sam Charrington
Making Algorithms Trustworthy with David Spiegelhalter - TWiML Talk #212 Making Algorithms Trustworthy with David Spiegelhalter Thu, 20 Dec 2018 01:00:26 +0000 In this, the second episode of our NeurIPS series, we’re joined by David Spiegelhalter, Chair of the Winton Centre for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society.

David, an invited speaker at NeurIPS, presented on “Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?”. In our conversation, we explore the nuanced difference between being trusted and being trustworthy, and its implications for those building AI systems. We also dig into how we can evaluate trustworthiness, which David breaks into four phases, the inspiration for which he drew from British philosopher Onora O'Neill's ideas around 'intelligent transparency’.

The complete show notes for this episode can be found at twimlai.com/talk/212.

For more information on the NeurIPS series, visit twimlai.com/neurips2018.

]]>
In this, the second episode of our NeurIPS series, we’re joined by David Spiegelhalter, Chair of the Winton Centre for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society.

David, an invited speaker at NeurIPS, presented on “Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?”. In our conversation, we explore the nuanced difference between being trusted and being trustworthy, and its implications for those building AI systems. We also dig into how we can evaluate trustworthiness, which David breaks into four phases, the inspiration for which he drew from British philosopher Onora O'Neill's ideas around 'intelligent transparency’.

The complete show notes for this episode can be found at twimlai.com/talk/212.

For more information on the NeurIPS series, visit twimlai.com/neurips2018.

]]>
23:26 clean podcast,science,technology,tech,david,data,intelligent,intelligence,learning,university,statistics,transparency,artificial,trust,machine,ai,oneill,cambridge,onora,algorithms,ml,twiml,neurips,spiegelhalter Today we’re joined by David Spiegelhalter, Chair of Winton Center for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society. David, an invited speaker at NeurIPS, presented on “Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?”. In our conversation, we explore the nuanced difference between being trusted and being trustworthy, and its implications for those building AI systems. 212 full Sam Charrington
Designing Computer Systems for Software with Kunle Olukotun - TWiML Talk #211 Designing Computer Systems for Software with Kunle Olukotun Tue, 18 Dec 2018 00:38:14 +0000 Today we’re joined by Kunle Olukotun, Professor in the department of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at Sambanova Systems.

Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine and deep learning, touching on multicore processor design, domain specific languages, and graph-based hardware. We cover the limitations of the current hardware such as GPUs, and peer a bit into the future as well. This was a fun one!

The complete show notes for this episode can be found at twimlai.com/talk/211

For more information on this series, visit twimlai.com/neurips2018.

]]>
Today we’re joined by Kunle Olukotun, Professor in the department of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at Sambanova Systems.

Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine and deep learning, touching on multicore processor design, domain specific languages, and graph-based hardware. We cover the limitations of the current hardware such as GPUs, and peer a bit into the future as well. This was a fun one!

The complete show notes for this episode can be found at twimlai.com/talk/211

For more information on this series, visit twimlai.com/neurips2018.

]]>
56:32 clean podcast,science,technology,tech,data,intelligence,learning,architecture,stanford,descent,artificial,domain,machine,ai,languages,specific,gradient,gpu,sambanova,ml,stochastic,kunle,twiml,neurips,olukotun,parallelizing Today we’re joined by Kunle Olukotun, Professor in the department of EE and CS at Stanford University, and Chief Technologist at Sambanova Systems. Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine and deep learning, touching on multicore processor design, domain specific languages, and graph-based hardware. This was a fun one! 211 full Sam Charrington
Operationalizing Ethical AI with Kathryn Hume - TWiML Talk #210 Operationalizing Ethical AI with Kathryn Hume Fri, 14 Dec 2018 17:49:06 +0000 Today we conclude our Trust in AI series with this conversation with Kathryn Hume, VP of Strategy at Integrate AI.

You might remember Kathryn from our interview last year on “Selling AI to the Enterprise,” which was TWiML Talk #20. This time around, we discuss her newly released white paper “Responsible AI in the Consumer Enterprise,” which details a framework for ethical AI deployment in e-commerce companies and other consumer-facing enterprises. We look at the structure of the ethical framework she proposes, and some of the many questions that need to be considered when deploying AI in an ethical manner.

For the complete show notes for this episode, visit twimlai.com/talk/210.

 

]]>
Today we conclude our Trust in AI series with this conversation with Kathryn Hume, VP of Strategy at Integrate AI.

You might remember Kathryn from our interview last year on “Selling AI to the Enterprise,” which was TWiML Talk #20. This time around, we discuss her newly released white paper “Responsible AI in the Consumer Enterprise,” which details a framework for ethical AI deployment in e-commerce companies and other consumer-facing enterprises. We look at the structure of the ethical framework she proposes, and some of the many questions that need to be considered when deploying AI in an ethical manner.

For the complete show notes for this episode, visit twimlai.com/talk/210.

 

]]>
54:28 clean podcast,science,technology,tech,in,data,enterprise,intelligence,learning,ethics,consumer,kathryn,artificial,fairness,partners,framework,trust,machine,ai,georgian,ecommerce,hume,ml,integrateai,twiml Today we conclude our Trust in AI series with this conversation with Kathryn Hume, VP of Strategy at Integrate AI. We discuss her newly released white paper “Responsible AI in the Consumer Enterprise,” which details a framework for ethical AI deployment in e-commerce companies and other consumer-facing enterprises. We look at the structure of the ethical framework she proposes, and some of the many questions that need to be considered when deploying AI in an ethical manner. 210 full Sam Charrington
Approaches to Fairness in Machine Learning with Richard Zemel - TWiML Talk #209 Approaches to Fairness in Machine Learning with Richard Zemel Wed, 12 Dec 2018 22:29:49 +0000 Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute.

In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.”

This week’s series is sponsored by our friends at Georgian Partners. Georgian recently published Building Conversational AI Teams, a comprehensive guide to lead you through sourcing, acquiring and nurturing a successful conversational AI team. Download at: https://gptrs.vc/convoai

For this episode's complete show notes, visit twimlai.com/talk/209.

]]>
Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute.

In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.”

This week’s series is sponsored by our friends at Georgian Partners. Georgian recently published Building Conversational AI Teams, a comprehensive guide to lead you through sourcing, acquiring and nurturing a successful conversational AI team. Download at: https://gptrs.vc/convoai

For this episode's complete show notes, visit twimlai.com/talk/209.

]]>
46:12 clean podcast,of,science,technology,tech,data,toronto,intelligence,richard,learning,university,artificial,fairness,partners,institute,machine,ai,georgian,vector,understanding,ml,representations,twiml,neurips,zemel Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute. In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.” 209 full Sam Charrington
Trust and AI with Parinaz Sobhani - TWiML Talk #208 Trust and AI with Parinaz Sobhani Tue, 11 Dec 2018 16:53:15 +0000 In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners.

In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and accountability. We also explore some of the trust-related projects she and her team at Georgian are working on, as well as some of the interesting trust and privacy papers coming out of the NeurIPS conference.

This week’s series is sponsored by our friends at Georgian Partners. Georgian recently published Building Conversational AI Teams, a comprehensive guide to lead you through sourcing, acquiring and nurturing a successful conversational AI team. Download at: https://gptrs.vc/convoai

For this episode's complete show notes, visit twimlai.com/talk/208.

]]>
In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners.

In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and accountability. We also explore some of the trust-related projects she and her team at Georgian are working on, as well as some of the interesting trust and privacy papers coming out of the NeurIPS conference.

This week’s series is sponsored by our friends at Georgian Partners. Georgian recently published Building Conversational AI Teams, a comprehensive guide to lead you through sourcing, acquiring and nurturing a successful conversational AI team. Download at: https://gptrs.vc/convoai

For this episode's complete show notes, visit twimlai.com/talk/208.

]]>
46:42 clean podcast,science,technology,tech,in,data,intelligence,learning,transparency,attacks,artificial,fairness,privacy,partners,trust,machine,ai,georgian,ml,differential,sobhani,adversarial,twiml,parinaz In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners. In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and accountability. We also explore some of the trust-related projects she and her team at Georgian are working on, as well as some of the interesting trust and privacy papers coming out of the NeurIPS conference. 208 full Sam Charrington
Unbiased Learning from Biased User Feedback with Thorsten Joachims - TWiML Talk #207 Unbiased Learning from Biased User Feedback with Thorsten Joachims Fri, 07 Dec 2018 19:04:12 +0000 In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University.

Thorsten participated at the conference’s AI Summit, presenting his research on “Unbiased Learning from Biased User Feedback.” In our conversation, we take a look at some of the inherent and introduced biases in recommender systems, and the ways to avoid them. We also discuss how inference techniques can be used to make learning algorithms more robust to bias, and how these can be enabled with the correct type of logging policies.

The complete show notes for this episode can be found at https://twimlai.com/talk/207

For more information on our AWS re:Invent series, visit https://twimlai.com/reinvent2018.

 

 

]]>
In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University.

Thorsten participated at the conference’s AI Summit, presenting his research on “Unbiased Learning from Biased User Feedback.” In our conversation, we take a look at some of the inherent and introduced biases in recommender systems, and the ways to avoid them. We also discuss how inference techniques can be used to make learning algorithms more robust to bias, and how these can be enabled with the correct type of logging policies.

The complete show notes for this episode can be found at https://twimlai.com/talk/207

For more information on our AWS re:Invent series, visit https://twimlai.com/reinvent2018.

 

 

]]>
40:43 clean podcast,science,technology,tech,data,systems,deep,intelligence,learning,testing,feedback,artificial,cornell,inference,machine,bias,dl,ai,ab,policies,ml,recommender,multivariate,thorsten,unbiased,twiml,joachims In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University. We discuss his presentation “Unbiased Learning from Biased User Feedback,” looking at some of the inherent and introduced biases in recommender systems, and the ways to avoid them. We also discuss how inference techniques can be used to make learning algorithms more robust to bias, and how these can be enabled with the correct type of logging policies. 207 full Sam Charrington
Language Parsing and Character Mining with Jinho Choi - TWiML Talk #206 Language Parsing and Character Mining with Jinho Choi Wed, 05 Dec 2018 22:31:54 +0000 Today, in the second episode of our re:Invent series, we’re joined by Jinho Choi, assistant professor of computer science at Emory University.

Jinho presented at the conference on ELIT (short for Evolution of Language and Information Technology), a cloud-based NLP platform. In our conversation, we discuss some of the key NLP challenges that Jinho and his group are tackling, including language parsing and character mining. We also discuss their vision for ELIT, which is to make it easy for researchers to develop, access, and deploy cutting-edge NLP tools and models in the cloud.

The complete show notes can be found at https://twimlai.com/talk/206

For more info on our re:Invent series, visit https://twimlai.com/reinvent2018

]]>
Today, in the second episode of our re:Invent series, we’re joined by Jinho Choi, assistant professor of computer science at Emory University.

Jinho presented at the conference on ELIT (short for Evolution of Language and Information Technology), a cloud-based NLP platform. In our conversation, we discuss some of the key NLP challenges that Jinho and his group are tackling, including language parsing and character mining. We also discuss their vision for ELIT, which is to make it easy for researchers to develop, access, and deploy cutting-edge NLP tools and models in the cloud.

The complete show notes can be found at https://twimlai.com/talk/206

For more info on our re:Invent series, visit https://twimlai.com/reinvent2018

]]>
47:48 clean podcast,science,penn,technology,tech,data,language,mining,intelligence,learning,university,character,artificial,framework,machine,ai,choi,parsing,nlp,emory,ml,jinho,twiml,elit,treebank Today we’re joined by Jinho Choi, assistant professor of computer science at Emory University. Jinho presented at the conference on ELIT, their cloud-based NLP platform. In our conversation, we discuss some of the key NLP challenges that Jinho and his group are tackling, including language parsing and character mining. We also discuss their vision for ELIT, which is to make it easy for researchers to develop, access, and deploy cutting-edge NLP tools and models in the cloud. 206 full Sam Charrington
re:Invent Roundup Roundtable 2018 with Dave McCrory and Val Bercovici - TWiML Talk #205 re:Invent Roundup Roundtable 2018 with Dave McCrory and Val Bercovici Mon, 03 Dec 2018 19:36:00 +0000 For today’s show, I’m excited to present our second annual re:Invent Roundtable Roundup. This year I’m joined by my friends Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil Data.

If you missed the news coming out of re:Invent, or you want to know more about what one of the biggest AI platform providers is up to, you’ll want to stay tuned, because we’ll discuss many of their new offerings in this episode. We cover all of AWS’ most important ML and AI announcements, including SageMaker Ground Truth, Reinforcement Learning and Neo, DeepRacer, Inferentia and Elastic Inference, ML Marketplace, Personalize, Forecast and Textract, and more.

For the complete show notes for this episode, visit https://twimlai.com/talk/205.

]]>
For today’s show, I’m excited to present our second annual re:Invent Roundtable Roundup. This year I’m joined by my friends Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil Data.

If you missed the news coming out of re:Invent, or you want to know more about what one of the biggest AI platform providers is up to, you’ll want to stay tuned, because we’ll discuss many of their new offerings in this episode. We cover all of AWS’ most important ML and AI announcements, including SageMaker Ground Truth, Reinforcement Learning and Neo, DeepRacer, Inferentia and Elastic Inference, ML Marketplace, Personalize, Forecast and Textract, and more.

For the complete show notes for this episode, visit https://twimlai.com/talk/205.

]]>
01:08:35 clean podcast,and,science,technology,tech,personalize,data,intelligence,dave,learning,marketplace,val,forecast,artificial,inference,machine,ai,amazon,reinforcement,elastic,mccrory,ml,aws,reinvent,bercovici,twiml,sagemaker,deepracer,inferentia,textract I’m excited to present our second annual re:Invent Roundtable Roundup. This year I’m joined by Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil Data. If you missed the news coming out of re:Invent, we cover all of AWS’ most important ML and AI announcements, including SageMaker Ground Truth, Reinforcement Learning, DeepRacer, Inferentia and Elastic Inference, ML Marketplace and much more. For the show notes visit https://twimlai.com/ta 205 full Sam Charrington
Knowledge Graphs and Expert Augmentation with Marisa Boston - TWiML Talk #204 Knowledge Graphs and Expert Augmentation with Marisa Boston Thu, 29 Nov 2018 23:34:58 +0000 Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab.

Marisa and I caught up to discuss some of the ways that they’re using AI to build tools that help augment the knowledge of KPMG’s teams of professionals. We start out with a discussion of knowledge graphs, and how they can be used to map out and relate various concepts. We then explore how they use these in conjunction with NLP tools to create insight engines, tools that curate and contextualize news and other text-based data sources to produce a series of content recommendations that help their users work more effectively. Finally, Marisa shares some general principles for using AI to augment experts.

The complete show notes for this episode can be found at twimlai.com/talk/204.

]]>
Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab.

Marisa and I caught up to discuss some of the ways that they’re using AI to build tools that help augment the knowledge of KPMG’s teams of professionals. We start out with a discussion of knowledge graphs, and how they can be used to map out and relate various concepts. We then explore how they use these in conjunction with NLP tools to create insight engines, tools that curate and contextualize news and other text-based data sources to produce a series of content recommendations that help their users work more effectively. Finally, Marisa shares some general principles for using AI to augment experts.

The complete show notes for this episode can be found at twimlai.com/talk/204.

]]>
47:40 clean podcast,science,boston,technology,tech,data,intelligence,learning,knowledge,marisa,cognitive,artificial,infrastructure,machine,ai,platform,graphs,nlp,platforms,ml,kpmg,twiml,contextualize Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab. We caught up to discuss some of the ways that KPMG is using AI to build tools that help augment the knowledge of their teams of professionals. We discuss knowledge graphs and how they can be used to map out and relate various concepts and how they use these in conjunction with NLP tools to create insight engines. We also look at tools that curate and contextualize news and other text-based data sources. 204 full Sam Charrington
ML/DL for Non-Stationary Time Series Analysis in Financial Markets and Beyond with Stuart Reid - TWiML Talk #203 ML/DL for Non-Stationary Time Series Analysis in Financial Markets and Beyond with Stuart Reid Mon, 26 Nov 2018 21:59:47 +0000 Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research.

NMRQL, based in Stellenbosch, South Africa, is an investment management firm that uses machine learning algorithms to make adaptive, unbiased, scalable, and testable trading decisions for its funds. In our conversation, Stuart and I dig into the way NMRQL uses machine learning and deep learning models to support the firm’s investment decisions. In particular, we focus on techniques for modeling non-stationary time-series, of which financial markets are just one example. We start from first principles and look at stationary vs non-stationary time-series, discuss some of the challenges of building models using financial data, explore issues like model interpretability, and much more. This was a very insightful conversation, which I expect will be very useful not just for those in the fintech space.

Check out the complete show notes for this episode at twimlai.com/talk/203

]]>
Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research.

NMRQL, based in Stellenbosch, South Africa, is an investment management firm that uses machine learning algorithms to make adaptive, unbiased, scalable, and testable trading decisions for its funds. In our conversation, Stuart and I dig into the way NMRQL uses machine learning and deep learning models to support the firm’s investment decisions. In particular, we focus on techniques for modeling non-stationary time-series, of which financial markets are just one example. We start from first principles and look at stationary vs non-stationary time-series, discuss some of the challenges of building models using financial data, explore issues like model interpretability, and much more. This was a very insightful conversation, which I expect will be very useful not just for those in the fintech space.

Check out the complete show notes for this episode at twimlai.com/talk/203

]]>
59:36 clean podcast,science,technology,tech,data,intelligence,learning,investment,financial,banking,artificial,infrastructure,machine,ai,platform,scale,stationary,platforms,stellenbosch,ml,reinvent,fintech,twiml,nmrql,neurips,kubecon,timeseries,interpretabilty Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research. NMRQL is an investment management firm that uses ML algorithms to make adaptive, unbiased, scalable, and testable trading decisions for its funds. In our conversation, Stuart and I dig into the way NMRQL uses ML and DL models to support the firm’s investment decisions. We focus on techniques for modeling non-stationary time-series, stationary vs non-stationary time-series, and challenges of building models using financial data. 203 full Sam Charrington
Industrializing Machine Learning at Shell with Daniel Jeavons - TWiML Talk #202 Industrializing Machine Learning at Shell with Daniel Jeavons Wed, 21 Nov 2018 16:32:20 +0000 In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell.

In our conversation, Daniel and I explore the evolution of analytics and data science at Shell, and cover a ton of interesting machine learning use cases that the company is pursuing, such as well drilling and charging smart cars. A good bit of our conversation centers around IoT-related applications and issues, such as inference at the edge, federated machine learning, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell and the importance of platform technologies to Daniel’s organization and the company as a whole and we discuss some of the technologies he and his team are excited about introducing to the company.

For the complete show notes for this episode, visit twimlai.com/talk/202.

For more information on the AI Platforms series, visit twimlai.com/aiplatforms.

Be sure to sign up for our weekly newsletter at twimlai.com/newsletter!

]]>
In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell.

In our conversation, Daniel and I explore the evolution of analytics and data science at Shell, and cover a ton of interesting machine learning use cases that the company is pursuing, such as well drilling and charging smart cars. A good bit of our conversation centers around IoT-related applications and issues, such as inference at the edge, federated machine learning, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell and the importance of platform technologies to Daniel’s organization and the company as a whole and we discuss some of the technologies he and his team are excited about introducing to the company.

For the complete show notes for this episode, visit twimlai.com/talk/202.

For more information on the AI Platforms series, visit twimlai.com/aiplatforms.

Be sure to sign up for our weekly newsletter at twimlai.com/newsletter!

]]>
46:46 clean podcast,science,technology,tech,data,intelligence,vision,learning,computer,applications,daniel,artificial,infrastructure,analytics,machine,ai,platform,federated,scale,shell,platforms,ml,jeavons,iot,twiml In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell. In our conversation, we explore the evolution of analytics and data science at Shell, discussing IoT-related applications and issues, such as inference at the edge, federated ML, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell and the importance of platform technologies to the company as a whole. 202 full Sam Charrington
Resurrecting a Recommendations Platform at Comcast with Leemay Nassery - TWiML Talk #201 Resurrecting a Recommendations Platform at Comcast with Leemay Nassery Mon, 19 Nov 2018 19:19:55 +0000 In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast.

Leemay spoke at the Strange Loop conference a few months ago on “Resurrecting a recommendations platform.” In our conversation, Leemay and I discuss just how she and her team resurrected the Xfinity X1 recommendations platform, including rebuilding the data pipeline, the machine learning process, and the deployment and training of their updated models. We also touch on the importance of A-B testing and maintaining their rebuilt infrastructure. 

For the complete show notes for this episode, visit twimlai.com/talk/201

For more information on our upcoming eBook series or the AI Platforms series, visit twimlai.com/aiplatforms.

Make sure you sign up for our newsletter at twimlai.com/newsletter!

]]>
In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast.

Leemay spoke at the Strange Loop conference a few months ago on “Resurrecting a recommendations platform.” In our conversation, Leemay and I discuss just how she and her team resurrected the Xfinity X1 recommendations platform, including rebuilding the data pipeline, the machine learning process, and the deployment and training of their updated models. We also touch on the importance of A-B testing and maintaining their rebuilt infrastructure. 

For the complete show notes for this episode, visit twimlai.com/talk/201

For more information on our upcoming eBook series or the AI Platforms series, visit twimlai.com/aiplatforms.

Make sure you sign up for our newsletter at twimlai.com/newsletter!

]]>
48:47 clean podcast,science,technology,strange,tech,data,intelligence,learning,artificial,infrastructure,pipeline,machine,ai,platform,scale,loop,comcast,platforms,ml,xfinity,twiml,leemay,nassery In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast. In our conversation, Leemay and I discuss just how she and her team resurrected the Xfinity X1 recommendations platform, including rebuilding the data pipeline, the machine learning process, and the deployment and training of their updated models. We also touch on the importance of A-B testing and maintaining their rebuilt infrastructure. 201 full Sam Charrington
Productive Machine Learning at LinkedIn with Bee-Chung Chen - TWiML Talk #200 Productive Machine Learning at LinkedIn with Bee-Chung Chen Thu, 15 Nov 2018 20:05:16 +0000 In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn.

Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML, which was built with the hopes of providing a single platform for the entire lifecycle of developing, training, deploying, and testing machine learning models. In our conversation, Bee-Chung details Pro-ML, breaking down some of the major pieces of the pipeline including their feature marketplace, model creation tooling, and training management system to name a few. We also discuss LinkedIn’s experience bringing Pro-ML to the company's developers and the role the LinkedIn AI Academy plays in helping them get up to speed.

For the complete show notes, visit https://twimlai.com/talk/200.

For more information about the AI Platforms series, visit https://twimlai.com/aiplatforms.

Be sure to sign up for our newsletter at https://twimlai.com/newsletter.

]]>
In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn.

Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML, which was built with the hopes of providing a single platform for the entire lifecycle of developing, training, deploying, and testing machine learning models. In our conversation, Bee-Chung details Pro-ML, breaking down some of the major pieces of the pipeline including their feature marketplace, model creation tooling, and training management system to name a few. We also discuss LinkedIn’s experience bringing Pro-ML to the company's developers and the role the LinkedIn AI Academy plays in helping them get up to speed.

For the complete show notes, visit https://twimlai.com/talk/200.

For more information about the AI Platforms series, visit https://twimlai.com/aiplatforms.

Be sure to sign up for our newsletter at https://twimlai.com/newsletter.

]]>
49:02 clean podcast,science,technology,linkedin,tech,data,automation,intelligence,learning,artificial,developer,infrastructure,productive,pipeline,machine,ai,platform,scale,chen,platforms,ml,twiml,beechung,proml,photonml In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn. Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML. Bee-Chung breaks down some of the major pieces of the pipeline, LinkedIn’s experience bringing Pro-ML to the company's developers and the role the LinkedIn AI Academy plays in helping them get up to speed. For the complete show notes, visit https://twimlai.com/talk/200. 200 full Sam Charrington
Scaling Deep Learning on Kubernetes at OpenAI with Christopher Berner - TWiML Talk #199 Scaling Deep Learning on Kubernetes at OpenAI with Christopher Berner Mon, 12 Nov 2018 20:15:06 +0000 In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner.

Chris has played a key role in overhauling OpenAI’s deep learning infrastructure over the course of his two years with the company. In our conversation, we discuss the evolution of OpenAI’s deep learning platform, the core principles which have guided that evolution, and its current architecture. We dig deep into their use of Kubernetes and discuss various ecosystem players and projects that support running deep learning at scale on the open source project.

For the complete show notes for this episode, visit twimlai.com/talk/199.

For more information on the AI Platforms Series, or to sign up for our eBooks, visit twimlai.com/aiplatforms.

]]>
In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner.

Chris has played a key role in overhauling OpenAI’s deep learning infrastructure over the course of his two years with the company. In our conversation, we discuss the evolution of OpenAI’s deep learning platform, the core principles which have guided that evolution, and its current architecture. We dig deep into their use of Kubernetes and discuss various ecosystem players and projects that support running deep learning at scale on the open source project.

For the complete show notes for this episode, visit twimlai.com/talk/199.

For more information on the AI Platforms Series, or to sign up for our eBooks, visit twimlai.com/aiplatforms.

]]>
51:04 clean podcast,science,open,technology,tech,christopher,data,deep,intelligence,learning,source,architecture,artificial,infrastructure,machine,ai,platform,scale,platforms,ml,berner,kubernetes,openai,twiml In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner. In our conversation, we discuss the evolution of OpenAI’s deep learning platform, the core principles which have guided that evolution, and its current architecture. We dig deep into their use of Kubernetes and discuss various ecosystem players and projects that support running deep learning at scale on the open source project. 199 full Sam Charrington
Bighead: Airbnb's Machine Learning Platform with Atul Kale - TWiML Talk #198 Bighead: Airbnb's Machine Learning Platform with Atul Kale Thu, 08 Nov 2018 20:17:11 +0000 In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb.

Atul and I met at the Strata Data conference a while back to discuss Airbnb’s internal machine learning platform, Bighead. In our conversation, Atul outlines the ML lifecycle at Airbnb and how the various components of Bighead support it. We then dig into the major components of Bighead, which include Redspot, their supercharged Jupyter notebook service, Deep Thought, their real-time inference environment, Zipline, their data management platform, and quite a few others. We also take a look at some of Atul’s best practices for scaling machine learning, and discuss a special announcement that Atul and his team made at Strata.

For the complete show notes, visit twimlai.com/talk/198

For more information on the AI Platforms series, visit twimlai.com/aiplatforms.

]]>
In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb.

Atul and I met at the Strata Data conference a while back to discuss Airbnb’s internal machine learning platform, Bighead. In our conversation, Atul outlines the ML lifecycle at Airbnb and how the various components of Bighead support it. We then dig into the major components of Bighead, which include Redspot, their supercharged Jupyter notebook service, Deep Thought, their real-time inference environment, Zipline, their data management platform, and quite a few others. We also take a look at some of Atul’s best practices for scaling machine learning, and discuss a special announcement that Atul and his team made at Strata.

For the complete show notes, visit twimlai.com/talk/198

For more information on the AI Platforms series, visit twimlai.com/aiplatforms.

]]>
51:15 clean podcast,science,conference,technology,tech,data,deep,intelligence,learning,thought,artificial,infrastructure,notebook,machine,ai,platform,strata,scale,kale,scaling,zipline,atul,platforms,ml,airbnb,bighead,jupyter,twiml In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb. In our conversation, we discuss Airbnb’s internal machine learning platform, Bighead. Atul outlines the ML lifecycle at Airbnb and how the various components of Bighead support it. We then dig into the major components of Bighead, some of Atul’s best practices for scaling machine learning, and a special announcement that Atul and his team made at Strata. 198 full Sam Charrington
Facebook's FBLearner Platform with Aditya Kalro - TWiML Talk #197 Facebook's FBLearner Platform with Aditya Kalro Tue, 06 Nov 2018 21:53:16 +0000 In this, the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow.

Introduced in May of 2016, FBLearner Flow is the workflow management platform at the heart of the Facebook ML engineering ecosystem. In our conversation, Aditya and I discuss the history and development of the platform, as well as its functionality and its evolution from an initial focus on model training to supporting the entire ML lifecycle at Facebook. Aditya also walks us through the data science tech stack at Facebook, and shares his advice for supporting ML development at scale.

For the complete show notes, visit twimlai.com/talk/197.

To learn more about our AI Platforms series, or to download our upcoming ebooks, visit twimlai.com/aiplatforms.

]]>
In this, the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow.

Introduced in May of 2016, FBLearner Flow is the workflow management platform at the heart of the Facebook ML engineering ecosystem. In our conversation, Aditya and I discuss the history and development of the platform, as well as its functionality and its evolution from an initial focus on model training to supporting the entire ML lifecycle at Facebook. Aditya also walks us through the data science tech stack at Facebook, and shares his advice for supporting ML development at scale.

For the complete show notes, visit twimlai.com/talk/197.

To learn more about our AI Platforms series, or to download our upcoming ebooks, visit twimlai.com/aiplatforms.

]]>
40:44 clean podcast,science,facebook,technology,tech,data,intelligence,flow,learning,artificial,machine,ai,structure,platforms,ml,kubernetes,aditya,twiml,kalro,fblearner In the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow. FBLearner Flow is the workflow management platform at the heart of the Facebook ML engineering ecosystem. We discuss the history and development of the platform, as well as its functionality and its evolution from an initial focus on model training to supporting the entire ML lifecycle at Facebook. 197 full Sam Charrington
Geometric Statistics in Machine Learning w/ geomstats with Nina Miolane - TWiML Talk #196 Geometric Statistics in Machine Learning w/ geomstats with Nina Miolane Thu, 01 Nov 2018 16:40:44 +0000 In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University.

Nina and I recently spoke about her work in the field of geometric statistics in machine learning. Specifically, we discuss the application of Riemannian geometry, which is the study of curved surfaces, to ML. Riemannian geometry can be helpful in building machine learning models in a number of situations including in computational anatomy and medicine where it helps Nina create models of organs like the brain and heart. In our discussion we review the differences between Riemannian and Euclidean geometry in theory and practice, and discuss several examples from Nina’s research. We also discuss her new Geomstats project, which is a python package that simplifies computations and statistics on manifolds with geometric structures.

The full show notes for this episode can be found at twimlai.com/talk/196.

]]>
In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University.

Nina and I recently spoke about her work in the field of geometric statistics in machine learning. Specifically, we discuss the application of Riemannian geometry, which is the study of curved surfaces, to ML. Riemannian geometry can be helpful in building machine learning models in a number of situations including in computational anatomy and medicine where it helps Nina create models of organs like the brain and heart. In our discussion we review the differences between Riemannian and Euclidean geometry in theory and practice, and discuss several examples from Nina’s research. We also discuss her new Geomstats project, which is a python package that simplifies computations and statistics on manifolds with geometric structures.
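
To make the Riemannian-versus-Euclidean distinction concrete, here is a tiny sketch (not from the episode, and not using the geomstats API): for two points on the unit sphere, the straight-line Euclidean distance through the ambient space differs from the geodesic distance measured along the curved surface itself, which is the kind of distance Riemannian methods work with.

    import numpy as np

    def euclidean_distance(a, b):
        # Straight-line (chord) distance through the ambient space.
        return np.linalg.norm(a - b)

    def geodesic_distance_on_sphere(a, b):
        # Great-circle (Riemannian) distance between unit vectors on the sphere.
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    p = np.array([1.0, 0.0, 0.0])   # two points on the unit 2-sphere
    q = np.array([0.0, 1.0, 0.0])
    print(euclidean_distance(p, q))           # ~1.414, cuts through the sphere
    print(geodesic_distance_on_sphere(p, q))  # ~1.571, stays on the surface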

The full show notes for this episode can be found at twimlai.com/talk/196.

]]>
44:45 clean podcast,science,nina,technology,tech,data,intelligence,learning,statistics,geometry,python,stanford,artificial,ibm,machine,ai,ml,euclidean,twiml,miolane,geomstats,riemannian In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University. Nina and I spoke about her work in the field of geometric statistics in ML, specifically the application of Riemannian geometry, which is the study of curved surfaces, to ML. In our discussion we review the differences between Riemannian and Euclidean geometry in theory and her new Geomstats project, which is a python package that simplifies computations and statistics on manifolds with geometric structures. 196 full Sam Charrington
Milestones in Neural Natural Language Processing with Sebastian Ruder - TWiML Talk #195 Milestones in Neural Natural Language Processing with Sebastian Ruder Mon, 29 Oct 2018 20:16:23 +0000 In this episode, we’re joined by Sebastian Ruder, a PhD student studying natural language processing at the National University of Ireland and a Research Scientist at text analysis startup Aylien.

In our conversation, Sebastian and I discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also discuss the use of attention-based models, Tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his recent ULMFit paper, short for “Universal Language Model Fine-tuning for Text Classification,” which he co-authored with Jeremy Howard of fast.ai who I interviewed in episode 186.

For the complete show notes for this episode, visit https://twimlai.com/talk/195.

]]>
In this episode, we’re joined by Sebastian Ruder, a PhD student studying natural language processing at the National University of Ireland and a Research Scientist at text analysis startup Aylien.

In our conversation, Sebastian and I discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also discuss the use of attention-based models, Tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his recent ULMFit paper, short for “Universal Language Model Fine-tuning for Text Classification,” which he co-authored with Jeremy Howard of fast.ai who I interviewed in episode 186.
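
One of the ideas introduced in the ULMFit paper is discriminative fine-tuning: lower, more general layers of a pretrained language model get smaller learning rates than upper, task-specific layers. The sketch below illustrates just that idea in PyTorch with a made-up toy model and made-up learning rates; it is not the paper’s full training recipe.

    import torch
    import torch.nn as nn

    # Toy stand-ins for a pretrained language-model encoder plus a classifier head.
    embedding = nn.Embedding(10000, 128)             # lower layer: most general
    encoder = nn.LSTM(128, 128, batch_first=True)    # middle layer
    head = nn.Linear(128, 2)                         # upper layer: most task-specific

    # Discriminative fine-tuning: one parameter group per layer, each with its own lr.
    base_lr = 1e-3
    optimizer = torch.optim.SGD(
        [
            {"params": embedding.parameters(), "lr": base_lr / 4},
            {"params": encoder.parameters(), "lr": base_lr / 2},
            {"params": head.parameters(), "lr": base_lr},
        ],
        lr=base_lr,      # default, overridden by the per-group values above
        momentum=0.9,
    )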

For the complete show notes for this episode, visit https://twimlai.com/talk/195.

]]>
01:01:40 clean podcast,of,science,technology,tech,data,language,intelligence,jeremy,learning,university,national,processing,natural,howard,artificial,ibm,neural,ruder,machine,ai,sebastian,ireland,nlp,ml,lstm,twiml,fastai,umlfit In this episode, we’re joined by Sebastian Ruder, PhD student studying NLP at National University of Ireland and Research Scientist at text analysis startup Aylien. We discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also look at the use of attention-based models, Tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his ULMFit paper, which he co-authored with Jeremy Howard of fast.ai who I interviewed in episode 186. 195 full Sam Charrington
Natural Language Processing at StockTwits with Garrett Hoffman - TWiML Talk #194 Natural Language Processing at StockTwits with Garrett Hoffman Thu, 25 Oct 2018 21:22:02 +0000 In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits.

Garrett and I caught up at last month’s Strata Data conference, where he presented a tutorial on “Deep Learning Methods for NLP with Emphasis on Financial Services.” Stocktwits is a social network for the investing community which has its roots in the use of the $cashtag on Twitter. In our conversation, we discuss applications such as Stocktwits’ own use of “social sentiment graphs” built on multilayer LSTM networks to gauge community sentiment about certain stocks in real time, as well as the more general use of natural language processing for generating trading ideas.

I’d also like to send a huge thanks to our friends at IBM for their sponsorship of this episode. Are you interested in exploring code patterns leveraging multiple technologies, including ML and AI? Then check out IBM Developer. With more than 100 open source programs, a library of knowledge resources, developer advocates ready to help, and a global community of developers, what in the world will you create? Dive in at https://ibm.biz/mlaipodcast, and be sure to let them know that TWiML sent you!

For the complete show notes for this episode, visit https://twimlai.com/talk/194.

]]>
In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits.

Garrett and I caught up at last month’s Strata Data conference, where he presented a tutorial on “Deep Learning Methods for NLP with Emphasis on Financial Services.” Stocktwits is a social network for the investing community which has its roots in the use of the $cashtag on Twitter. In our conversation, we discuss applications such as Stocktwits’ own use of “social sentiment graphs” built on multilayer LSTM networks to gauge community sentiment about certain stocks in real time, as well as the more general use of natural language processing for generating trading ideas.
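
For readers who want a concrete picture of what a multilayer LSTM sentiment model can look like in code, here is a minimal PyTorch sketch. The vocabulary size, dimensions, and three-way bearish/neutral/bullish output are made up for illustration; this is not StockTwits’ actual architecture.

    import torch
    import torch.nn as nn

    class SentimentLSTM(nn.Module):
        # A small multilayer LSTM text classifier, sketched for illustration.
        def __init__(self, vocab_size=20000, embed_dim=64, hidden_dim=128, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)
            self.head = nn.Linear(hidden_dim, 3)   # e.g. bearish / neutral / bullish

        def forward(self, token_ids):
            x = self.embed(token_ids)              # (batch, seq_len, embed_dim)
            _, (h_n, _) = self.lstm(x)             # h_n: (num_layers, batch, hidden_dim)
            return self.head(h_n[-1])              # logits from the top layer's final state

    model = SentimentLSTM()
    logits = model(torch.randint(0, 20000, (8, 40)))   # a batch of 8 token sequences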

I’d also like to send a huge thanks to our friends at IBM for their sponsorship of this episode. Are you interested in exploring code patterns leveraging multiple technologies, including ML and AI? Then check out IBM Developer. With more than 100 open source programs, a library of knowledge resources, developer advocates ready to help, and a global community of developers, what in the world will you create? Dive in at https://ibm.biz/mlaipodcast, and be sure to let them know that TWiML sent you!

For the complete show notes for this episode, visit https://twimlai.com/talk/194.

]]>
51:38 clean podcast,science,technology,tech,data,language,deep,intelligence,learning,processing,natural,financial,services,artificial,ibm,developer,machine,ai,hoffman,garrett,nlp,ml,stocktwits,fintech,lstm,twiml,cashtag In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits. Stocktwits is a social network for the investing community which has its roots in the use of the $cashtag on Twitter. In our conversation, we discuss applications such as Stocktwits’ own use of “social sentiment graphs” built on multilayer LSTM networks to gauge community sentiment about certain stocks in real time, as well as the more general use of natural language processing for generating trading ideas. 194 full Sam Charrington
Advanced Reinforcement Learning & Data Science for Social Impact with Vukosi Marivate - TWiML Talk #193 Advanced Reinforcement Learning & Data Science for Social Impact with Vukosi Marivate Tue, 23 Oct 2018 19:30:30 +0000 In this, the final show of our Deep Learning Indaba Series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba.

My conversation with Vukosi fell into two distinct parts. The first part focused on his PhD research in the area of reinforcement learning, discussing several advanced RL scenarios including inverse RL, multi-agent RL, and using RL when we have incomplete knowledge of the environment. We then moved on to discuss his current research, which broadly falls under the banner of data science with social impact. Specifically, we review several of the applications he and his students are currently exploring in areas such as public safety and energy.

The complete show notes for this episode can be found at https://twimlai.com/talk/193.

For more information on our Deep Learning Indaba Series, visit https://twimlai.com/indaba2018

]]>
In this, the final show of our Deep Learning Indaba Series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba.

My conversation with Vukosi fell into two distinct parts. The first part focused on his PhD research in the area of reinforcement learning, discussing several advanced RL scenarios including inverse RL, multi-agent RL, and using RL when we have incomplete knowledge of the environment. We then moved on to discuss his current research, which broadly falls under the banner of data science with social impact. Specifically, we review several of the applications he and his students are currently exploring in areas such as public safety and energy.

The complete show notes for this episode can be found at https://twimlai.com/talk/193.

For more information on our Deep Learning Indaba Series, visit https://twimlai.com/indaba2018

]]>
47:14 clean podcast,of,science,social,technology,tech,data,deep,intelligence,learning,university,artificial,machine,ai,reinforcement,indaba,ml,pretoria,twiml,vukosi,marivate In the final episode of our Deep Learning Indaba series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba. My conversation with Vukosi falls into two distinct parts, his PhD research in reinforcement learning, and his current research, which falls under the banner of data science with social impact. We discuss several advanced RL scenarios, along with several applications he is currently exploring in areas like public safety and energy. 193 full Sam Charrington
AI Ethics, Strategic Decisioning and Game Theory with Osonde Osoba - TWiML Talk #192 AI Ethics, Strategic Decisioning and Game Theory with Osonde Osoba Thu, 18 Oct 2018 14:59:28 +0000 In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation and Professor at the Pardee RAND Graduate School.

Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and Policy. We discuss his framework-based approach for evaluating ethical issues, such as applying the ethical principles laid out in the Belmont Report, and how to build an intuition for where ethical flashpoints may exist in these discussions. We then shift gears to Osonde’s own model development research and end up in a really interesting discussion about the application of machine learning to strategic decisions and game theory, including the use of fuzzy cognitive map models.

The complete show notes for this episode can be found at twimlai.com/talk/192.

For more info on the Deep Learning Indaba series, visit twimlai.com/indaba2018.

]]>
In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation and Professor at the Pardee RAND Graduate School.

Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and Policy. We discuss his framework-based approach for evaluating ethical issues, such as applying the ethical principles laid out in the Belmont Report, and how to build an intuition for where ethical flashpoints may exist in these discussions. We then shift gears to Osonde’s own model development research and end up in a really interesting discussion about the application of machine learning to strategic decisions and game theory, including the use of fuzzy cognitive map models.
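
A fuzzy cognitive map represents concepts as nodes with activation levels and signed, weighted edges encoding causal influence, and the map is iterated until the activations settle. The sketch below uses one common update rule with made-up concepts and weights, purely to show the basic mechanics of such a model.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Made-up 3-concept map: W[i, j] is the causal influence of concept j on concept i.
    W = np.array([[0.0,  0.6, -0.4],
                  [0.3,  0.0,  0.5],
                  [-0.2, 0.7,  0.0]])

    state = np.array([0.5, 0.2, 0.8])   # initial activation level of each concept
    for _ in range(20):
        state = sigmoid(W @ state)      # one common FCM update rule
    print(state)                        # activations after the map has settled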

The complete show notes for this episode can be found at twimlai.com/talk/192.

For more info on the Deep Learning Indaba series, visit twimlai.com/indaba2018.

]]>
47:26 clean podcast,science,technology,tech,model,data,game,intelligence,development,theory,policy,learning,belmont,fuzzy,ethics,rand,report,cognitive,maps,artificial,framework,machine,ai,ml,osoba,twiml,osonde In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation. Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and Policy. We discuss his framework-based approach for evaluating ethical issues and how to build an intuition for where ethical flashpoints may exist in these discussions. We also discuss Osonde’s own model development research, including the application of machine learning to strategic decisions and game theor 192 full Sam Charrington
Acoustic Word Embeddings for Low Resource Speech Processing with Herman Kamper - TWiML Talk #191 Acoustic Word Embeddings for Low Resource Speech Processing with Herman Kamper Tue, 16 Oct 2018 16:47:40 +0000 In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, Lecturer in the electrical and electronics engineering department at Stellenbosch University in SA and a co-organizer of the Indaba.

Herman and I discuss his work on limited- and zero-resource speech recognition, how those differ from regular speech recognition, and the tension between linguistic and statistical methods in this space. We dive into the specifics of the methods being used and developed in Herman’s lab as well, including how phoneme data is used for segmenting and processing speech data.

The full show notes for this episode can be found at https://twimlai.com/talk/191.

For more on the Deep Learning Indaba series, visit https://twimlai.com/indaba2018.

]]>
In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, Lecturer in the electrical and electronics engineering department at Stellenbosch University in SA and a co-organizer of the Indaba.

Herman and I discuss his work on limited- and zero-resource speech recognition, how those differ from regular speech recognition, and the tension between linguistic and statistical methods in this space. We dive into the specifics of the methods being used and developed in Herman’s lab as well, including how phoneme data is used for segmenting and processing speech data.
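
One common building block in low- and zero-resource speech work is dynamic time warping, which aligns two variable-length acoustic feature sequences without needing any transcriptions. Here is a minimal sketch with made-up MFCC-like features; it illustrates the general tool rather than the specific methods developed in Herman’s lab.

    import numpy as np

    def dtw_distance(a, b):
        # Dynamic time warping cost between feature sequences a (n, d) and b (m, d).
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
                cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in b
                                     cost[i, j - 1],      # skip a frame in a
                                     cost[i - 1, j - 1])  # align the two frames
        return cost[n, m]

    seq1 = np.random.randn(30, 13)   # two made-up MFCC-like sequences
    seq2 = np.random.randn(42, 13)   # of different lengths
    print(dtw_distance(seq1, seq2))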

The full show notes for this episode can be found at https://twimlai.com/talk/191.

For more on the Deep Learning Indaba series, visit https://twimlai.com/indaba2018.

]]>
01:02:00 clean podcast,science,technology,tech,data,deep,intelligence,learning,university,processing,speech,herman,artificial,machine,ai,nlp,phoneme,indaba,stellenbosch,ml,twiml,kamper In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, lecturer at Stellenbosch University in SA and a co-organizer of the Indaba. We discuss his work on limited- and zero-resource speech recognition, how those differ from regular speech recognition, and the tension between linguistic and statistical methods in this space. We also dive into the specifics of the methods being used and developed in Herman’s lab. 191 full Sam Charrington
Learning Representations for Visual Search with Naila Murray - TWiML Talk #190 Learning Representations for Visual Search with Naila Murray Fri, 12 Oct 2018 16:52:54 +0000 In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe.

Naila presented at the Indaba on computer vision, and in this discussion we explore her work on visual attention, including why visual attention is important and the trajectory of work in the field over time. We also discuss her paper “Generalized Max Pooling,” and her recent research interest in learning representations with deep learning.

For the complete show notes, visit twimlai.com/talk/190.

]]>
In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe.

Naila presented at the Indaba on computer vision, and in this discussion we explore her work on visual attention, including why visual attention is important and the trajectory of work in the field over time. We also discuss her paper “Generalized Max Pooling,” and her recent research interest in learning representations with deep learning.
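
As a rough illustration of the pooling idea, generalized max pooling can be read as a ridge-regression problem: choose a pooled vector whose dot product with every local descriptor is pushed toward 1, so that frequent and rare descriptors contribute more equally. The few lines below sketch that formulation with made-up descriptors; treat it as an approximation of the idea rather than a faithful reimplementation of the paper.

    import numpy as np

    def generalized_max_pooling(X, lam=1e-3):
        # X: (d, n) matrix whose columns are n local descriptors of dimension d.
        # Solves min_psi ||X.T @ psi - 1||^2 + lam * ||psi||^2, pushing each
        # descriptor's contribution x_i . psi toward 1.
        d, n = X.shape
        return np.linalg.solve(X @ X.T + lam * np.eye(d), X @ np.ones(n))

    descriptors = np.random.randn(128, 500)     # e.g. 500 made-up local image descriptors
    pooled = generalized_max_pooling(descriptors)
    print(pooled.shape)                         # (128,) image-level representation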

For the complete show notes, visit twimlai.com/talk/190.

]]>
41:54 clean podcast,science,technology,tech,data,deep,intelligence,vision,learning,computer,visual,attention,artificial,machine,murray,ai,tracking,labs,naila,indaba,gaze,ml,naver,representations,twiml In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe. Naila presented at the Indaba on computer vision. In this discussion, we explore her work on visual attention, including why visual attention is important and the trajectory of work in the field over time. We also discuss her paper  “Generalized Max Pooling,” and much more! For the complete show notes, visit twimlai.com/tal 190 full Sam Charrington
Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189 Evaluating Model Explainability Methods with Sara Hooker Wed, 10 Oct 2018 18:24:51 +0000 In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain.

I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and when it’s important, and explore some nuances like the distinction between interpreting model decisions vs model function. We also dig into her paper Evaluating Feature Importance Estimates and look at the relationship between this work and interpretability approaches like LIME.

We also talk a bit about Google, in particular, the relationship between Brain and the rest of the Google AI landscape and the significance of the recently announced Google AI Lab in Accra, Ghana, being led by friend of the show Moustapha Cisse. And, of course, we chat a bit about the Indaba as well.

For the complete show notes for this episode, visit twimlai.com/talk/189.

For more information on the Deep Learning Indaba series, visit twimlai.com/indaba2018

]]>
In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain.

I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and when it’s important, and explore some nuances like the distinction between interpreting model decisions vs model function. We also dig into her paper Evaluating Feature Importance Estimates and look at the relationship between this work and interpretability approaches like LIME.
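
To give a feel for the kind of estimates being evaluated, here is a generic occlusion-style feature importance score: each input feature is scored by how much the model’s output changes when that feature is masked out. The toy linear “model” and baseline value are made up; this conveys the general flavor of such estimates rather than the specific methods compared in Sara’s paper.

    import numpy as np

    def occlusion_importance(predict, x, baseline=0.0):
        # Score each feature by the drop in the model's output when it is masked.
        reference = predict(x)
        scores = np.zeros_like(x)
        for i in range(len(x)):
            perturbed = x.copy()
            perturbed[i] = baseline
            scores[i] = reference - predict(perturbed)
        return scores

    weights = np.array([2.0, -1.0, 0.0, 0.5])         # toy linear "model"
    predict = lambda x: float(weights @ x)
    print(occlusion_importance(predict, np.ones(4)))  # [ 2. -1.  0.  0.5]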

We also talk a bit about Google, in particular, the relationship between Brain and the rest of the Google AI landscape and the significance of the recently announced Google AI Lab in Accra, Ghana, being led by friend of the show Moustapha Cisse. And, of course, we chat a bit about the Indaba as well.

For the complete show notes for this episode, visit twimlai.com/talk/189.

For more information on the Deep Learning Indaba series, visit twimlai.com/indaba2018

]]>
01:05:02 clean podcast,science,technology,tech,brain,model,google,data,deep,intelligence,learning,sara,artificial,machine,ai,hooker,lime,indaba,ml,explainability,twiml,interpretability In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I spoke with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and nuances like the distinction between interpreting model decisions vs model function. We also talk about the relationship between Google Brain and the rest of the Google AI landscape and the significance of the Google AI Lab in Accra, Ghana. 189 full Sam Charrington
Graph Analytic Systems with Zachary Hanif - TWiML Talk #188 Graph Analytic Systems with Zachary Hanif Mon, 08 Oct 2018 19:49:27 +0000 In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning.

Zach led a session at Strata called “Network effects: Working with modern graph analytic systems,” which we had a great chat about back in New York. We start our discussion with a look at the role of graph analytics in the machine learning toolkit, including some important application areas for graph-based systems. We continue with an overview of the different ways to implement graph analytics, with a particular emphasis on the emerging role of what he calls graphical processing engines which excel at handling large datasets. We also discuss the relationship between these kinds of systems and probabilistic graphical models, graphical embedding models, and graph convolutional networks in deep learning.

The complete show notes for this episode can be found at twimlai.com/talk/188.

For more information on the Strata Data Conference series, visit twimlai.com/stratany2018.

]]>
In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning.

Zach led a session at Strata called “Network effects: Working with modern graph analytic systems,” which we had a great chat about back in New York. We start our discussion with a look at the role of graph analytics in the machine learning toolkit, including some important application areas for graph-based systems. We continue with an overview of the different ways to implement graph analytics, with a particular emphasis on the emerging role of what he calls graphical processing engines which excel at handling large datasets. We also discuss the relationship between these kinds of systems and probabilistic graphical models, graphical embedding models, and graph convolutional networks in deep learning.
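
As a concrete example of the whole-graph computations these engines are built for, here is PageRank by power iteration on a tiny made-up adjacency matrix; production graph engines distribute essentially this kind of repeated sparse multiply across far larger graphs.

    import numpy as np

    def pagerank(adj, damping=0.85, iters=50):
        # adj[i, j] = 1 if there is an edge from node i to node j.
        n = adj.shape[0]
        out_degree = adj.sum(axis=1, keepdims=True)
        out_degree[out_degree == 0] = 1.0            # avoid dividing by zero for sinks
        transition = adj / out_degree                # row-stochastic transition matrix
        rank = np.full(n, 1.0 / n)
        for _ in range(iters):
            rank = (1 - damping) / n + damping * transition.T @ rank
        return rank

    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    print(pagerank(adj))   # node 2, with the most inbound links, ranks highest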

The complete show notes for this episode can be found at twimlai.com/talk/188.

For more information on the Strata Data Conference series, visit twimlai.com/stratany2018.

]]>
55:29 clean podcast,science,center,technology,system,engine,networks,tech,for,data,deep,intelligence,one,learning,processing,artificial,capital,analytics,machine,ai,zach,strata,graph,graphical,hanif,ml,cloudera,analytic,convolutional,twiml,embeddings In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning. We start our discussion with a look at the role of graph analytics in the ML toolkit, including some important application areas for graph-based systems. Zach gives us an overview of the different ways to implement graph analytics, including what he calls graphical processing engines which excel at handling large datasets, & much m 188 full Sam Charrington
Diversification in Recommender Systems with Ahsan Ashraf - TWiML Talk #187 Diversification in Recommender Systems with Ahsan Ashraf Thu, 04 Oct 2018 17:28:05 +0000 In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest.

In our conversation, Ahsan and I discuss his presentation from the conference, “Diversification in recommender systems: Using topical variety to increase user satisfaction.” We cover the experiments his team ran to explore the impact of diversification in user’s boards, the methodology his team used to incorporate variety into the Pinterest recommendation system, the metrics they monitored through the process, and how they performed sensitivity sanity testing.

The show notes for this episode can be found at https://twimlai.com/talk/187.

]]>
In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest.

In our conversation, Ahsan and I discuss his presentation from the conference, “Diversification in recommender systems: Using topical variety to increase user satisfaction.” We cover the experiments his team ran to explore the impact of diversification in user’s boards, the methodology his team used to incorporate variety into the Pinterest recommendation system, the metrics they monitored through the process, and how they performed sensitivity sanity testing.
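
To show what diversification can look like mechanically, here is a generic greedy re-ranker in the spirit of maximal marginal relevance: at each step it picks the item with the best trade-off between predicted relevance and dissimilarity to what has already been chosen. The scores, embeddings, and trade-off weight are made up, and this is a textbook technique shown for illustration rather than Pinterest’s actual system.

    import numpy as np

    def diversified_rerank(relevance, item_vectors, k=5, trade_off=0.7):
        # Greedy MMR-style re-ranking: trade_off=1.0 is pure relevance, 0.0 pure diversity.
        normed = item_vectors / np.linalg.norm(item_vectors, axis=1, keepdims=True)
        sim = normed @ normed.T                      # cosine similarity between items
        selected, candidates = [], list(range(len(relevance)))
        while candidates and len(selected) < k:
            def score(i):
                redundancy = max(sim[i, j] for j in selected) if selected else 0.0
                return trade_off * relevance[i] - (1 - trade_off) * redundancy
            best = max(candidates, key=score)
            selected.append(best)
            candidates.remove(best)
        return selected

    relevance = np.array([0.9, 0.85, 0.8, 0.4, 0.3])   # made-up predicted relevance
    vectors = np.random.randn(5, 16)                   # made-up topic embeddings
    print(diversified_rerank(relevance, vectors, k=3))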

The show notes for this episode can be found at https://twimlai.com/talk/187.

]]>
45:43 clean podcast,science,technology,tech,data,systems,intelligence,learning,testing,artificial,machine,ai,pin,sanity,diversification,ml,recommender,pinterest,ashraf,ahsan,twiml,embeddings In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest. We discuss his presentation, “Diversification in recommender systems: Using topical variety to increase user satisfaction,” covering the experiments his team ran to explore the impact of diversification in user’s boards, the methodology his team used to incorporate variety into the Pinterest recommendation system and much more! The show notes can be found at https://twimlai.com/talk/18 187 full Sam Charrington
The Fastai v1 Deep Learning Framework with Jeremy Howard - TWiML Talk #186 The Fastai v1 Deep Learning Framework with Jeremy Howard Tue, 02 Oct 2018 16:13:49 +0000 In today's episode we’ll be taking a break from our Strata Data conference series and presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai.

Fast.ai is a company many of our listeners are quite familiar with due to their popular deep learning course. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural Pytorch Devcon in San Francisco.

Jeremy and I cover a ton of ground in this conversation. Of course, we dive into the new library and explore why it’s important and what’s changed. We also explore the unique way in which it was developed and what it means for the future of the fast.ai courses. Jeremy shares a ton of great insights and lessons learned in this conversation, not to mention a bunch of really interesting-sounding papers.

The complete show notes and links to the fastai library can be found here.

]]>
In today's episode we’ll be taking a break from our Strata Data conference series and presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai.

Fast.ai is a company many of our listeners are quite familiar with due to their popular deep learning course. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural Pytorch Devcon in San Francisco.

Jeremy and I cover a ton of ground in this conversation. Of course, we dive into the new library and explore why it’s important and what’s changed. We also explore the unique way in which it was developed and what it means for the future of the fast.ai courses. Jeremy shares a ton of great insights and lessons learned in this conversation, not to mention a bunch of really interesting-sounding papers.

The complete show notes and links to the fastai library can be found here.

]]>
01:11:19 clean podcast,science,technology,tech,data,deep,intelligence,jeremy,learning,rachel,library,howard,python,thomas,artificial,framework,notebook,machine,ai,transfer,v1,ml,jupyter,twiml,fastai,pytorch,kaggle In today's episode we're presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural Pytorch Devcon in San Francisco. In our conversation, we dive into the new library, exploring why it’s important and what’s changed, the unique way in which it was developed, what it means for the future of the fast.ai courses, and much more! 186 full Sam Charrington
Federated ML for Edge Applications with Justin Norman - TWiML Talk #185 Federated ML for Edge Applications with Justin Norman Thu, 27 Sep 2018 21:40:25 +0000 In this episode of our Strata Data conference series, we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs.

Fast Forward Labs was an Applied AI research firm and consultancy founded by Hilary Mason, whose TWiML Talk episode remains an all-time fan favorite. My chat with Justin took place on the one-year anniversary of Fast Forward Labs’ acquisition by Cloudera, so we start with an update on the company before diving into a look at some of their recent and upcoming research projects. Specifically, we discuss their recent report on Multi-Task Learning and their upcoming research into Federated Machine Learning for AI at the edge.

To learn more about Cloudera and CFFL, visit Cloudera's Machine Learning resource center at cloudera.com/ml.

For the complete show notes, visit https://twimlai.com/talk/185.

]]>
In this episode of our Strata Data conference series, we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs.

Fast Forward Labs was an Applied AI research firm and consultancy founded by Hilary Mason, whose TWiML Talk episode remains an all-time fan favorite. My chat with Justin took place on the one-year anniversary of Fast Forward Labs’ acquisition by Cloudera, so we start with an update on the company before diving into a look at some of their recent and upcoming research projects. Specifically, we discuss their recent report on Multi-Task Learning and their upcoming research into Federated Machine Learning for AI at the edge.
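
Federated learning keeps raw data on the participating devices and only ships model updates to a central aggregator. The sketch below shows the federated averaging step at the heart of many such systems, with made-up client parameters and dataset sizes; it is a generic illustration, not CFFL’s implementation.

    import numpy as np

    def federated_average(client_params, client_sizes):
        # Weighted average of per-client parameters, weighted by local dataset size.
        sizes = np.asarray(client_sizes, dtype=float)
        coeffs = sizes / sizes.sum()
        return sum(c * p for c, p in zip(coeffs, client_params))

    clients = [np.array([0.9, 0.1]),    # parameters after local training on device 1
               np.array([1.1, -0.2]),   # ... device 2
               np.array([1.0, 0.0])]    # ... device 3
    sizes = [100, 400, 250]             # number of local examples on each device
    print(federated_average(clients, sizes))   # new global model, raw data never pooled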

To learn more about Cloudera and CFFL, visit Cloudera's Machine Learning resource center at cloudera.com/ml.

For the complete show notes, visit https://twimlai.com/talk/185.

]]>
48:25 clean podcast,science,justin,technology,tech,data,intelligence,one,mason,hilary,learning,research,multi,fast,artificial,norman,capital,machine,applied,forward,ai,strata,task,federated,labs,ml,cloudera,twiml In this episode we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs. In my chat with Justin we start with an update on the company before diving into a look at some of recent and upcoming research projects. Specifically, we discuss their recent report on Multi-Task Learning and their upcoming research into Federated Machine Learning for AI at the edge. For the complete show notes, visit https://twimlai.com/talk/185. 185 full Sam Charrington
Exploring Dark Energy & Star Formation w/ ML with Viviana Acquaviva - TWiML Talk #184 Exploring Dark Energy & Star Formation w/ ML with Viviana Acquaviva Wed, 26 Sep 2018 17:49:27 +0000 In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology.

Viviana led a tutorial at the conference, titled “Learning Machine Learning using Astronomy data sets.” In our conversation, we begin by discussing an ongoing project she’s a part of called the “Hobby-Eberly Telescope Dark Energy eXperiment,” or HETDEX. In this project, Viviana tackles the challenge of understanding how and why the expansion of the universe is accelerating, which runs counter to what gravity alone would predict. We discuss her motivation for undertaking this project, how she gets her data, the models she uses, and how she evaluates their performance.

The complete show notes can be found at https://twimlai.com/talk/184

]]>
In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology.

Viviana led a tutorial at the conference, titled “Learning Machine Learning using Astronomy data sets.” In our conversation, we begin by discussing an ongoing project she’s a part of called the “Hobby-Eberly Telescope Dark Energy eXperiment,” or HETDEX. In this project, Viviana tackles the challenge of understanding how and why the expansion of the universe is accelerating, which runs counter to what gravity alone would predict. We discuss her motivation for undertaking this project, how she gets her data, the models she uses, and how she evaluates their performance.

The complete show notes can be found at https://twimlai.com/talk/184

]]>
41:22 clean podcast,science,energy,technology,tech,data,intelligence,astronomy,dark,universe,hobby,learning,telescope,artificial,machine,ai,strata,astrophysics,ml,eberly,viviana,acquaviva,twiml,hetdex In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology. In our conversation, we discuss an ongoing project she’s a part of called the “Hobby-Eberly Telescope Dark Energy eXperiment,” her motivation for undertaking this project, how she gets her data, the models she uses, and how she evaluates their performance. The complete show notes can be found at https://twimlai.com/talk/184.  184 full Sam Charrington
Document Vectors in the Wild with James Dreiss - TWiML Talk #183 Document Vectors in the Wild with James Dreiss Mon, 24 Sep 2018 18:13:13 +0000 In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters.

James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content recommendation system,” in which he details how Reuters implemented document vectors to recommend content to users of their new “infinite scroll” page layout. In our conversation we take a look at what document vectors are and how they’re created, how they tested the accuracy of their models, and the future of embeddings for natural language processing.

The complete show notes for this episode can be found at https://twimlai.com/talk/183.

For more info on the Strata Data Conference Series, visit https://twimlai.com/stratany2018.

]]>
In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters.

James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content recommendation system,” in which he details how Reuters implemented document vectors to recommend content to users of their new “infinite scroll” page layout. In our conversation we take a look at what document vectors are and how they’re created, how they tested the accuracy of their models, and the future of embeddings for natural language processing.
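
Once each article has a document vector, the recommendation step itself can be as simple as a nearest-neighbor lookup in that space. The sketch below assumes the vectors already exist (for example, from doc2vec-style training) and shows only the cosine-similarity lookup, with random stand-in vectors; it is an illustration, not Reuters’ production pipeline.

    import numpy as np

    def recommend(doc_vectors, current_doc, top_n=3):
        # Return indices of the documents most similar to the one being read.
        normed = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
        sims = normed @ normed[current_doc]
        sims[current_doc] = -np.inf        # never recommend the article itself
        return np.argsort(sims)[::-1][:top_n]

    doc_vectors = np.random.randn(100, 50)   # stand-in vectors for 100 articles
    print(recommend(doc_vectors, current_doc=7))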

The complete show notes for this episode can be found at https://twimlai.com/talk/183.

For more info on the Strata Data Conference Series, visit https://twimlai.com/stratany2018.

]]>
42:07 clean podcast,science,james,technology,recommendation,tech,word,data,intelligence,one,learning,content,artificial,capital,machine,ai,reuters,strata,document,nlp,unsupervised,vectors,ml,cloudera,twiml,embeddings,dreiss In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters. James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content recommendation system,” in which he details how Reuters implemented document vectors to recommend content to users of their new “infinite scroll” page layout. 183 full Sam Charrington
Applied Machine Learning for Publishers with Naveed Ahmad - TWiML Talk #182 Applied Machine Learning for Publishers with Naveed Ahmad Thu, 20 Sep 2018 20:56:07 +0000 In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers.

A few months ago, Naveed gave a talk at the Google Cloud Next Conference on “How Publishers Can Take Advantage of Machine Learning.” In our conversation, we dig into the role of ML at Hearst, including their motivations for implementing it and some of their early projects, the challenges of data acquisition within a large organization, and the benefits they enjoy from using Google’s BigQuery as their data warehouse.

For the complete show notes for this episode, visit https://twimlai.com/talk/182.

]]>
In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers.

A few months ago, Naveed gave a talk at the Google Cloud Next Conference on “How Publishers Can Take Advantage of Machine Learning.” In our conversation, we dig into the role of ML at Hearst, including their motivations for implementing it and some of their early projects, the challenges of data acquisition within a large organization, and the benefits they enjoy from using Google’s BigQuery as their data warehouse.

For the complete show notes for this episode, visit https://twimlai.com/talk/182.

]]>
39:34 clean podcast,science,technology,tech,cloud,google,data,intelligence,learning,next,publishing,artificial,machine,newspapers,ai,warehouse,prediction,nlp,ahmad,ml,bigquery,hearst,churn,naveed,twiml,automl In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers. In our conversation, we dig into the role of ML at Hearst, including their motivations for implementing it and some of their early projects, the challenges of data acquisition within a large organization, and the benefits they enjoy from using Google’s BigQuery as their data warehouse. For the complete show notes for this episode, visit https://twimlai.com/talk/182. 182 full Sam Charrington
Anticipating Superintelligence with Nick Bostrom - TWiML Talk #181 Anticipating Superintelligence with Nick Bostrom Mon, 17 Sep 2018 19:49:25 +0000 In this episode, we’re joined by Nick Bostrom, professor in the faculty of philosophy at the University of Oxford, where he also heads the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regards to AI safety and ethics.

Nick is of course also author of the book “Superintelligence: Paths, Dangers, Strategies.” In our conversation, we discuss the risks associated with Artificial General Intelligence and the more advanced AI systems Nick refers to as superintelligence. We also discuss Nick’s writings on the topic of openness in AI development, and the advantages and costs of open and closed development on the part of nations and AI research organizations. Finally, we take a look at what good safety precautions might look like, and how we can create an effective ethics framework for superintelligent systems.

The notes for this episode can be found at https://twimlai.com/talk/181.

]]>
In this episode, we’re joined by Nick Bostrom, professor in the faculty of philosophy at the University of Oxford, where he also heads the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regards to AI safety and ethics.

Nick is of course also author of the book “Superintelligence: Paths, Dangers, Strategies.” In our conversation, we discuss the risks associated with Artificial General Intelligence and the more advanced AI systems Nick refers to as superintelligence. We also discuss Nick’s writings on the topic of openness in AI development, and the advantages and costs of open and closed development on the part of nations and AI research organizations. Finally, we take a look at what good safety precautions might look like, and how we can create an effective ethics framework for superintelligent systems.

The notes for this episode can be found at https://twimlai.com/talk/181.

]]>
45:29 clean podcast,science,nick,technology,tech,data,philosophy,intelligence,learning,general,safety,ethics,artificial,machine,ai,ml,bostrom,superintelligence,twiml In this episode, we’re joined by Nick Bostrom, professor at the University of Oxford and head of the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regards to AI safety and ethics. In our conversation, we discuss the risks associated with Artificial General Intelligence, advanced AI systems Nick refers to as superintelligence, openness in AI development and more! The notes for this episode can be found at https://twimlai.com/talk/18 181 full Sam Charrington
Can We Train an AI to Understand Body Language? with Hanbyul Joo - TWIML Talk #180 Can We Train an AI to Understand Body Language? with Hanbyul Joo Thu, 13 Sep 2018 19:46:18 +0000 In this episode, we’re joined by Hanbyul Joo, a PhD student in the Robotics Institute at Carnegie Mellon University.

Han, who is on track to complete his thesis at the end of the year, is working on what is called the “Panoptic Studio,” a multi-dimension motion capture studio with over 500 camera sensors that are used to capture human body behavior and body language. While robotic and other artificially intelligent systems can interact with humans, Han’s work focuses on understanding how humans interact and behave so that we can teach AI-based systems to react to humans more naturally. In our conversation, we discuss his CVPR best student paper award winner “Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies.” Han also shares a complete overview of the Panoptic studio, and we dig into the creation and performance of the models, and much more.

For the complete show notes for this episode, visit https://twimlai.com/talk/180.

]]>
In this episode, we’re joined by Hanbyul Joo, a PhD student in the Robotics Institute at Carnegie Mellon University.

Han, who is on track to complete his thesis at the end of the year, is working on what is called the “Panoptic Studio,” a multi-dimension motion capture studio with over 500 camera sensors that are used to capture human body behavior and body language. While robotic and other artificially intelligent systems can interact with humans, Han’s work focuses on understanding how humans interact and behave so that we can teach AI-based systems to react to humans more naturally. In our conversation, we discuss his CVPR best student paper award winner “Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies.” Han also shares a complete overview of the Panoptic studio, and we dig into the creation and performance of the models, and much more.
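
A core geometric building block of any calibrated multi-camera system like the Panoptic Studio is triangulation: recovering a 3D point from its 2D detections in several views. Here is a minimal direct-linear-transform sketch with synthetic cameras; it illustrates the geometry only, not the studio’s actual pipeline or the deformation model from the paper.

    import numpy as np

    def triangulate(projections, points_2d):
        # projections: list of 3x4 camera matrices; points_2d: matching (u, v) per camera.
        rows = []
        for P, (u, v) in zip(projections, points_2d):
            rows.append(u * P[2] - P[0])   # each view adds two linear constraints
            rows.append(v * P[2] - P[1])
        _, _, vt = np.linalg.svd(np.stack(rows))
        X = vt[-1]
        return X[:3] / X[3]                # dehomogenize

    point = np.array([0.5, -0.2, 4.0, 1.0])                        # ground-truth 3D point
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at the origin
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated camera
    project = lambda P, X: (P @ X)[:2] / (P @ X)[2]
    print(triangulate([P1, P2], [project(P1, point), project(P2, point)]))  # ~[0.5, -0.2, 4.0]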

For the complete show notes for this episode, visit https://twimlai.com/talk/180.

]]>
51:53 clean podcast,science,camera,technology,tech,data,intelligence,vision,learning,university,computer,human,3d,studio,artificial,institute,robotics,2d,machine,ai,interaction,mellon,cmu,carnegie,joo,ml,sensors,panoptic,cvpr,twiml,hanbyul In this episode, we’re joined by Hanbyul Joo, a PhD student at CMU. Han is working on what is called the “Panoptic Studio,” a multi-dimension motion capture studio used to capture human body behavior and body language. His work focuses on understanding how humans interact and behave so that we can teach AI-based systems to react to humans more naturally. We also discuss his CVPR best student paper award winner “Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies.” 180 full Sam Charrington
Biological Particle Identification and Tracking with Jay Newby - TWiML Talk #179 Biological Particle Identification and Tracking with Jay Newby Mon, 10 Sep 2018 18:08:00 +0000 In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta.

Jay joins us to discuss his work applying deep learning to biology, including his paper “Deep neural networks automate detection for tracking of submicron scale particles in 2D and 3D.” In our conversation, Jay gives us an overview of particle tracking and a look at how he combines neural networks with physics-based particle filter models. We also touch on some of the unique challenges to working at the micron level in biology, how he evaluated the success of his experiments, and the next steps for his research.

The complete show notes for this episode can be found at https://twimlai.com/talk/179.

]]>
In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta.

Jay joins us to discuss his work applying deep learning to biology, including his paper “Deep neural networks automate detection for tracking of submicron scale particles in 2D and 3D.” In our conversation, Jay gives us an overview of particle tracking and a look at how he combines neural networks with physics-based particle filter models. We also touch on some of the unique challenges to working at the micron level in biology, how he evaluated the success of his experiments, and the next steps for his research.
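
Since the work pairs neural-network detections with physics-based particle filters, here is a bare-bones bootstrap particle filter step for a one-dimensional position: predict with a simple motion model, weight particles by how well they explain an observation (which in practice might come from a detector network), then resample. The motion model and noise levels are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, observation, motion_std=0.1, obs_std=0.5):
        # Predict: propagate particles with a simple random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Update: weight particles by the likelihood of the observed position.
        weights = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample: draw a new particle set in proportion to the weights.
        return particles[rng.choice(len(particles), size=len(particles), p=weights)]

    particles = rng.normal(0.0, 1.0, size=1000)    # initial belief about the position
    for obs in [0.2, 0.35, 0.5]:                   # noisy detections over three frames
        particles = particle_filter_step(particles, obs)
    print(particles.mean())                        # filtered position estimate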

The complete show notes for this episode can be found at https://twimlai.com/talk/179.

]]>
45:57 clean podcast,of,science,jay,network,technology,tech,data,deep,intelligence,biology,physics,learning,university,alberta,artificial,neural,particle,machine,ai,tracking,ml,newby,twiml,submicron In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. Jay joins us to discuss his work applying deep learning to biology, including his paper “Deep neural networks automate detection for tracking of submicron scale particles in 2D and 3D.” He gives us an overview of particle tracking and a look at how he combines neural networks with physics-based particle filter models. 179 full Sam Charrington
AI for Content Creation with Debajyoti Ray - TWiML Talk #178 AI for Content Creation with Debajyoti Ray Thu, 06 Sep 2018 19:09:46 +0000 In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers.

Rivet’s tools are inspired in part by the founders’ collaboration with the team that created Sunspring, a short, AI-written film starring Silicon Valley’s Thomas Middleditch, which you may have seen when it was making the rounds a while back. Deb and I discuss some of what he’s learned in the journey to apply AI to content creation, including how Rivet approaches the use of machine learning to automate creative processes, the company’s use of hierarchical LSTM models and autoencoders, and the tech stack that they’ve put in place to support the business.

For the complete show notes for this episode, visit twimlai.com/talk/178.

]]>
In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers.

Rivet’s tools are inspired in part by the founders’ collaboration with the team that created Sunspring, a short, AI-written film starring Silicon Valley’s Thomas Middleditch, which you may have seen when it was making the rounds a while back. Deb and I discuss some of what he’s learned in the journey to apply AI to content creation, including how Rivet approaches the use of machine learning to automate creative processes, the company’s use of hierarchical LSTM models and autoencoders, and the tech stack that they’ve put in place to support the business.

For the complete show notes for this episode, visit twimlai.com/talk/178.

]]>
55:58 clean podcast,of,science,creative,technology,tech,data,toronto,intelligence,ray,learning,university,filmmaking,artificial,machine,ai,geoffrey,creator,nlp,nlg,ml,scriptwriting,hinton,lstm,twiml,nlu,rivetai,debajyoti In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers. Deb and I discuss some of what he’s learned in the journey to apply AI to content creation, including how Rivet approaches the use of machine learning to automate creative processes, the company’s use hierarchical LSTM models and autoencoders, and the tech stack that they’ve put in place to support the business. 178 full Sam Charrington
Deep Reinforcement Learning Primer and Research Frontiers with Kamyar Azizzadenesheli - TWiML Talk #177 Deep Reinforcement Learning Primer and Research Frontiers with Kamyar Azizzadenesheli Thu, 30 Aug 2018 20:07:16 +0000 Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine, and visiting researcher at Caltech where he works with Anima Anandkumar, who you might remember from TWiML Talk 142.

We begin with a reinforcement learning primer of sorts, in which we review the core elements of RL, along with quite a few examples to help get you up to speed. We then discuss a pair of Kamyar’s RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” In addition to discussing Kamyar’s work, we also chat a bit about the general landscape of RL research today. So whether you’re new to the field or want to dive into cutting-edge reinforcement learning research with us, this podcast is here for you!


If you'd like to skip the Deep Reinforcement Learning primer portion of this and jump to the research discussion, skip ahead to the 34:30 mark of the episode.

]]>
Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine, and visiting researcher at Caltech where he works with Anima Anandkumar, who you might remember from TWiML Talk 142.

We begin with a reinforcement learning primer of sorts, in which we review the core elements of RL, along with quite a few examples to help get you up to speed. We then discuss a pair of Kamyar’s RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” In addition to discussing Kamyar’s work, we also chat a bit about the general landscape of RL research today. So whether you’re new to the field or want to dive into cutting-edge reinforcement learning research with us, this podcast is here for you!
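
For listeners following the primer portion, the single most central update in value-based RL is the Q-learning rule; here it is on a tiny made-up chain environment, just to ground the vocabulary of states, actions, rewards, and value estimates. Everything here (environment, hyperparameters) is invented for illustration.

    import numpy as np

    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))     # estimated value of each (state, action) pair
    alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
    rng = np.random.default_rng(0)

    def step(state, action):
        # Made-up chain environment: action 1 moves right, reward only at the last state.
        next_state = min(state + action, n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        return next_state, reward

    for _ in range(2000):
        state = 0
        for _ in range(10):
            # Epsilon-greedy action selection.
            action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[state].argmax())
            next_state, reward = step(state, action)
            # Q-learning update: move Q(s, a) toward the bootstrapped target.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(Q)   # action 1 (move right) should look better from every state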

If you'd like to skip the Deep Reinforcement Learning primer portion of this and jump to the research discussion, skip ahead to the 34:30 mark of the episode.

]]>
01:35:25 clean science,technology,networks,data,deep,intelligence,learning,artificial,christy,inference,ai,bayesian,anima,q,reinforcement,gans,dennison,gan,generative,kamyar,adversarial,twiml,azizzadenesheli,anandkumar Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine, who joins us to review the core elements of RL, along with a pair of his RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” To skip the Deep Reinforcement Learning primer conversation and jump to the research discussion, skip to the 34:30 mark of the episode. Show notes at https://twimlai.com/talk/177 177 full Sam Charrington
OpenAI Five with Christy Dennison - TWiML Talk #176 OpenAI Five with Christy Dennison Mon, 27 Aug 2018 19:20:01 +0000 Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI.

Since joining OpenAI earlier this year, Christy has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. Our conversation begins with an overview of DOTA 2 gameplay and the recent OpenAI Five benchmark which put the OpenAI agent up against a team of professional human players. We then dig into the underlying technology used to create OpenAI Five, including their use of deep reinforcement learning and LSTM recurrent neural networks, and their liberal use of entity embeddings, plus some of the tricks and techniques they use to train the model on 256 GPUs and 128,000 CPU cores.

The complete show notes for this episode can be found at twimlai.com/talk/176.

]]>
Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI.

Since joining OpenAI earlier this year, Christy has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. Our conversation begins with an overview of DOTA 2 gameplay and the recent OpenAI Five benchmark, which put the OpenAI agent up against a team of professional human players. We then dig into the underlying technology used to create OpenAI Five, including their use of deep reinforcement learning and LSTM recurrent neural networks, and their liberal use of entity embeddings, plus some of the tricks and techniques they use to train the model on 256 GPUs and 128,000 CPU cores.
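
As a rough, hypothetical illustration of what feeding entity embeddings into an LSTM policy core can look like, here is a minimal PyTorch sketch. This is not OpenAI’s code; the entity vocabulary size, embedding width, pooling, and hidden size are all assumptions made for the example.

# Minimal sketch: entity embeddings pooled per step and fed to an LSTM core.
# Sizes and names are illustrative only.
import torch
import torch.nn as nn

class TinyAgent(nn.Module):
    def __init__(self, n_entity_types=100, embed_dim=32, hidden_dim=64, n_actions=8):
        super().__init__()
        self.embed = nn.Embedding(n_entity_types, embed_dim)   # one vector per entity type
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.policy = nn.Linear(hidden_dim, n_actions)

    def forward(self, entity_ids, state=None):
        # entity_ids: (batch, n_entities) integer ids of observed game entities
        x = self.embed(entity_ids)                 # (batch, n_entities, embed_dim)
        pooled = x.mean(dim=1, keepdim=True)       # crude pooling over entities per step
        out, state = self.lstm(pooled, state)      # recurrent core over time steps
        return self.policy(out[:, -1]), state

agent = TinyAgent()
logits, state = agent(torch.randint(0, 100, (2, 5)))  # batch of 2 observations, 5 entities each
print(logits.shape)  # torch.Size([2, 8])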

The complete show notes for this episode can be found at twimlai.com/talk/176.

]]>
48:21 clean podcast,science,2,technology,data,deep,intelligence,modeling,learning,five,artificial,christy,machine,ai,reinforcement,entity,dota,ml,dennison,lstm,openai,twiml,embeddings Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI, who has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. In our conversation, we get an overview of DOTA 2 gameplay and the recent OpenAI Five benchmark, and dig into the underlying technology used to create OpenAI Five, including their use of deep reinforcement learning, LSTM recurrent neural networks, and entity embeddings, plus some tricks and techniques they use to train the models. 176 full Sam Charrington
How ML Keeps Shelves Stocked at Home Depot with Pat Woowong - TWiML Talk #175 How ML Keeps Shelves Stocked at Home Depot with Pat Woowong Thu, 23 Aug 2018 18:37:20 +0000 Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot.

We discuss a project that Pat recently presented at the Google Cloud Next conference, which used machine learning to predict shelf-out scenarios within stores. We dig into the motivation for this system and how the team went about building it, including which types of models ended up working best, how they collected their data, their use of Kubernetes to support future growth of the platform, and much more.

For the complete show notes, visit twimlai.com/talk/175.

]]>
Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot.

We discuss a project that Pat recently presented at the Google Cloud Next conference, which used machine learning to predict shelf-out scenarios within stores. We dig into the motivation for this system and how the team went about building it, including which types of models ended up working best, how they collected their data, their use of Kubernetes to support future growth of the platform, and much more.

For the complete show notes, visit twimlai.com/talk/175.

]]>
45:00 clean podcast,the,technology,tech,cloud,in,google,this,data,week,intelligence,modeling,learning,home,next,artificial,pat,machine,ai,depot,prediction,ml,woowong,kubernetes Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot. We discuss a project that Pat recently presented at the Google Cloud Next conference, which used machine learning to predict shelf-out scenarios within stores. We dig into the motivation for this system and how the team went about building it, their use of Kubernetes to support future growth of the platform, and much more. For the complete show notes, visit https://twimlai.com/talk/175. 175 full Sam Charrington
Contextual Modeling for Language and Vision with Nasrin Mostafazadeh - TWiML Talk #174 Contextual Modeling for Language and Vision with Nasrin Mostafazadeh Mon, 20 Aug 2018 19:59:02 +0000 Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition.

Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision, which she sees as a means of giving AI systems a bit of “common sense.” We discuss Nasrin’s work on the Story Cloze Test, which is a reasoning framework for evaluating story understanding and generation. We explore the details of this task--including what constitutes a “story”--and some of the challenges it presents and approaches for solving it. We also discuss how you model what a computer understands, building semantic representation algorithms, different ways to approach “explainability,” and multimodal extensions to her contextual modeling work.

The notes for this episode can be found at https://twimlai.com/talk/174.

]]>
Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition.

Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision, which she sees as a means of giving AI systems a bit of “common sense.” We discuss Nasrin’s work on the Story Cloze Test, which is a reasoning framework for evaluating story understanding and generation. We explore the details of this task--including what constitutes a “story”--and some of the challenges it presents and approaches for solving it. We also discuss how you model what a computer understands, building semantic representation algorithms, different ways to approach “explainability,” and multimodal extensions to her contextual modeling work.
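
To make the Story Cloze Test format concrete, here is a tiny Python sketch of the evaluation setup: a four-sentence story context, two candidate endings, and a system that must pick the more plausible one. The example story and the crude word-overlap scorer are made up for illustration and stand in for a real model.

# Minimal sketch of Story Cloze style evaluation; data and scorer are invented.
def evaluate(examples, score_fn):
    correct = 0
    for context, ending_a, ending_b, label in examples:
        pred = 0 if score_fn(context, ending_a) >= score_fn(context, ending_b) else 1
        correct += int(pred == label)
    return correct / len(examples)

def overlap_score(context, ending):
    # Placeholder scorer: crude word overlap between context and candidate ending.
    ctx_words = set(" ".join(context).lower().split())
    return len(ctx_words & set(ending.lower().split()))

examples = [
    (
        ["Karen packed for her trip.", "She drove to the airport.",
         "Her flight was delayed for hours.", "She finally boarded at midnight."],
        "Karen landed exhausted the next morning.",   # ending A
        "Karen won the spelling bee.",                # ending B
        0,                                            # label: A is the right ending
    ),
]
print("accuracy:", evaluate(examples, overlap_score))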

The notes for this episode can be found at https://twimlai.com/talk/174.

]]>
49:12 clean podcast,science,technology,tech,in,test,story,this,data,week,intelligence,modeling,learning,artificial,cognition,machine,ai,semantic,nlp,representation,elemental,contextual,ml,multimodal,nasrin,twiml,eventcentric,cloze,mostafazadeh,nlu Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition. Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision including her work on the Story Cloze Test, a reasoning framework for evaluating story understanding and generation. We explore the details of this task, some of the challenges it presents and approaches for solving it. 174 full Sam Charrington
ML for Understanding Satellite Imagery at Scale with Kyle Story - TWiML Talk #173 ML for Understanding Satellite Imagery at Scale with Kyle Story Thu, 16 Aug 2018 17:18:44 +0000 Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs.

Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding Satellite Imagery at Scale.” We discuss some of the interesting computer vision problems he’s worked on at Descartes, including custom object detectors and the company’s geovisual search engine, covering everything from the models they’ve developed and the platform they’ve built to the key challenges they’ve had to overcome in scaling them.

For the complete show notes, visit twimlai.com/talk/173.

]]>
Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs.

Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding Satellite Imagery at Scale.” We discuss some of the interesting computer vision problems he’s worked on at Descartes, including custom object detectors and the company’s geovisual search engine, covering everything from the models they’ve developed and the platform they’ve built to the key challenges they’ve had to overcome in scaling them.

For the complete show notes, visit twimlai.com/talk/173.

]]>
56:05 clean podcast,science,technology,image,images,tech,cloud,in,google,story,this,data,week,intelligence,learning,search,next,satellite,analysis,kyle,artificial,geospatial,machine,ai,labs,descartes,ml,twiml,geovisual Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs. Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding Satellite Imagery at Scale.” We discuss some of the interesting computer vision problems he’s worked on at Descartes, and the key challenges they’ve had to overcome in scaling them. 173 full Sam Charrington
Generating Ground-Level Images From Overhead Imagery Using GANs with Yi Zhu - TWiML Talk #172 Generating Ground-Level Images From Overhead Imagery Using GANs with Yi Zhu Mon, 13 Aug 2018 20:47:23 +0000 Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis.

In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” Yi and I discuss the goal of this research, which is to train effective land-use classifiers on proximate, or ground-level, images, and how he uses conditional GANs along with images sourced from social media to generate artificial ground-level images for this task. We also explore future research directions, such as using reversible generative networks, as proposed in the recently released OpenAI Glow paper, to produce higher-resolution images.

The notes for this episode can be found at https://twimlai.com/talk/172.

]]>
Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis.

In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” Yi and I discuss the goal of this research, which is to train effective land-use classifiers on proximate, or ground-level, images, and how he uses conditional GANs along with images sourced from social media to generate artificial ground-level images for this task. We also explore future research directions, such as using reversible generative networks, as proposed in the recently released OpenAI Glow paper, to produce higher-resolution images.
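
For readers curious what a conditional GAN of this general shape looks like in code, here is a minimal PyTorch sketch in which an overhead-image feature vector conditions both the generator and the discriminator. The architecture, dimensions, and single generator step below are illustrative simplifications and not the model from the paper.

# Minimal conditional-GAN sketch; all sizes and layers are assumed for illustration.
import torch
import torch.nn as nn

cond_dim, noise_dim, img_pixels = 128, 64, 32 * 32 * 3  # assumed sizes

G = nn.Sequential(                       # generator: (noise, condition) -> flattened image
    nn.Linear(noise_dim + cond_dim, 256), nn.ReLU(),
    nn.Linear(256, img_pixels), nn.Tanh(),
)
D = nn.Sequential(                       # discriminator: (image, condition) -> real/fake logit
    nn.Linear(img_pixels + cond_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

def generator_step(overhead_features, opt_g, bce=nn.BCEWithLogitsLoss()):
    z = torch.randn(overhead_features.size(0), noise_dim)
    fake = G(torch.cat([z, overhead_features], dim=1))
    score = D(torch.cat([fake, overhead_features], dim=1))
    loss = bce(score, torch.ones_like(score))    # generator wants the "real" label
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
print(generator_step(torch.randn(4, cond_dim), opt_g))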

The notes for this episode can be found at https://twimlai.com/talk/172.

]]>
38:38 clean podcast,science,technology,image,networks,tech,in,this,data,week,intelligence,learning,analysis,artificial,geospatial,machine,glow,ai,gans,uc,yi,zhu,ml,gan,merced,generative,adversarial,openai,twiml Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis. In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” We discuss the goal of this research and how he uses conditional GANs to generate artificial ground-level images. 172 full Sam Charrington
Vision Systems for Planetary Landers and Drones with Larry Matthies - TWiML Talk #171 Vision Systems for Planetary Landers and Drones with Larry Matthies Thu, 09 Aug 2018 15:39:52 +0000 Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL.

Larry joins us on the heels of two presentations at this year’s CVPR conference, the first on Onboard Stereo Vision for Drone Pursuit or Sense and Avoid, and another on Vision Systems for Planetary Landers. In our conversation, we touch on both of these talks, his work on vision systems for the first iteration of Mars rovers in 2004, and the future of planetary landing projects.

For the complete show notes, visit https://twimlai.com/talk/171.

]]>
Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL.

Larry joins us on the heels of two presentations at this year’s CVPR conference, the first on Onboard Stereo Vision for Drone Pursuit or Sense and Avoid, and another on Vision Systems for Planetary Landers. In our conversation, we touch on both of these talks, his work on vision systems for the first iteration of Mars rovers in 2004, and the future of planetary landing projects.

For the complete show notes, visit https://twimlai.com/talk/171.

]]>
43:12 clean podcast,technology,tech,in,this,week,jet,mars,intelligence,nasa,vision,learning,lab,larry,artificial,robotics,propulsion,planetary,machine,ai,stereo,drones,jpl,ml,landers,matthies,cvpr,twiml Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL. In our conversation, we discuss two talks he gave at CVPR a few weeks back, his work on vision systems for the first iteration of Mars rovers in 2004 and the future of planetary landing projects. For the complete show notes, visit https://twimlai.com/talk/171. 171 full Sam Charrington
Learning Semantically Meaningful and Actionable Representations with Ashutosh Saxena - TWiML Talk #170 Learning Semantically Meaningful and Actionable Representations with Ashutosh Saxena Mon, 06 Aug 2018 20:26:09 +0000 In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai.

Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful and actionable representations of the objects, actions and observations that a robot experiences in its environment, and allows these to be shared and queried by other robots to learn new actions. We also discuss his startup Caspar, which applies these principles to the challenge of creating smart homes.

For complete show notes, visit https://twimlai.com/talk/170.

]]>
In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai.

Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful and actionable representations of the objects, actions and observations that a robot experiences in its environment, and allows these to be shared and queried by other robots to learn new actions. We also discuss his startup Caspar, which applies these principles to the challenge of creating smart homes.

For complete show notes, visit https://twimlai.com/talk/170.

]]>
45:35 clean podcast,technology,tech,in,this,week,intelligence,learning,forbes,robot,stanford,artificial,machine,ai,semantic,representation,ml,caspar,ashutosh,saxena,cvpr,robobrain,casparai In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai. Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful and actionable representations of the objects, actions and observations that a robot experiences in its environment, and allows these to be shared and queried by other robots to learn new actions. For the complete show notes, visit https://twimlai.com/talk/170. 170 full Sam Charrington
AI Innovation for Clinical Decision Support with Joe Connor - TWiML Talk #169 AI Innovation for Clinical Decision Support with Joe Connor Thu, 02 Aug 2018 17:44:41 +0000 In this episode I speak with Joe Connor, Founder of Experto Crede.

Joe’s been listening to the podcast for a while, and he and I connected after he reached out to discuss an article I wrote regarding AI in the healthcare space. In this conversation, we explore his experiences bringing AI-powered healthcare projects to market in collaboration with the UK National Health Service and its clinicians. We take a look at some of the various challenges he’s run into when applying ML and AI in healthcare, as well as some of his successes, such as tackling effective triage of mental health patients using emotion recognition within a chatbot environment. We also discuss data protections, especially GDPR, and the challenges that come along with building systems that are dependent on using patient data under these restrictions. Finally, we take a look at potential ways to include clinicians in the building of these applications.

The complete show notes can be found at https://twimlai.com/talk/169.

]]>
In this episode I speak with Joe Connor, Founder of Experto Crede.

Joe’s been listening to the podcast for a while, and he and I connected after he reached out to discuss an article I wrote regarding AI in the healthcare space. In this conversation, we explore his experiences bringing AI-powered healthcare projects to market in collaboration with the UK National Health Service and its clinicians. We take a look at some of the various challenges he’s run into when applying ML and AI in healthcare, as well as some of his successes, such as tackling effective triage of mental health patients using emotion recognition within a chatbot environment. We also discuss data protections, especially GDPR, and the challenges that come along with building systems that are dependent on using patient data under these restrictions. Finally, we take a look at potential ways to include clinicians in the building of these applications.

The complete show notes can be found at https://twimlai.com/talk/169.

]]>
42:41 clean podcast,uk,joe,technology,tech,in,this,week,intelligence,learning,healthcare,connor,artificial,machine,ai,nhs,chatbot,ml,clinicians,experto,crede In this episode I speak with Joe Connor, Founder of Experto Crede. In our conversation, we explore his experiences bringing AI-powered healthcare projects to market in collaboration with the UK National Health Service and its clinicians, some of the various challenges he’s run into when applying ML and AI in healthcare, as well as some of his successes. We also discuss data protections, especially GDPR, and potential ways to include clinicians in the building of these applications. 169 full Sam Charrington
Dynamic Visual Localization and Segmentation with Laura Leal-Taixé - TWiML Talk #168 Dynamic Visual Localization and Segmentation with Laura Leal-Taixé Mon, 30 Jul 2018 19:52:18 +0000 In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich, where she leads the Dynamic Vision and Learning Group, and 2017 recipient of the prestigious Sofja Kovalevskaja Award.

In our conversation, we discuss several of her recent projects including work on image-based localization techniques that fuse traditional model-based computer vision approaches with a data-driven approach based on deep learning. We also discuss her paper on one-shot video object segmentation and the broader vision for her research, which aims to create tools for allowing individuals to better navigate cities using systems constructed from visual data.

The show notes for this episode can be found at twimlai.com/talk/168.

]]>
In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich, where she leads the Dynamic Vision and Learning Group, and 2017 recipient of the prestigious Sofja Kovalevskaja Award.

In our conversation, we discuss several of her recent projects including work on image-based localization techniques that fuse traditional model-based computer vision approaches with a data-driven approach based on deep learning. We also discuss her paper on one-shot video object segmentation and the broader vision for her research, which aims to create tools for allowing individuals to better navigate cities using systems constructed from visual data.
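
As a rough sketch of the one-shot idea in video object segmentation, the snippet below fine-tunes a small segmentation network on the single annotated first frame and then applies it to a later frame. The tiny convolutional net stands in for a pretrained backbone, and all shapes, iteration counts, and hyperparameters are assumptions for illustration rather than the paper's exact recipe.

# Minimal one-shot segmentation sketch; the tiny net and numbers are illustrative.
import torch
import torch.nn as nn

net = nn.Sequential(                      # stand-in for a pretrained segmentation network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                  # per-pixel foreground logit
)
opt = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
bce = nn.BCEWithLogitsLoss()

first_frame = torch.randn(1, 3, 240, 320)              # the one annotated frame
first_mask = torch.randint(0, 2, (1, 1, 240, 320)).float()

for _ in range(20):                       # "one-shot" fine-tuning on that single frame
    loss = bce(net(first_frame), first_mask)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                     # segment a later frame of the video
    pred = torch.sigmoid(net(torch.randn(1, 3, 240, 320))) > 0.5
print(pred.shape)  # torch.Size([1, 1, 240, 320])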

The show notes for this page can be found at twimlai.com/talk/168.

]]>
45:33 clean podcast,technology,tech,in,this,week,intelligence,one,learning,laura,artificial,machine,shot,ai,munich,navigation,ml,segmentation,leal,taixe,sofja,kovalevskaja In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich where she leads the Dynamic Vision and Learning Group. In our conversation, we discuss several of her recent projects including work on image-based localization techniques that fuse traditional model-based computer vision approaches with a data-driven approach based on deep learning, her paper on one-shot video object segmentation and the broader vision for her research. 168 full Sam Charrington
Conversational AI for the Intelligent Workplace with Gillian McCann - TWiML Talk #167 Conversational AI for the Intelligent Workplace with Gillian McCann Thu, 26 Jul 2018 13:49:38 +0000 In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software.

Workgrid offers an intelligent workplace assistant that integrates with a variety of business tools and systems. In our conversation, which focuses on Workgrid’s use of cloud-based AI services, Gillian details some of the underlying systems that make Workgrid tick, including a breakdown of its conversational interface. We also take a look at their engineering pipeline and how they build high-quality systems that incorporate external APIs. Finally, Gillian shares her view on some of the factors that contribute to misunderstandings and impatience on the part of users of AI-based products.

The show notes for this episode can be found at twimlai.com/talk/167.

]]>
In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software.

Workgrid offers an intelligent workplace assistant that integrates with a variety of business tools and systems. In our conversation, which focuses on Workgrid’s use of cloud-based AI services, Gillian details some of the underlying systems that make Workgrid tick, including a breakdown of its conversational interface. We also take a look at their engineering pipeline and how they build high-quality systems that incorporate external APIs. Finally, Gillian shares her view on some of the factors that contribute to misunderstandings and impatience on the part of users of AI-based products.

The show notes for this episode can be found at twimlai.com/talk/167.

]]>
38:05 clean podcast,technology,tech,cloud,in,this,week,intelligence,learning,engineering,artificial,machine,gillian,ai,mccann,ml,workgrid In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software. In our conversation, which focuses on Workgrid’s use of cloud-based AI services, Gillian details some of the underlying systems that make Workgrid tick, their engineering pipeline and how they build high-quality systems that incorporate external APIs, and her view on factors that contribute to misunderstandings and impatience on the part of users of AI-based products. 167 full Sam Charrington
Computer Vision and Intelligent Agents for Wildlife Conservation with Jason Holmberg - TWiML Talk #166 Computer Vision and Intelligent Agents for Wildlife Conservation with Jason Holmberg Sun, 22 Jul 2018 03:58:40 +0000 In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe.

WildMe’s Wildbook and Whaleshark.org are open-source, computer vision-based conservation projects that have been compared to a Facebook for wildlife. Jason kicks us off with the interesting story of how Wildbook came to be, and the eventual expansion of the project from a focus on whale sharks to include giant manta rays, humpback whales, zebras, and giraffes. Jason and I explore the evolution of these projects’ use of computer vision and deep learning, the unique characteristics of the models they’re building, and how they’re ultimately enabling a new kind of citizen science. Finally, we take a look at a cool new “intelligent agent” project that Jason is working on, which mines YouTube for wildlife sightings and automatically engages with the relevant individuals and scientists on Wildbook’s behalf.

For the complete show notes, visit twimlai.com/talk/166.

]]>
In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe.

WildMe’s Wildbook and Whaleshark.org are open-source, computer vision-based conservation projects that have been compared to a Facebook for wildlife. Jason kicks us off with the interesting story of how Wildbook came to be, and the eventual expansion of the project from a focus on whale sharks to include giant manta rays, humpback whales, zebras, and giraffes. Jason and I explore the evolution of these projects’ use of computer vision and deep learning, the unique characteristics of the models they’re building, and how they’re ultimately enabling a new kind of citizen science. Finally, we take a look at a cool new “intelligent agent” project that Jason is working on, which mines YouTube for wildlife sightings and automatically engages with the relevant individuals and scientists on Wildbook’s behalf.

For the complete show notes, visit twimlai.com/talk/166.

]]>
49:42 clean podcast,jason,technology,tech,in,this,week,intelligent,deep,intelligence,nasa,vision,learning,computer,wildlife,youtube,shark,artificial,conservation,machine,ai,agents,holmberg,whale,ml,wildbook,wildme In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe. Jason and I discuss WildMe’s pair of open-source, computer vision-based conservation projects, Wildbook and Whaleshark.org. Jason kicks us off with the interesting story of how Wildbook came to be, the eventual expansion of the project, and the evolution of these projects’ use of computer vision and deep learning. For the complete show notes, visit twimlai.com/talk/166. 166 full Sam Charrington
Pragmatic Deep Learning for Medical Imagery with Prashant Warier - TWiML Talk #165 Pragmatic Deep Learning for Medical Imagery with Prashant Warier Thu, 19 Jul 2018 17:52:52 +0000