Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill.

In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explore her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of neglected assumptions and how it connects to ODT rule estimation, and break down the causal roadmap developed by researchers at UC Berkeley.

Finally, Lina walks us through applying the roadmap to the ODT rule problem, how she used a "superlearner" algorithm for this problem, how it was trained, and what the future of this research looks like.

Thanks to our Sponsor!

Thanks to our friends at SigOpt, an Intel Company, for their continued support of the podcast, and their sponsorship of this series!

Experimentation is critical for AI model development, but it is messy and tough to get right. This is why most modelers use tools that help them track what they've done. But none of these tools help them discover what to do next. This is where SigOpt can help. SigOpt combines experiment management with seamless and powerful optimization. With SigOpt, modelers design novel experiments, explore modeling problems, and optimize models to meet multiple objective metrics in their iterative workflow. Modelers from Two Sigma, OpenAI, Numenta, MILA, and many more apply SigOpt to make model development 8x faster and boost team productivity by 30%. And now, SigOpt is available for free forever. Sign up for an account today at sigopt.com/signup or check out our docs to see how easy it is to get running in minutes at sigopt.com/docs.

