Inverse Programming for Deeper AI with Zenna Tavares

    For today’s show, the final episode of our Black in AI Series, I’m joined by Zenna Tavares, a PhD student in both the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Lab at MIT.

    I spent some time with Zenna after his talk at the Strange Loop conference, titled “Running Programs in Reverse for Deeper AI.” Zenna shares some great insight into his work on program inversion, an idea that lies at the intersection of Bayesian modeling, deep learning, and computational logic. We set the stage with a discussion of inverse graphics and the parallels between inverting a graphics renderer and the problem of vision. We then discuss the application of these techniques to intelligent systems, including the idea of parametric inversion. Last but not least, Zenna details how these techniques might be implemented, and discusses his work on ReverseFlow, a library for executing TensorFlow programs backwards, and Sigma.jl, a probabilistic programming environment implemented in Julia. This talk packs a punch, and I’m glad to share it with you.
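
    To make the core idea concrete, here’s a toy sketch of what “parametric inversion” can mean (plain Python, purely illustrative and not ReverseFlow’s actual API): a non-injective forward program such as addition has many inputs that map to the same output, so its inverse takes an extra parameter that selects a point in the preimage.

        # Toy sketch of parametric inversion (illustrative only, not ReverseFlow's API).
        # The forward program y = a + b is non-injective: many (a, b) pairs give the same y.
        # A parametric inverse takes the observed output y plus a free parameter theta
        # and returns one concrete point in the preimage {(a, b) : a + b == y}.

        def forward(a, b):
            return a + b

        def parametric_inverse(y, theta):
            a = theta
            b = y - theta
            return a, b

        y_obs = 7.0
        for theta in (0.0, 2.5, 7.0):
            a, b = parametric_inverse(y_obs, theta)
            assert forward(a, b) == y_obs  # every choice of theta yields a valid preimage point
            print(theta, (a, b))

    Searching, optimizing, or placing a prior over parameters like theta is one way such an inverse connects back to the Bayesian modeling and probabilistic programming (Sigma.jl) mentioned above.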

    TWiML Online Meetup Update

    Join us on Tuesday, March 13th for the March edition of the Online Meetup! Sean Devlin will be doing an in-depth review of reinforcement learning and presenting the Google DeepMind paper, Playing Atari with Deep Reinforcement Learning. Head over to the meetup page to learn more or register.

    Conference Update

    Be sure to check out some of the great names that will be at the AI Conference in New York, April 29–May 2, where you’ll join leading minds in AI including Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI’s latest developments, separate what’s hype from what’s truly game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML. Early pricing ends February 2!

    About Zenna

    Mentioned in the Interview

    “More On That Later” by Lee Rosevere, licensed under CC BY 4.0

    • Marc Meketon

      Interesting talk. This could become a really exciting field.

      A few quick general questions:

      When Google published their paper on data center efficiency a few years ago, they wrote a lot about the ability to predict power usage, but never really discussed how the inverse program worked: given the loads, what settings should be chosen for the air conditioning and other controls? I wonder if they also used the concept of inverse programming?

      Autoencoders also seem like they have a built-in inverse program. I wonder if Zenna or others considered that?

      My last comment is on how to handle categorical data. Often, if there are, say, three categories for an input, it would be modeled as a 2-vector containing 0’s and at most one 1. I would think that inverse programming would be very difficult when part of what is being recovered is categorical, since it would come back as a 2-vector of real numbers that cannot easily be translated to a specific category.
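
      To make that concern concrete, here is a rough sketch (my own illustration, assuming a simple dummy coding of three categories as a 2-vector): the inverse typically recovers real numbers, which then have to be snapped back to the nearest valid encoding, losing information in the process.

          import numpy as np

          # Illustrative sketch only: three categories dummy-coded as 2-vectors.
          codes = np.array([[0.0, 0.0],   # category 0
                            [1.0, 0.0],   # category 1
                            [0.0, 1.0]])  # category 2

          # A real-valued 2-vector recovered by an inverse/optimization procedure.
          recovered = np.array([0.6, 0.3])

          # Snap it to the nearest valid code; how far the recovered point was from
          # a clean encoding is lost in this projection.
          dists = np.linalg.norm(codes - recovered, axis=1)
          category = int(np.argmin(dists))
          print(category)   # -> 1, even though [0.6, 0.3] is far from any exact code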
