This Week in Machine Learning & AI

    Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley


    Today, I’m joined by Kenneth Stanley, Professor in the Department of Computer Science at the University of Central Florida and senior research scientist at Uber AI Labs.

    Kenneth studied under TWiML Talk #47 guest Risto Miikkulainen at UT Austin, and joined Uber AI Labs after Geometric Intelligence, the company he co-founded with Gary Marcus and others, was acquired in late 2016. Kenneth’s research focus is what he calls Neuroevolution, which applies the ideas of genetic algorithms to the challenge of evolving neural network architectures. In this conversation, we discuss the Neuroevolution of Augmenting Topologies (or NEAT) paper that Kenneth authored along with Risto, which won the International Society for Artificial Life’s 2017 Award for Outstanding Paper of the Decade 2002–2012. We also cover some of the extensions to that approach he’s created since, including HyperNEAT, which can efficiently evolve very large networks with connectivity patterns that look more like those of the human brain, and which are generally much larger than what prior approaches to neural learning could produce, and novelty search, an approach which, unlike most evolutionary algorithms, has no defined objective, but rather simply searches for novel behaviors. We also cover concepts like “Complexification” and “Deception,” the differences and similarities between biology and computation, and some of his other work, including his book and NERO, a video game complete with real-time Neuroevolution. This is a meaty “Nerd Alert” interview that I think you’ll really enjoy.
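To make the novelty search idea above concrete, here is a toy sketch (not Kenneth’s actual implementation): individuals are scored by how far their behavior lies from behaviors already seen, and nothing in the loop rewards any particular objective. The `novelty_search` function, its parameters, and the use of a single float as a stand-in “behavior” are illustrative assumptions; in real novelty search the behavior would characterize what an evolved network actually does.

```python
import random

def novelty(behavior, others, k=3):
    # Novelty = mean distance to the k nearest neighbors in behavior space.
    dists = sorted(abs(behavior - o) for o in others)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    # A "behavior" is just a float here; in practice it would be a
    # characterization of an evolved network's behavior.
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    archive = []  # behaviors that were novel when first encountered
    for _ in range(generations):
        scored = []
        for i, b in enumerate(population):
            others = archive + population[:i] + population[i + 1:]
            scored.append((novelty(b, others), b))
        scored.sort(reverse=True)           # most novel first
        archive.extend(b for _, b in scored[:2])  # remember the most novel
        # Reproduce from the most novel half, with small mutations --
        # there is no objective fitness anywhere in the loop.
        parents = [b for _, b in scored[: pop_size // 2]]
        population = [p + rng.gauss(0, 0.1) for p in parents for _ in range(2)]
    return archive
```

Run on its own, the archive steadily accumulates behaviors spread across the space, which is the point of the approach: exploration emerges from rewarding novelty rather than progress toward a goal.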

    Giveaway Update!

    Thanks to everyone who took the time to enter our #TWiML1MIL listener giveaway! We sent out an email to entrants a few days ago, so please be on the lookout for that. If you haven’t heard from us yet, please reach out to us so that we can get you your swag!

    TWiML Online Meetup

    The details for our January Meetup are set! On Tuesday, January 16, we will be joined by former TWiML guest and Microsoft researcher Timnit Gebru. Timnit joined us a few weeks ago to discuss her recently released and much-acclaimed paper, “Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States,” and I’m excited that she’ll be joining us to discuss the paper, and the pipeline she used to identify 22 million cars in 50 million Google Street View images, in more detail. I’m also anticipating a lively discussion segment, in which we’ll be exploring your AI resolutions and predictions for 2018. For links to the paper, to register for the meetup, or to check out previous meetups, visit

    About Kenneth

    Mentioned in the Interview

    • Joe Perez

      This is one of the best interviews delivered by TWiML & AI ever. It ranks close to Francisco Weber and Mat Taylor. It once more makes it clear how foolish it is to try to navigate towards strong AI without using the map which nature (biology and neuroscience) provides us with. It may be true that many different paths lead to Rome, but why do the majority insist on getting there blindfolded, when nature delivers us a beautiful map with all the turns, mountains and valleys on the way there? Please do not misunderstand my critical remarks. I admire and appreciate all the mathematical and statistical geniuses out there who are so wise they can even make a flawed neurological model work better than most natural ones at certain narrow AI tasks. But firing the “Linguist” is not the path toward AGI. We need more Neuroscientists, Biologists, Linguists and domain specialists to hold the torch for the “mainstream AI” specialists.

      I would also love to listen to an interview with Jeff Hawkins (Numenta), the author of “On Intelligence”. His insights have been quoted and referred to in other interviews, but have not been given enough attention yet. Especially how the changes to the neuroscientific paradigm shed light on many of the shortcomings of other current, more popular, narrow approaches, which would benefit from taking this newer paradigm into account (i.e. the 3 states of a neuron, including the predictive state, instead of only 2 states in present models; the cortical column as the unit of representation, instead of individual neurons; the additive, distal and temporal effects on activation, instead of the sigmoid activation function of single synapses; and the concept of sparse distributed representation, SDR, as a foundation for classification and inference efficiency, instead of the dense knowledge representation models still in use). Is it not arrogant of us to ignore millions of years of evolution with the pretense that we can outwit nature? Even if we could, we would still owe this to the very nature we are denying our attention to. Thank you for this great interview with Dr. Kenneth Stanley. And thank you to him, for sharing his wonderful insights with the world.

      • sam

        Glad to hear you enjoyed it, Joe!

        I agree with your comments regarding the need for multiple disciplines to contribute to making progress in AI. Yael Niv spoke to this point as well in our recent interview from NIPS. I find the work that Numenta is doing to create a more biologically “true” neurological model both intriguing and important, but at the same time I’ve asked a bunch of times and haven’t gotten a very clear answer to the question “what is the killer app?” for these models; in other words, where do they outperform all other approaches, for some arbitrary definition of outperform that can include both hard metrics like accuracy as well as softer ones like ease of use. That said, getting Jeff on the show is a great idea and one we’ll work on!

        Thanks for your comment!

    Leave a Reply

    Your email address will not be published.