Semantic Folding for Natural Language Understanding with Francisco Webber

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Today we’re joined by return guest Francisco Webber, CEO & Co-founder of Cortical.io.

Francisco was originally a guest over four years and 400 episodes ago, when we discussed his company and its unique approach to natural language processing. In this conversation, Francisco gives us an update on Cortical, covering its applications and toolkit, with semantic extraction, classifier, and search use cases. We also discuss GPT-3 and how it compares to semantic folding, the unreasonable amount of data needed to train these models, and the difference between the GPT approach and semantic modeling for language understanding.


“More On That Later” by Lee Rosevere licensed under CC By 4.0

1 comment
  • Thomas E

    I think his argument for why they can’t compare against benchmarks is incoherent. Why in the world would you not be able to at least measure the relative differences in real-world efficacy of different models/algorithms on a problem like entity or relation extraction, using academic benchmarks? It seems negligent to build any real-world predictive model without establishing a baseline at all; did you at least try fine-tuning a random Hugging Face transformer?
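    The relative comparison the commenter is asking for can be sketched in a few lines: score each system against the same gold annotations with span-level precision/recall/F1 and compare the numbers. The sketch below is illustrative, not from the episode; the `entity_f1` helper, the span format `(start, end, label)`, and the toy data are all assumptions.

    ```python
    def entity_f1(gold, pred):
        """Micro-averaged precision, recall, and F1 over exact-match entity spans.

        gold and pred are parallel lists (one entry per document) of
        (start, end, label) tuples.
        """
        tp = fp = fn = 0
        for g_doc, p_doc in zip(gold, pred):
            g, p = set(g_doc), set(p_doc)
            tp += len(g & p)   # spans both systems agree on
            fp += len(p - g)   # predicted spans not in the gold standard
            fn += len(g - p)   # gold spans the system missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Hypothetical gold annotations and outputs from two systems on two documents.
    gold    = [[(0, 5, "ORG"), (10, 18, "PER")], [(3, 9, "LOC")]]
    model_a = [[(0, 5, "ORG"), (10, 18, "PER")], [(3, 9, "LOC")]]                 # perfect
    model_b = [[(0, 5, "ORG")],                  [(3, 9, "LOC"), (12, 20, "PER")]]  # one miss, one spurious

    print(entity_f1(gold, model_a))  # → (1.0, 1.0, 1.0)
    print(entity_f1(gold, model_b))
    ```

    Even with two very different architectures under the hood, the same scoring function yields directly comparable numbers, which is the point of a shared benchmark.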
