Happy New Year!

I hope you had a great one and that you’re as pumped about 2018 as I am. I’ve enjoyed the opportunity to relax a bit with family over the past few weeks, but it’s also been great to jump back into the podcast, my research and other projects!

Before we turn to what’s next, a brief reflection on 2017… Last year was an exciting one for Team TWIML. It’s been a pleasure to bring you 75 interviews, meet you at 16 events, learn with you at 4 online meetups, send you tons of TWIML stickers, and field your feedback, questions and comments on Twitter, Facebook, our website, and our meetup’s Slack channel.

You helped us become a Top 40 Technology podcast on Apple Podcasts (we’ve seen it as high as number 33!), and break 1 million plays, two milestones we’re really proud of!

We couldn’t ask for a better community and our team is committed to bringing you even better content and programs in 2018.


What’s Ahead for AI in 2018

Speaking of 2018… I have mixed feelings about predictions posts. They can be fun, but so often they’re self-serving. So this isn’t one. Rather, it’s more of a what’s-on-my-radar-in-2018 post.

Here are a few of the things you should expect to hear more about from me and on the podcast this year.

  • AI in the Cloud. Cloud service providers are continuing to build out strong AI offerings. In the process, they make ML/AI more accessible with each passing month. This is why I expect cloud-based ML/AI to continue to grow, and why I’m planning to explore this area in my research and on the podcast in 2018.
  • Reinforcement Learning. Amassing labeled training data for supervised learning is expensive and time-consuming. This is one of the reasons reinforcement learning is so interesting: it lets intelligent agents learn through trial and error in simulated environments, guided by reward signals rather than labeled examples. We’ve tackled RL quite a bit on the show already and I expect to dig in even more in 2018. (For a taste of the core idea, see the Q-learning sketch after this list.)
  • Meta-Learning. Humans, even toddlers, can adapt intelligently to a wide variety of new, unseen situations. By teaching systems how to learn, meta-learning and related ideas like few-shot, one-shot and zero-shot learning seek to achieve the same for intelligent agents. This will be important for the same reason as RL. (A tiny few-shot classifier appears after this list.)
  • Capsule Networks. Deep learning luminary Geoffrey Hinton believes CNNs are dead, and that Capsule Networks are the next big thing in AI. CapsNets use what Hinton calls “inverse graphics” and groups of neurons called capsules to address some key weaknesses of convolutional networks, and have shown promise at recognizing overlapping and distorted objects in images. I’m looking forward to digging into CapsNets a bit more this year. (See the “squash” snippet after this list.)
  • AI Acceleration. While deep learning training is currently dominated by GPUs, a number of new hardware architectures are expected to come online in 2018 and beyond. I’m planning to survey these at some point this year and will be sure to share what I discover.
  • Algorithmic Fairness & Ethics. This is another topic we’ve touched on a bit, but not quite enough. As we hand algorithms more power to make decisions that affect people’s lives, it’s incumbent on us as a field to really understand the implications of what we’re building and how we’re building it.
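
To make the reinforcement learning idea concrete, here’s a minimal tabular Q-learning sketch in Python. The toy “corridor” environment, reward scheme and hyperparameters are all made up for illustration; real-world RL uses far richer simulators, but the learning loop has the same shape.

    import random

    # A toy "corridor" world: states 0..4, start at state 0, reward for reaching state 4.
    # Everything here (environment, hyperparameters) is hypothetical, chosen for illustration.
    N_STATES = 5
    ACTIONS = [0, 1]  # 0 = step left, 1 = step right

    def step(state, action):
        """Apply an action and return (next_state, reward, done)."""
        nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        done = nxt == N_STATES - 1
        return nxt, (1.0 if done else 0.0), done

    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table: q[state][action]
    alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount, exploration rate

    for _ in range(500):  # episodes
        state, done = 0, False
        while not done:
            if random.random() < epsilon:  # explore occasionally...
                action = random.choice(ACTIONS)
            else:                          # ...otherwise act greedily, breaking ties randomly
                best = max(q[state])
                action = random.choice([a for a in ACTIONS if q[state][a] == best])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward reward + discounted future value.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt

    print(q)  # after training, "right" (action 1) scores higher in every non-terminal state

Note that the agent is never shown a single labeled example; everything it learns comes from the reward signal.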
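
The few-shot side of meta-learning is also easy to sketch. Below is a nearest-prototype classifier in the spirit of prototypical networks: average the handful of labeled examples per class into a prototype, then label each query by its closest prototype. The data and function names are invented for this example; in a real meta-learning setup, the feature embedding itself would be trained across many such small tasks.

    import numpy as np

    def prototypes(support_x, support_y):
        """Average the few labeled examples per class into one prototype each."""
        classes = np.unique(support_y)
        return classes, np.stack([support_x[support_y == c].mean(axis=0) for c in classes])

    def classify(query_x, classes, protos):
        """Assign each query point to the class of its nearest prototype."""
        dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
        return classes[np.argmin(dists, axis=1)]

    # Two classes with just three labeled examples each ("3-shot"), synthetic 2-D features.
    rng = np.random.default_rng(0)
    support_x = np.vstack([rng.normal(0, 0.5, (3, 2)), rng.normal(3, 0.5, (3, 2))])
    support_y = np.array([0, 0, 0, 1, 1, 1])
    classes, protos = prototypes(support_x, support_y)
    print(classify(np.array([[0.2, -0.1], [2.8, 3.1]]), classes, protos))  # -> [0 1]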
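
And on CapsNets: one small, concrete piece of the Sabour, Frosst and Hinton paper is the “squash” non-linearity, which scales a capsule’s output vector to a length between 0 and 1 (so length can act as the probability that the feature the capsule detects is present) while preserving its direction. Here’s a minimal NumPy rendering of that formula:

    import numpy as np

    def squash(s, axis=-1, eps=1e-9):
        """Squash: v = (|s|^2 / (1 + |s|^2)) * (s / |s|). Length -> (0, 1), direction kept."""
        sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
        scale = sq_norm / (1.0 + sq_norm)
        return scale * s / np.sqrt(sq_norm + eps)

    v = squash(np.array([3.0, 4.0]))
    print(v, np.linalg.norm(v))  # same direction as [3, 4], length ~0.96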

There’s certainly more ground to explore in 2018, but these are a few of the topics that are top of mind for me as we head into the new year.

What’s on your mind? What do you want to learn more about? What should be on my radar as I plan my research and content calendar for the year?

Sign up for our Newsletter to have this delivered to your inbox each week.