One of the exciting aspects of my day-to-day work involves understanding the way large companies are adopting machine learning, deep learning, and AI. I do quite a bit of this via interviews, and I’m excited to be able to bring you along for the ride by publishing many of them as podcasts.
One of the things I’ve observed in my formal and informal conversations is that we’re at a really interesting time for many enterprise machine learning early adopters. Here I’m referring to traditional enterprises—companies like Capital One, Comcast, Home Depot and Shell—that have been investing heavily in building ML capability over the past three to five years. While not quite as advanced in their use of ML as the Internet giants, relative to their peers, these companies are quite far along. They’ve established formal ML, AI and/or data science teams or centers of excellence, they’re promoting best practices across their organizations, and their first wave of ML projects has recently reached maturity.
The success of these early efforts has had an unsurprising result: a clamoring on the part of the business for machine learning-powered solutions to more and more of their business problems.
As increasing numbers of small and large machine learning, deep learning, and data science efforts get kicked off at these companies, these teams and the IT organizations they work with are starting to ask what can be done to better support ML as it scales to more projects and teams.
Part of the answer to successfully scaling ML is supporting data scientists and machine learning engineers with modern processes, tooling, and platforms. This is key to reducing the time it takes to deliver a model from idea to production and increasing the number of ML models that the organization is able to deploy with limited resources.
Now, if you’ve been following me or the podcast for a while, you know that “platforms” is one of the topics I really like to geek out on. And so, I’m excited to announce that we’ll be exploring this topic in depth here on the podcast over the next several weeks.
In our AI Platforms podcast series, which launched this week with interviews featuring Facebook and Airbnb, you’ll hear from folks building and supporting ML platforms at scale. We’ll be digging deep into the ML processes at their companies, the technologies they’re deploying to accelerate data science and ML engineering, the challenges they’re facing, what they’re excited about, and more.
In addition, as part of this effort, I’m publishing a series of ebooks on this topic. The first takes a bottom-up look at AI platforms and is focused on how to support machine learning at scale from an infrastructure perspective. I believe the open-source Kubernetes project—in use at Airbnb, Booking.com, and OpenAI—is a strong contender in this space, and have focused the first book on Kubernetes for Machine Learning, Deep Learning, and AI.
The second book in the series, Agile Machine Learning Platforms, explores the challenge of scaling data science and ML development from the top down. It showcases the internal platforms that companies like Airbnb, Facebook, Uber, and Google have built, examines the process disciplines those platforms embody, and considers what enterprises can learn from them about scaling ML development.
This is an exciting project for us here at TWiML, and the ebooks will be available soon on the TWiML website. If this is a topic you’re interested in, I’d encourage you to stay tuned to the podcast this month, and to visit twimlai.com/aiplatforms to sign up to be notified as soon as the books are published.