In this episode of our AI Platforms series, we're joined by Daniel Jeavons, General Manager of Data Science at Shell.
In our conversation, Daniel and I explore the evolution of analytics and data science at Shell, and cover a ton of interesting machine learning use cases the company is pursuing, such as well drilling and smart car charging. A good bit of our conversation centers around IoT-related applications and issues, such as inference at the edge, federated machine learning, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell, the importance of platform technologies to Daniel's organization and the company as a whole, and some of the technologies he and his team are excited about introducing to the company.
As many of you know, part of my work involves understanding the way large companies are adopting machine learning, deep learning and AI. While it's still fairly early in the game, we're at a really interesting time for many companies. With the first wave of ML projects at early adopter enterprises starting to mature, many of them are asking themselves how they can scale up their ML efforts to support more projects and teams.
Part of the answer to successfully scaling ML is supporting data scientists and machine learning engineers with modern processes, tooling and platforms. Now, if you've been following me or the podcast for a while, you know that this is one of the topics I really like to geek out on.
Well, I'm excited to announce that we'll be exploring this topic in depth here on the podcast over the next several weeks. You'll hear from folks building and supporting ML platforms at a host of different companies. We'll be digging deep into the technologies they're deploying to accelerate data science and ML development in their companies, the challenges they're facing, what they're excited about, and more.
In addition, as part of this effort, I'm publishing a series of eBooks on this topic. The first of them takes a bottom-up look at AI platforms and is focused on the open source Kubernetes platform, which is used to deliver scalable ML infrastructure at OpenAI, Booking.com, Matroid and many other companies. It'll be available soon on the TWIML website, and will be followed shortly thereafter by the second book in the series, which looks at scaling data science and ML engineering from the top down, exploring the internal platforms companies like Facebook, Uber, and Google have built, the process disciplines they embody, and what enterprises can learn from them.
If this is a topic you're interested in, I'd encourage you to visit twimlai.com/aiplatforms and sign up to be notified as soon as these books are published.