Today, we're joined by Abhijit Bose, head of enterprise AI and ML platforms at Capital One, to discuss the evolution of the company's approach to generative AI and its platform best practices. In this episode, we dig into the company's platform-centric approach to AI, and how it has been evolving its existing MLOps and data platforms to support the new challenges and opportunities presented by generative AI workloads and AI agents. We explore the company's use of cloud-based infrastructure—in this case on AWS—as a foundation upon which it layers open-source and proprietary services and tools. We cover its use of Llama 3 and other open-weight models, its approach to fine-tuning, its observability tooling for generative AI applications, its use of inference optimization techniques like quantization, and more. Finally, Abhijit shares his perspective on the future of agentic workflows in the enterprise, the application of OpenAI o1-style reasoning in models, and the new roles and skill sets required in the evolving generative AI landscape.
I'd like to send a huge thanks to our friends at Capital One for their support of the podcast and sponsorship of today's show. Capital One is embedding AI throughout its business with proprietary solutions built on its modern tech stack. Its AI advances—from state-of-the-art fraud detection to generative AI agent servicing capabilities—are helping improve the financial lives of over 100 million customers and thousands of associates.
To learn more about joining their team to solve some of the most challenging problems in finance using science and AI, visit CapitalOne.com/AI.