The Enterprise LLM Landscape with Atul Deo



About this Episode

Today we’re joined by Atul Deo, General Manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models from scratch and the power of pre-trained models. We explore the different approaches companies can take to leverage large language models, how to deal with hallucinations, and the transformative potential of retrieval augmented generation (RAG). Finally, Atul gives us an inside look at Bedrock, a fully managed service that simplifies the deployment of generative AI-based apps at scale.
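For listeners new to the idea, the core loop of RAG can be sketched in a few lines: retrieve documents relevant to the user's question, then inject them into the prompt so the model answers from provided context rather than from memory alone, which helps curb hallucination. This is a minimal toy sketch using keyword overlap as the retriever; a production system (including one built on Bedrock) would use embeddings and a vector store, and the function names here are illustrative, not any particular API.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query, return top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved context (the 'A' in RAG)."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "Amazon Bedrock is a fully managed service for building generative AI apps.",
    "AWS Inferentia2 chips power Amazon EC2 Inf2 instances.",
]
prompt = build_prompt("What is Amazon Bedrock?", docs)
# The prompt now contains the Bedrock document as grounding context.
```

The final prompt would then be sent to the LLM; because the answer is constrained to the retrieved passage, the model has far less room to invent facts.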


Thanks to our sponsor Amazon Web Services

You know AWS as a cloud computing technology leader, but did you know the company offers a broad array of services and infrastructure at all three layers of the machine learning technology stack? AWS has helped more than 100,000 customers of all sizes and across industries innovate using ML and AI with industry-leading capabilities, and they’re taking the same approach to make it easy, practical, and cost-effective for customers to use generative AI in their businesses. At the bottom layer of the ML stack, they’re making generative AI cost-efficient with Amazon EC2 Inf2 instances powered by AWS Inferentia2 chips. At the middle layer, they’re making generative AI app development easier with Amazon Bedrock, a managed service that makes pre-trained foundation models (FMs) easily accessible via an API. And at the top layer, Amazon CodeWhisperer is now generally available, with support for more than 10 programming languages.

To learn more about AWS ML and AI services, and how they’re helping customers accelerate their machine learning journeys, visit


One Response

  1. Sam,
    I have been subscribed to TWIML for the last couple of years. Your guests and topics are a big part of helping me keep up with and connect the dots on the warp-speed-moving GenAI/LLM space! Thanks for the great content. This episode with Atul reinforced the latest developments on RAG and ReAct, and introduced Bedrock as another key resource in the growing GenAI ecosystem.

    Do you make the transcript available? Would be very helpful as an ongoing reference.
