The Enterprise LLM Landscape with Atul Deo



About this Episode

Today we’re joined by Atul Deo, General Manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models, and the power of pre-trained models. We explore different approaches companies can take to leverage large language models, ways of dealing with hallucination, and the transformative process of retrieval-augmented generation (RAG). Finally, Atul gives us an inside look at Bedrock, a fully managed service that simplifies the deployment of generative AI-based apps at scale.
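The RAG pattern discussed in the episode can be summarized as: retrieve documents relevant to a user query, then inject them into the model's prompt so the answer is grounded in real sources rather than hallucinated. Below is a minimal, self-contained sketch of that flow. The retrieval step uses a toy keyword-overlap score for illustration; production systems (including those built on Bedrock) typically use vector embeddings and a vector store, and all function names here are hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Retrieval is a toy keyword-overlap score for illustration only;
# real systems usually embed documents and search a vector store.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents scored most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

docs = [
    "Amazon Bedrock is a fully managed service for building generative AI apps.",
    "AWS Trainium is a purpose-built accelerator for model training.",
    "RAG retrieves relevant documents and adds them to the model's prompt.",
]
print(build_prompt("What is Amazon Bedrock?", docs))
```

The resulting prompt string would then be sent to a hosted foundation model; because the model is instructed to answer only from the supplied context, its output stays tied to the retrieved sources.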


Thanks to our sponsor Amazon Web Services

You know AWS as a cloud computing technology leader, but did you realize the company offers a broad array of services and infrastructure at all three layers of the machine learning technology stack? AWS has been focused on making ML accessible to customers of all sizes and across industries, and over 100,000 of them trust AWS for machine learning and artificial intelligence services. AWS is constantly innovating across all areas of ML, including infrastructure, tools on Amazon SageMaker, and AI services such as Amazon CodeWhisperer, an AI-powered code companion that improves developer productivity by generating code recommendations based on the code and comments in an IDE. AWS also created purpose-built ML accelerators for the training (AWS Trainium) and inference (AWS Inferentia) of large language and vision models on AWS.

To learn more about AWS ML and AI services, and how they’re helping customers accelerate their machine learning journeys, visit



One Response

  1. Sam,
    I have been subscribed to TWIML for the last couple of years. Your guests and topics are a big part of helping me keep up with and connect the dots on the warp speed-moving GenAI/LLM space! Thanks for the great content. This episode with Atul reinforced the latest developments on RAG, ReAct and introduced Bedrock as another key resource in the growing GenAI ecosystem.

    Do you make the transcript available? Would be very helpful as an ongoing reference.
