Today, we're joined by Byron Cook, VP and distinguished scientist in the Automated Reasoning Group at AWS, to dig into the underlying technology behind the newly announced Automated Reasoning Checks feature of Amazon Bedrock Guardrails. Automated Reasoning Checks uses mathematical proofs to help LLM users safeguard against hallucinations. We explore recent advancements in the field of automated reasoning, as well as some of the ways it is applied both broadly and across AWS, where it is used to enhance security, cryptography, virtualization, and more. We discuss how the new feature helps users generate, refine, validate, and formalize policies, and how those policies can be deployed alongside LLM applications to ensure the accuracy of generated text. Finally, Byron also shares the benchmarks they've applied, the use of techniques like 'constrained coding' and 'backtracking,' and the future co-evolution of automated reasoning and generative AI.
I'd like to send a huge thanks to our friends at AWS for their support of the podcast and their sponsorship of today's episode. Amazon Bedrock is a fully managed platform for building and scaling generative AI applications and is used by tens of thousands of AWS customers today. To help developers build genAI responsibly, AWS offers Amazon Bedrock Guardrails, which allows users to set up configurable safeguards for their applications using policies like filters, contextual grounding checks, and now, automated reasoning checks. This new feature uses automated reasoning to detect hallucinations and, for the first time, provide mathematically verifiable proof that your model's responses are accurate. Automated reasoning checks let users enhance the reliability of their applications for use cases where accuracy is critical, such as HR, finance, compliance, and more. Stay tuned to dig into the underlying science, or visit twimlai.com/awsarc to learn more.