Today we’re joined by Ed Anuff, chief product officer at DataStax. In our conversation, we discuss Ed’s insights on RAG, vector databases, embedding models, and more. We dig into the indexing techniques that underpin modern vector databases, like HNSW and DiskANN, which allow them to efficiently handle massive, unstructured data sets, and discuss how they help users serve up relevant results for RAG, AI assistants, and other use cases. We also discuss embedding models and their role in vector comparisons and database retrieval, as well as the potential for GPU usage to enhance vector database performance.
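For listeners who want a concrete picture of the vector comparison at the heart of this conversation, here is a minimal sketch of brute-force embedding retrieval. The random vectors stand in for real embedding-model output, and the dimensions and corpus size are arbitrary assumptions; approximate-nearest-neighbor indexes like HNSW and DiskANN exist precisely to return results close to this exhaustive search without scanning every vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend corpus: 1,000 documents embedded into 384-dimensional vectors.
# In a real system these would come from an embedding model.
doc_embeddings = rng.normal(size=(1000, 384)).astype(np.float32)
doc_embeddings /= np.linalg.norm(doc_embeddings, axis=1, keepdims=True)

# Pretend query embedding produced by the same model.
query = rng.normal(size=384).astype(np.float32)
query /= np.linalg.norm(query)

# Cosine similarity reduces to a dot product on unit-normalized vectors.
scores = doc_embeddings @ query

# Top-5 most similar documents: the result an ANN index approximates
# without comparing the query against every stored vector.
top_k = np.argsort(scores)[::-1][:5]
print(top_k, scores[top_k])
```

The exhaustive scan above grows linearly with corpus size, which is why graph-based indexes such as HNSW (in-memory) and DiskANN (SSD-resident) come up in the episode as the workhorses behind large-scale vector search.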
I’d like to send a big thanks to DataStax for their support of the podcast and their sponsorship of today’s show. DataStax is the real-time AI company. With DataStax, any enterprise can mobilize real-time data and quickly build smart, high-growth GenAI applications at unlimited scale, on any cloud. Companies building real-time generative AI apps can leverage DataStax’s vector search capabilities to build LLM applications, AI assistants, and more. To learn more, head to twimlai.com/datastax.