Today, we're joined by Patricia Thaine, co-founder and CEO of Private AI, to discuss techniques for ensuring privacy, data minimization, and compliance when using third-party large language models (LLMs) and other AI services. We explore the risks of data leakage from LLMs and embeddings, the complexities of identifying and redacting personal information across various data flows, and the approach Private AI has taken to mitigate these risks. We also dig into the challenges of entity recognition in multimodal systems spanning OCR'd files, documents, images, and audio, and the importance of data quality and model accuracy. Additionally, Patricia shares insights on the limitations of data anonymization, the benefits of balancing real-world and synthetic data in model training and development, and the relationship between privacy and bias in AI. Finally, we touch on the evolving landscape of AI regulations like GDPR, CPRA, and the EU AI Act, and the future of privacy in artificial intelligence.
A big thanks to Forum Ventures for supporting the pod and sponsoring this episode.
Forum is a leading early-stage venture fund for B2B SaaS startups. Run by former entrepreneurs, Forum works with deeply technical founders, providing strategic guidance on go-to-market, sales, and fundraising. Patricia Thaine, my guest in this episode, went through Forum's accelerator in 2020, and her company, Private AI, has since gone on to raise over $8 million. Patricia raves about Forum's hands-on support, expert guidance, and vast investor network, which helped her achieve meaningful traction and growth.
If you're excited about the future of AI infrastructure, agents, privacy, robotics, or deep tech, and you're ready to build, apply to their Accelerator program or AI Venture Studio today for an investment of up to $250,000. Visit forumvc.com to learn more.