AI Trends 2023: Natural Language Processing – ChatGPT, GPT-4 and Cutting Edge Research with Sameer Singh
EPISODE 613 | JANUARY 23, 2023
About this Episode
Today we continue our AI Trends 2023 series joined by Sameer Singh, an associate professor in the Department of Computer Science at UC Irvine and a fellow at the Allen Institute for Artificial Intelligence (AI2). In our conversation with Sameer, we focus on the latest advancements in the field of NLP, starting with the one that took the internet by storm just a few short weeks ago: ChatGPT. We explore top themes like decomposed reasoning, causal modeling in NLP, and the need for “clean” data. We also discuss projects like Hugging Face’s BLOOM, the debacle that was the Galactica demo, the impending intersection of LLMs and search, use cases like Copilot, and of course, we get Sameer’s predictions for what will happen this year in the field.
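For readers curious about the “decomposed reasoning” theme that comes up in the conversation (and in several of the prompting papers listed below), here is a minimal, hedged sketch of the idea: rather than asking a language model a complex question in one shot, the question is broken into sub-questions whose intermediate answers are fed back into the prompt. The `complete` function is a hypothetical stand-in for whatever LLM completion API you use; it is not from any specific paper or library discussed in the episode.

```python
def complete(prompt: str) -> str:
    """Hypothetical LLM call; wire this up to your provider's completion API."""
    raise NotImplementedError("replace with a real model call")


def answer_by_decomposition(question: str, sub_questions: list[str]) -> str:
    """Answer `question` by solving hand-written sub-questions in order,
    appending each intermediate answer to the running context."""
    context = f"Main question: {question}\n"
    for sub_q in sub_questions:
        context += f"Sub-question: {sub_q}\nAnswer:"
        sub_answer = complete(context).strip()
        context += f" {sub_answer}\n"
    # Final step: ask for the overall answer given the worked sub-answers.
    context += "Therefore, the final answer to the main question is:"
    return complete(context).strip()


# Example usage (with a real `complete`, this would chain two factual lookups):
# answer_by_decomposition(
#     "How many years after the telephone was patented was the transistor invented?",
#     ["In what year was the telephone patented?",
#      "In what year was the transistor invented?"],
# )
```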
About the Guest
Sameer Singh
UC Irvine
Resources
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Teaching Algorithmic Reasoning via In-context Learning
- Successive Prompting for Decomposing Complex Questions
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks
- TALM: Tool Augmented Language Models
- Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks
- PAL: Program-aided Language Models
- Impact of Pretraining Term Frequencies on Few-Shot Reasoning
- Measuring Causal Effects of Data Statistics on Language Model's “Factual” Predictions
- The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset
- Illustrating Reinforcement Learning from Human Feedback (RLHF)
- Aligning Language Models to Follow Instructions
- Scaling Instruction-Finetuned Language Models
- Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
- NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics
- TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
- OPT: Open Pre-trained Transformer Language Models
- BigScience Large Open-science Open-access Multilingual Language Model
- Inverse Scaling Prize
- Galactica: A Large Language Model for Science
- TabLLM: Few-shot Classification of Tabular Data with Large Language Models
- Minerva: Solving Quantitative Reasoning Problems with Language Models
- Building Open-Ended Embodied Agents with Internet-Scale Knowledge
- Cramming: Training a Language Model on a Single GPU in One Day
- Training Compute-Optimal Large Language Models
- Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling
- PEER: A Collaborative Language Model
- FRUIT: Faithfully Reflecting Updated Information in Text
- Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurements
- You.com
- Perplexity.ai
- NeevaAI
- NotionAI
- Jasper
- GitHub Copilot
- CodeGen
- InCoder
- AlphaCode
- ChatGPT
- Petals
- ROOTS search tool
- Ought
- Cohere for AI
- AI Rewind 2020: Trends in Natural Language Processing with Sameer Singh
- Beyond Accuracy: Behavioral Testing of NLP Models with Sameer Singh
- Multimodal, Multi-Lingual NLP at Hugging Face with Douwe Kiela, John Bohannon
- Big Science and Embodied Learning at Hugging Face with Thomas Wolf
- Engineering an ML-Powered Developer-First Search Engine with Richard Socher

