149 - AI Frontiers: The future of scale with Ahmed Awadallah and Ashley Llorens
Powerful large-scale AI models like GPT-4 are showing dramatic improvements in reasoning, problem-solving, and language capabilities. This marks a phase change for artificial intelligence—and a signal of accelerating progress to come.
In this Microsoft Research Podcast series, AI scientist and engineer Ashley Llorens hosts conversations with his collaborators and colleagues about what these models—and the models that will come next—mean for our approach to creating, understanding, and deploying AI, its applications in areas such as healthcare and education, and its potential to benefit humanity.
This episode features Senior Principal Research Manager Ahmed H. Awadallah, whose work on improving the efficiency of large-scale AI models and on moving advancements in the space from research to practice has put him at the forefront of this new era of AI. Awadallah discusses the shifting dynamics between model size and the amount, and quality, of training data; the recently published paper “Orca: Progressive Learning from Complex Explanation Traces of GPT-4,” which further explores the use of large-scale AI models to improve the performance of smaller, less powerful ones; and the need for better evaluation strategies, particularly as we move toward a future in which Awadallah hopes to see gains in these models’ ability to continually learn.
Learn more:
- Orca: Progressive Learning from Complex Explanation Traces of GPT-4, June 2023
- Textbooks Are All You Need II: phi-1.5 technical report, September 2023
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework, August 2023
- LIDA: Automatic Generation of Grammar-Agnostic Visualizations and Infographics using Large Language Models, March 2023
- AI Explainer: Foundation models and the next era of AI, March 2023
- AI and Microsoft Research