AI Frontiers: Measuring and mitigating harms with Hanna Wallach
This episode of the Microsoft Research Podcast features a conversation with Partner Research Manager Hanna Wallach, who has extensively researched fairness, accountability, transparency, and ethics in AI and machine learning. Her insights have informed the use of AI in Microsoft products and services for years. Wallach explains how, during their work on the Bing Chat deployment, she and her team expanded their tools for measuring fairness-related harms in AI systems to address harmful content more broadly. She also discusses her interest in filtering, a widely used but rarely discussed technique for mitigating harms, and describes how cross-company collaboration brings policy, engineering, and research together to execute Microsoft's approach to responsible AI development and deployment.
The episode is part of a series hosted by AI scientist and engineer Ashley Llorens, who engages in conversations with his colleagues and collaborators about the latest in AI. The series explores what the latest AI models, such as GPT-4, mean for our approach to understanding, creating, and deploying AI. It also examines AI's potential to benefit humanity and its applications in areas such as healthcare and education. With powerful AI models showing dramatic improvements in reasoning, problem-solving, and language capabilities, the pace of AI research is accelerating.
For more information, check out Microsoft AI: Responsible AI Principles and Approach and AI and Microsoft Research.