AI Frontiers: Measuring and mitigating harms with Hanna Wallach

In this episode of the AI Frontiers podcast, Partner Research Manager Hanna Wallach describes her research into fairness, accountability, transparency, and ethics in AI and machine learning, and how it informs the use of AI in Microsoft products and services. During their involvement in the deployment of Bing Chat, Wallach and her team of applied scientists expanded their tools for measuring fairness-related harms in AI systems to address harmful content more broadly. She also discusses filtering, a technique that helps mitigate harms that are rarely talked about. Additionally, Wallach describes the cross-company collaboration that brings policy, engineering, and research together to evolve and execute Microsoft's approach to developing and deploying AI responsibly. The conversation explores the implications of the latest advancements in artificial intelligence, touching on Microsoft's approach to creating, understanding, and deploying AI, its applications across fields, and its potential to benefit humanity.

In conclusion, this episode offers valuable insights into the responsible development and deployment of AI, particularly with respect to fairness, accountability, transparency, and ethics. It is worth a listen for anyone interested in the latest advancements in AI and their implications for society.

Learn more:

Microsoft AI: Responsible AI Principles and Approach  

AI and Microsoft Research

The original post can be accessed on the podcast site of Microsoft Research.
