Episode 502 - Azure OpenAI and Security
Azure OpenAI is widely used in industry, but there are a number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert, and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks, and tools to help developers use these models safely in their applications.
Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3
YouTube: https://youtu.be/64Achcz97PI
Resources:
AI Tooling:
- Azure AI Tooling: “Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications” | Microsoft Azure Blog
- Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, now available in preview in Azure AI Content Safety.
- Groundedness detection to detect “hallucinations” in model outputs, coming soon.
- Safety system messages to steer your model’s behavior toward safe, responsible outputs, coming soon.
- Safety evaluations to assess an application’s vulnerability to jailbreak attacks and to generating content risks, now available in preview.
- Risk and safety monitoring to understand which model inputs, outputs, and end users are triggering content filters, to inform mitigations, now available in preview in Azure OpenAI Service.
- AI Defender for Cloud
- AI Red Teaming Tool
AI Development Considerations:
- AI Assessment from Microsoft
- Microsoft Responsible AI Processes
- Define Use Case and Model Architecture
- Content Filtering System
- How to use content filters (preview) with Azure OpenAI Service - Azure OpenAI | Microsoft Learn
- Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. The system uses an ensemble of classification models to detect and prevent harmful content in both input prompts and output completions.
- The filtering system covers four main categories: hate, sexual, violence, and self-harm.
- Each category is assessed at four severity levels: safe, low, medium, and high.
- Additional classifiers are available for detecting jailbreak risks and known content for text and code (Jailbreaking Content Filters).
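The category and severity scheme above lends itself to a small post-processing step on the filter annotations Azure OpenAI attaches to responses. A minimal sketch, assuming the documented annotation shape (`{"hate": {"filtered": ..., "severity": ...}, ...}`); the threshold policy itself is illustrative, not the service's own behavior:

```python
# Severity levels in ascending order, and the four main filter categories.
SEVERITY_ORDER = ["safe", "low", "medium", "high"]
CATEGORIES = ["hate", "sexual", "violence", "self_harm"]

def violations(filter_results: dict, threshold: str = "medium") -> list[str]:
    """Return the categories the service filtered, or whose severity
    meets or exceeds the given threshold."""
    limit = SEVERITY_ORDER.index(threshold)
    flagged = []
    for cat in CATEGORIES:
        result = filter_results.get(cat, {})
        severity = result.get("severity", "safe")
        if result.get("filtered") or SEVERITY_ORDER.index(severity) >= limit:
            flagged.append(cat)
    return flagged

# Example annotation as it might appear on a completion choice:
sample = {
    "hate": {"filtered": False, "severity": "safe"},
    "sexual": {"filtered": False, "severity": "safe"},
    "violence": {"filtered": True, "severity": "medium"},
    "self_harm": {"filtered": False, "severity": "low"},
}
print(violations(sample))  # → ['violence']
```

An application could use the returned list to log a mitigation event or substitute a refusal message rather than surfacing the filtered completion.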
- Red Teaming the LLM
- Create a Threat Model with OWASP Top 10
Other updates:
- Los Angeles Azure Extended Zones
- Carbon Optimization
- App Config Ref GA
- OS SKU In-Place Migration for AKS
- Operator CRD Support with Azure Monitor Managed Service
- Azure API Center Visual Studio Code Extension Pre-release
- Azure API Management WordPress Plugin
- Announcing a New OpenAI Feature for Developers on Azure