Copilot Studio Agent Vulnerability to Prompt Injection

Security researchers documented a prompt injection vulnerability in an agent created with Copilot Studio that allowed the exfiltration of customer data. Microsoft has fixed the problem, but the researchers figure that natural language prompts and the way that AI responds means that other ways will be found to cause agents to do silly things. Microsoft 365 tenants need to think about the deployment and management of agents.
Published on:
Learn more