Webinar

Securing AI with Microsoft Purview

16 July 2026 | 10:00 AM
45 mins
Data leakage, regulatory non-compliance, and loss of control over sensitive information are just some of the risks of using AI tools. This session shows you how to use Microsoft Purview to enable AI safely without compromising security, privacy, or your compliance obligations.

In addition to covering Purview’s capabilities across governance, visibility, and protection, this webinar provides a walkthrough of a real‑world scenario in which Purview blocks an AI prompt that would expose highly confidential project information, while logging the activity for security and audit teams.

Webinar Highlights

By attending this webinar you will learn:

  • How the AI threat landscape is evolving: covering accidental data leakage, over‑sharing, and lack of user awareness when interacting with AI tools.
  • How Purview provides a “safety bubble” for AI, using Data Security Posture Management (DSPM) to give visibility into AI usage, data exposure, and risk signals across the organisation.
  • How Purview enables organisations to retain AI prompt and response activity for legal, regulatory, and investigative purposes using audit logs, eDiscovery, and retention policies.
  • The three primary controls used to secure AI interactions:
    • Sensitivity Labels that persist when AI rewrites or summarises content
    • Data Loss Prevention (DLP) to block or warn on risky AI prompts in real time
    • Insider Risk Management (IRM) to identify risky behaviour by AI agents

Speakers

Liam Newton

Senior Microsoft Purview Consultant

Bridewell

James Cradock

Senior Microsoft Purview Consultant

Bridewell

Explore our Microsoft Purview Service

Register to Watch