April 21, 2025
Microsoft Azure

Mastering Azure and AI adoption: How to design and build secure AI projects

Microsoft Azure logo

Joey Snow walks through how to design secure AI workloads by starting where the trouble lives: the threat model. He breaks down what’s familiar (DDoS, classic app threats) and what’s uniquely spicy in generative AI—direct and indirect prompt injection, jailbreak attempts, data leakage, and the risks that show up when you add system messages, grounding, and retrieval-augmented generation (RAG) into the mix. From there, he maps practical guardrails across the stack: tight authentication and access control with Microsoft Entra ID and managed identities instead of long-lived keys and secrets, network segmentation with virtual networks (with a memorable London Underground analogy), and edge protection using a web application firewall deployed with Azure Front Door or Azure Application Gateway. He also spotlights Azure AI Content Safety capabilities, including detecting jailbreak patterns, blocking prompt injection attempts, and watching for protected material and intellectual property issues—because “security” is not a vibe, it’s a plan.

We produced this as a pre-recorded, streamed event that feels clean, confident, and suspiciously calm—because we did the chaos in advance and locked it in the edit bay. We handled show development, designed the graphics package, prepared the speaker, produced the footage, edited it into a crisp, coherent session, and streamed it to multiple social media channels using our remote studio and streaming platform. The payoff is simple: your experts look natural, your message lands clearly, and your audience gets a polished on-demand asset that keeps working long after the stream ends.

Microsoft Azure logo
Share this video