by Sujit D'Mello
August 15, 2024
Azure OpenAI is widely used in industry, but there are a number of security aspects that must be taken into account when using the technology. Luckily for us, Audrey Long, a Software Engineer at Microsoft, security expert, and renowned conference speaker, gives us insights into securing LLMs and provides various tips, tricks, and tools to help developers use these models safely in their applications.
Media file: https://azpodcast.blob.core.windows.net/episodes/Episode502.mp3
YouTube: https://youtu.be/64Achcz97PI
Resources:
AI Tooling:
- Azure AI Tooling
- Prompt Shields to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, now available in preview in Azure AI Content Safety.
- Groundedness detection to detect “hallucinations” in model outputs, coming soon.
- Safety system messages to steer your model’s behavior toward safe, responsible outputs, coming soon.
- Safety evaluations to assess an application’s vulnerability to jailbreak attacks and its risk of generating harmful content, now available in preview.
- Risk and safety monitoring to understand what model inputs, outputs, and end users are triggering content filters to inform mitigations, now available in preview in Azure OpenAI Service.
- Microsoft Defender for Cloud
- AI Security Posture Management
- AI Workloads
- AI Red Teaming Tool
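As a sketch of how the prompt-attack detection tooling above can be wired into an application: the snippet below calls the Azure AI Content Safety Prompt Shields REST endpoint before a user prompt (and any retrieved documents) ever reaches the model, then checks the verdict. The endpoint path, the preview API version, and the response field names (`userPromptAnalysis`, `documentsAnalysis`, `attackDetected`) are assumptions based on the preview API; verify them against the current Azure AI Content Safety reference before relying on this.

```python
import json
import urllib.request


def shield_prompt(endpoint: str, key: str, user_prompt: str, documents: list[str]) -> dict:
    """Call the Prompt Shields preview API. Path and api-version are assumptions."""
    url = (f"{endpoint}/contentsafety/text:shieldPrompt"
           "?api-version=2024-02-15-preview")  # assumed preview API version
    body = json.dumps({"userPrompt": user_prompt, "documents": documents}).encode()
    req = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


def attack_detected(analysis: dict) -> bool:
    """Check both the direct (user prompt) and indirect (documents) verdicts."""
    if analysis.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(d.get("attackDetected") for d in analysis.get("documentsAnalysis", []))
```

A request flagged by either check would then be rejected (or routed for review) before the prompt is forwarded to the model, which is the point of screening for indirect prompt attacks in retrieved content as well as in the user's own input.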
AI Development Considerations:
- AI Assessment from Microsoft
- Microsoft Responsible AI Processes
- Define Use Case and Model Architecture
- Content Filtering System
- Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models.
- Additional classifiers are available for detecting jailbreak risks and known content for text and code.
- Red Teaming the LLM
- Create a Threat Model with OWASP Top 10
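To make the content filtering behavior above concrete: the helpers below inspect an Azure OpenAI chat completion for filter hits, so an application can tell a filtered generation apart from a normal stop. The per-choice `content_filter_results` map (category to `{"filtered": ..., "severity": ...}`) and the `finish_reason` value `"content_filter"` reflect the documented response schema, but treat the exact field names as assumptions to verify against the current Azure OpenAI reference.

```python
def was_filtered(response: dict) -> bool:
    """True when generation stopped because the output tripped a content filter."""
    return response["choices"][0].get("finish_reason") == "content_filter"


def flagged_categories(response: dict) -> list[str]:
    """Return the content-filter categories that fired on the first choice.

    Assumes the Azure OpenAI response shape where each choice carries a
    'content_filter_results' map like {"violence": {"filtered": True, ...}}.
    """
    results = response["choices"][0].get("content_filter_results", {})
    return sorted(cat for cat, verdict in results.items() if verdict.get("filtered"))
```

In practice an application would log the flagged categories (feeding the risk-and-safety monitoring mentioned above) and show the end user a generic refusal rather than the raw error.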
Other updates: