GS-102Axalon Original
AI Security with Model Armor
Taught by Dan King, 4x Google Trainer of the Year
1 dayIntensiveAdvancedLoading...
Public class policy: Classes run with a minimum of 6 participants. If minimum enrollment isn't reached, you'll be notified 7 days before with options to transfer or receive a full refund.
Overview
Secure your AI applications using Model Armor and Google Cloud security tools. Learn to protect against prompt injection, data leakage, adversarial attacks, and model manipulation.
What You'll Learn
- Understand AI-specific security threats
- Configure Model Armor for prompt injection protection
- Implement sensitive data filtering
- Use Security Command Center for AI monitoring
- Design defense-in-depth for AI applications
Who Should Attend
Security engineers, DevSecOps, security architects, IT security managers
Prerequisites
Google Cloud security fundamentals
Products Covered
Model ArmorSecurity Command CenterCloud DLPVPC Service Controls
Course Modules
1
AI Security Threat Landscape
Topics
- Prompt injection attacks
- Jailbreaking techniques
- Data exfiltration risks
- Model poisoning and manipulation
2
Model Armor Deep Dive
Topics
- Architecture and capabilities
- Configuration and deployment
- Prompt injection detection
- Jailbreak prevention
3
Sensitive Data Protection
Topics
- PII detection and filtering
- Data loss prevention for AI
- Compliance-driven filtering
- Custom filter policies
4
AI Security Operations
Topics
- Security Command Center integration
- Monitoring and alerting
- Incident response for AI systems
- Red teaming your AI applications
Get This Training
No public classes currently scheduled. Express interest below or request private training.
Course Details
- Course Code
- GS-102
- Duration
- 1 day
- Format
- Intensive
- Level
- Advanced
- Price
- Loading...
Questions About This Course?
Contact us for custom scheduling, group discounts, or curriculum customization.
Contact UsStarting fromLoading...