
Vertex AI Safety with Terraform: Model Armor + Gemini Content Filters as Code 🛡️
GCP gives you two layers of AI safety: Gemini's built-in content filters and Model Armor for PII, prompt injection, and malicious URL detection. Here's how to deploy both with Terraform.

You deployed your first Vertex AI endpoint (Post 1). Gemini responds, tokens flow. But what stops it from leaking a customer's SSN, falling for a prompt injection, or generating harmful content?

GCP gives you two safety layers that work together:

- Gemini Safety Settings - per-request content filters (hate speech, harassment, dangerous content, sexually explicit) configured in your application code via environment variables
- Model Armor - a standalone security service for prompt injection detection, PII/DLP filtering, malicious URL scanning, and RAI content safety, all managed with Terraform

This is fundamentally different from AWS Bedrock, where guardrails are a single unified resource. On GCP, content filtering lives at the model level, while PII and injection protection is a separate service.
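As a sketch of the Terraform side, a Model Armor template enabling all four filter families might look like this. It uses the `google_model_armor_template` resource from the Google provider; the `template_id`, location, and confidence levels here are illustrative assumptions, and the exact attribute names should be checked against your provider version:

```hcl
# Sketch: a Model Armor template covering RAI content safety,
# prompt injection/jailbreak, malicious URLs, and basic PII (SDP).
resource "google_model_armor_template" "guard" {
  location    = "us-central1"  # assumption: pick your region
  template_id = "prod-guard"   # assumption: any valid ID

  filter_config {
    # Responsible AI content filters
    rai_settings {
      rai_filters {
        filter_type      = "HATE_SPEECH"
        confidence_level = "MEDIUM_AND_ABOVE"
      }
      rai_filters {
        filter_type      = "DANGEROUS"
        confidence_level = "MEDIUM_AND_ABOVE"
      }
    }

    # Prompt injection / jailbreak detection
    pi_and_jailbreak_filter_settings {
      filter_enforcement = "ENABLED"
      confidence_level   = "MEDIUM_AND_ABOVE"
    }

    # Malicious URL scanning
    malicious_uri_filter_settings {
      filter_enforcement = "ENABLED"
    }

    # Sensitive Data Protection (PII/DLP) with the basic config
    sdp_settings {
      basic_config {
        filter_enforcement = "ENABLED"
      }
    }
  }
}
```

Your application then references this template when sanitizing prompts and responses; the template itself lives entirely in Terraform state, separate from the per-request Gemini safety settings.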
