
# Why Your AI Agent Skill Sucks
You wrote a skill prompt for your AI agent. It looks great: diagnosis protocol, safety rules, operational discipline. Your agent fixes broken deployments 4x faster. Ship it?

We tested role-based skills across 16 real infrastructure scenarios on 4 models. Here's what happened.

## The Setup

infra-bench runs AI agents against real Kubernetes clusters and Terraform projects. No mocks: kind clusters, real kubectl, real failures. The agent gets a task ("the deployment is broken"), tools (kubectl, terraform, helm), and a turn budget. Fix it or fail.

We tested two modes:

- **Baseline**: no skill; the model uses its own judgment
- **With skill**: a compact ~300-token role prompt (k8s-admin for Kubernetes, platform-eng for Terraform)

Same model, same scenarios, same cluster. The only difference: did we tell the agent how to think?

## The Results

### Kubernetes Scenarios (8 CKA/CKS scenarios, L2-L3)

| Model | Baseline | With k8s-admin skill | Delta |
|---|---|---|---|
| Claude Sonnet 4 | 8/8 | 8/8 | 0 |
| Gemini 2.5 Flash | 6/8 | 5/8 | -1 |
| GPT-4o | 4/6 | 4/8 | -2 |
Continue reading on Dev.to



