
# I Built a Local AI Terraform Generator and Tested It By Actually Deploying to AWS — Here Are the Results
## The Idea

Every time I needed a new AWS resource, I'd spend 20 minutes reading the Terraform docs just to get the syntax right for something I'd done before. I wanted to type plain English and get working HCL back. But I didn't just want to generate code; I wanted to know whether it actually deploys. So I tested every resource by running `terraform apply` against a real AWS account.

## How It Works

You describe infrastructure in plain English. The tool sends the description to a local Llama 3.2 model via Ollama, which returns four Terraform files. Those files are saved to a `generated/` folder, ready for `terraform init` and `terraform apply`.

Plain English → Python → Ollama (local) → Parse HCL → `main.tf` + `variables.tf` + `outputs.tf` + `tfvars.example`

The key piece is the prompt. Getting consistent, parseable HCL out of an LLM required a very specific structure:

```python
prompt = f"""
You are a Terraform/OpenTofu expert. Generate production-ready infrastructure code.

USER REQUEST: {description}
PROVIDER: {provider}

CRI
```
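To make the pipeline concrete, here is a minimal sketch of the flow described above: send a prompt to Ollama's local HTTP API, split the model output into the four expected files, and write them to `generated/`. The function names, the `### filename` labeling convention expected from the model, and the exact prompt wording are my assumptions, not the author's code; only the Ollama endpoint and the `llama3.2` model name come from the article.

```python
"""Sketch of the plain-English -> Ollama -> HCL pipeline (assumed structure)."""
import json
import re
import urllib.request
from pathlib import Path

# Default endpoint for a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def generate_hcl(description: str, provider: str = "aws") -> str:
    """Send the request to a local Llama 3.2 model and return the raw output."""
    prompt = (
        "You are a Terraform/OpenTofu expert. "
        "Generate production-ready infrastructure code.\n"
        f"USER REQUEST: {description}\n"
        f"PROVIDER: {provider}\n"
    )
    payload = json.dumps(
        {"model": "llama3.2", "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


def split_files(output: str) -> dict[str, str]:
    """Parse model output into named Terraform files.

    Assumes the model labels each file as '### filename' followed by a
    fenced ```hcl block, which the prompt would have to enforce.
    """
    files = {}
    pattern = r"### (\S+)\s*```(?:hcl|terraform)?\n(.*?)```"
    for name, body in re.findall(pattern, output, re.DOTALL):
        files[name] = body.strip() + "\n"
    return files


def save(files: dict[str, str], outdir: str = "generated") -> None:
    """Write each parsed file into the generated/ folder."""
    out = Path(outdir)
    out.mkdir(exist_ok=True)
    for name, body in files.items():
        (out / name).write_text(body)
```

From there, deploying is the usual `cd generated && terraform init && terraform apply`, which is exactly the step the author used to verify each generated resource.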
