
Why we built a programming language for AI prompts instead of another GUI
The first version of our AI product had this in the codebase:

```python
system_prompt = f"""
You are a customer support agent for {company}.
{premium_instructions if tier == 'premium' else free_instructions}
{billing_policy if issue_type in ['billing', 'refund'] else ''}
...12 more conditional blocks...
"""
```

This works until it doesn't. By month three we had:

- 4,000-token prompts being sent unconditionally
- Conditional logic scattered across Python files, config files, and Notion docs
- No way to test a prompt change without deploying the app
- No version history - just vibes and git blame

We looked at every existing tool. They all had one thing in common: they stored prompts as strings with variable substitution. That doesn't solve the problem. It just moves the string somewhere else.

The actual problem: prompts need real conditional logic

The LLM doesn't need to see instructions for premium users when the current user is free. It doesn't need the billing policy when the question is t
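The conditional assembly the article describes can be sketched as a small Python function that only includes the blocks a given request needs. This is an illustrative sketch, not the product's actual API; the function and block contents (`build_prompt`, the instruction strings) are assumptions for the example:

```python
# Minimal sketch of conditional prompt assembly; all names and
# instruction texts here are illustrative, not the real product's.
def build_prompt(company: str, tier: str, issue_type: str) -> str:
    premium_instructions = "Offer priority escalation and callbacks."
    free_instructions = "Suggest self-service docs first."
    billing_policy = "Refunds require a ticket ID and purchase date."

    # Start with the shared preamble, then append only what applies,
    # so a free user never pays tokens for premium-only instructions.
    sections = [f"You are a customer support agent for {company}."]
    sections.append(premium_instructions if tier == "premium" else free_instructions)
    if issue_type in ("billing", "refund"):
        sections.append(billing_policy)
    return "\n".join(sections)


print(build_prompt("Acme", "free", "billing"))
```

Keeping the selection logic in one function (or, as the article argues, in the prompt language itself) means the model only ever sees the sections relevant to the current request.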


