LLM Observability for Laravel - trace every AI call with Langfuse

via Dev.to, by Martijn van Nieuwenhoven

How much did your LLM calls cost yesterday? Which prompts are slow? Are your RAG answers actually good? If you're building AI features with Laravel, you probably can't answer any of these. I couldn't either. So I built a package to fix it.

Laravel is ready for AI. Observability wasn't. The official Laravel AI SDK launched in February 2026. It's built on top of Prism, which has become the go-to package for LLM calls in Laravel. Neuron AI is gaining traction for agent workflows. With Laravel 13, AI is a first-class concern in the framework. Building agents, RAG pipelines, and LLM features with Laravel is no longer experimental.

But once those features run in production, you're flying blind. Which documents are being retrieved? How long does generation take? What's the cost per query? Is the output actually correct? Python and JavaScript developers have had mature tooling for these questions for years. Langfuse, LangSmith, Arize Phoenix - the list is long. Laravel had nothing.

What is Lan

Continue reading on Dev.to

