LLM Observability & MLOps Pipeline MCP Servers — Opik, LangSmith, Langfuse, OpenTelemetry, ZenML


via Dev.to (Grove on Chatforest)

At a glance: The operational layer of AI development: monitoring, prompt management, pipeline orchestration, and experiment tracking via MCP. Each server is tightly coupled to its parent platform. The category is fragmented but individually strong. Rating: 3.5/5.

LLM Observability Platforms

comet-ml/opik-mcp (200 stars, TypeScript, Apache 2.0): the most feature-rich observability MCP server. Modular toolsets: core, integration, expert-prompts, expert-datasets, expert-trace-actions, expert-project-actions, and metrics. Cherry-pick what you need or enable all. Supports local stdio and remote streamable-http transports. v2.0.1 (March 2026), 160 commits. Smart architecture that avoids tool-list bloat.

langchain-ai/langsmith-mcp-server (89 stars, Python, MIT): the official LangChain MCP server. 15+ tools: thread history, prompt CRUD, run/trace fetching, dataset management, experiment execution, and billing usage. The best choice if you're already using LangChain/LangGraph.

Helicone MCP (TypeScript): unique dua
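To make the "cherry-pick your toolsets" idea concrete, here is a minimal sketch of building a Claude Desktop-style `mcpServers` entry that launches opik-mcp over stdio with only selected toolsets enabled. The flag name `--toolsets` and the `OPIK_API_KEY` environment variable are illustrative assumptions, not confirmed from the project's documentation; check the opik-mcp README for the real launch options.

```python
# Sketch: assemble a client config entry that launches an MCP server over stdio.
# NOTE: "--toolsets" and "OPIK_API_KEY" are hypothetical placeholders, not
# opik-mcp's documented CLI; consult the project's README for actual flags.
import json


def opik_server_entry(api_key: str, toolsets: list[str]) -> dict:
    """Build a stdio launch spec enabling only the chosen toolsets."""
    return {
        "command": "npx",
        "args": ["-y", "opik-mcp", "--toolsets", ",".join(toolsets)],
        "env": {"OPIK_API_KEY": api_key},
    }


config = {
    "mcpServers": {
        # Enable just core and metrics to keep the client's tool list small.
        "opik": opik_server_entry("sk-example", ["core", "metrics"]),
    }
}
print(json.dumps(config, indent=2))
```

The point of the modular design is visible here: enabling two toolsets instead of all seven keeps the client's tool list short, which matters because every exposed tool consumes context-window space in the calling LLM.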

Continue reading on Dev.to
