
Copilot CLI Weekly: MCP Servers Get LLM Access
MCP Sampling Lands in v1.0.13-0

The most significant change this week is buried in a prerelease tag: MCP servers can now request LLM inference. Version 1.0.13-0, released today, adds sampling support to the Model Context Protocol implementation. MCP servers can call the user's LLM through a permission prompt, eliminating the need for servers to maintain their own API subscriptions.

This is a shift in how MCP servers work. Before this, an MCP server was a tool provider: it exposed functions the agent could call, but it couldn't reason on its own. Now, with sampling, an MCP server can delegate reasoning back to the user's LLM mid-execution. A recipe generator can ask the LLM to format its output. A code analysis server can ask for natural-language summaries. The user approves each request via a review prompt, maintaining control over what their LLM processes.

The feature has been in the MCP spec since VS Code shipped it last summer, but adoption has been slow. Copilot CLI supporting it m
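To make the mechanism concrete, here is a minimal sketch of the JSON-RPC request a server sends when it wants the client's LLM to do some reasoning. The `sampling/createMessage` method and its field names come from the MCP specification; the `build_sampling_request` helper and the recipe prompt are invented for illustration and are not Copilot CLI's actual implementation.

```python
import json

def build_sampling_request(request_id: int, prompt: str, max_tokens: int = 256) -> str:
    """Build a sampling/createMessage request (MCP spec) as a JSON-RPC string.

    Illustrative helper: a real MCP server would send this over its
    transport via the SDK rather than constructing JSON by hand.
    """
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [
                {"role": "user", "content": {"type": "text", "text": prompt}}
            ],
            # The client (here, Copilot CLI) chooses the actual model and
            # shows the user a review prompt; the server only states preferences.
            "modelPreferences": {"intelligencePriority": 0.5},
            "maxTokens": max_tokens,
        },
    }
    return json.dumps(request)

# A recipe-generator server delegating formatting work to the user's LLM:
wire = build_sampling_request(1, "Format this recipe as a numbered list.")
parsed = json.loads(wire)
print(parsed["method"])  # sampling/createMessage
```

The key design point is that the server never holds API credentials: the request travels to the client, the user approves or rejects it, and only then does the client forward the prompt to its own model.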
Continue reading on Dev.to

