
# Use Cursor with LM Studio
Elevate Your Coding: Integrating Cursor with Local LLMs (LM Studio) and GitHub Copilot

## Prerequisites

Before you begin, make sure you have:

- Cursor installed
- LM Studio installed (see lmstudio.ai)
- ngrok (optional, discussed later)
- A GitHub Copilot subscription (optional but recommended)
- One or more local models downloaded (e.g. Gemma2, Llama3, DeepSeekCoder)

## Part 1: Setting Up the Engine – LM Studio & ngrok

The goal of this section is to run a local LLM and expose it as an API that Cursor can consume.

### Install LM Studio

1. Download the appropriate package for your OS.
2. Run the installer and launch the application.

*Example model selector inside LM Studio.*

### Choose and Download a Model

Use the search bar to find a model. Some good starting points:

- Llama3 (8B) or Gemma2 (9B) for general use
- DeepSeekCoder for coding-heavy tasks
- GLM4 for modern language capabilities

Click **Download** and wait for the model to finish downloading.

### Start the Local Server

Switch to the Local Server tab (
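Once the local server is running, you can sanity-check it from a terminal before pointing Cursor at it. This is a sketch assuming LM Studio's default port (1234) and its OpenAI-compatible endpoints; adjust the port and model name to match your setup.

```shell
# List the models the LM Studio server currently serves
# (default address is http://localhost:1234).
curl http://localhost:1234/v1/models

# Send a quick chat completion to the loaded model. "local-model" is a
# placeholder; LM Studio routes the request to whichever model is loaded.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Say hello in one word."}]
  }'

# Optional: expose the server publicly with ngrok so a remote client such
# as Cursor can reach it. ngrok prints a public https URL that forwards
# to your local port.
ngrok http 1234
```

The public ngrok URL (plus the `/v1` suffix) is what you would later enter as a custom API base URL in a client, instead of `http://localhost:1234/v1`.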
Continue reading on Dev.to.



