Set up LM Studio on Windows

This guide walks you through the main steps of setting up LM Studio for use in GPT for Work on Windows.

This guide assumes that you use GPT for Work on the same machine that hosts LM Studio.

To set up LM Studio on Windows:

  1. Install LM Studio and download a model.

  2. Start and configure the LM Studio server.

Install LM Studio and download a model

  1. Download and run the Windows installer. Follow the on-screen instructions to complete the installation.

    The installer sets LM Studio to start automatically as a background service on system boot.

  2. Run LM Studio.

  3. On the welcome screen, click Skip onboarding.

  4. In LM Studio, in the sidebar, select Discover.

  5. Find and select the model you want to use, and click Download. For example, to get started with a small model, select Llama 3.2 3B.

    Selecting a model to download in LM Studio

    After the download completes, the model is available for prompting in LM Studio.

You have installed LM Studio and downloaded your first model. For more information about working with models, see the LM Studio documentation.

Start and configure the LM Studio server

  1. In LM Studio, in the sidebar, select Developer.

  2. Click the Status toggle to change the status from Stopped to Running. You have started the LM Studio server.

    Starting the LM Studio server

  3. Click Settings.

    Open LM Studio server settings

  4. Click the Enable CORS toggle to enable the setting.

    Enable CORS for the LM Studio server

    Note: By default, the LM Studio server only accepts same-origin requests. Since GPT for Work always has a different origin from the LM Studio server, you must enable cross-origin resource sharing (CORS) for the server.

  5. To verify that the /v1/models endpoint of the LM Studio server works, open http://127.0.0.1:1234/v1/models in your browser.

    GPT for Work uses the endpoint to fetch a list of models installed on the server. If the endpoint works, the server returns a JSON object with a data property listing all currently installed models:

    LM Studio /v1/models endpoint over HTTP in the browser
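
    The response follows the OpenAI-compatible models list format. The sketch below parses an example response to show the shape GPT for Work relies on; the model id and the exact extra fields are illustrative and will vary with the models you have installed and your LM Studio version.

    ```python
    import json

    # Example response body from http://127.0.0.1:1234/v1/models.
    # The model id below is a placeholder; your server returns the
    # ids of the models you actually downloaded.
    sample_response = """
    {
      "object": "list",
      "data": [
        {"id": "llama-3.2-3b-instruct", "object": "model"}
      ]
    }
    """

    # GPT for Work reads the "data" property to list available models.
    models = json.loads(sample_response)["data"]
    model_ids = [m["id"] for m in models]
    print(model_ids)  # → ['llama-3.2-3b-instruct']
    ```

    If the endpoint returns an empty data list, the server is running but no models are downloaded yet.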

You have started and configured the LM Studio server.

What's next

You have completed the setup required to access LM Studio from GPT for Work on the same machine. You can now set http://127.0.0.1:1234 as the local server URL in GPT for Work.
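
    If you want to double-check the server outside GPT for Work, the sketch below builds a request against the OpenAI-compatible /v1/chat/completions endpoint that LM Studio exposes at the same base URL. The model id is a placeholder; use an id returned by /v1/models.

    ```python
    import json
    import urllib.request

    # The same base URL you set in GPT for Work.
    BASE_URL = "http://127.0.0.1:1234"

    payload = {
        # Placeholder model id; substitute one listed by /v1/models.
        "model": "llama-3.2-3b-instruct",
        "messages": [{"role": "user", "content": "Say hello in five words."}],
    }

    request = urllib.request.Request(
        BASE_URL + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    # With the LM Studio server running, send the request like this:
    # with urllib.request.urlopen(request) as resp:
    #     reply = json.loads(resp.read())["choices"][0]["message"]["content"]
    print(request.full_url)  # → http://127.0.0.1:1234/v1/chat/completions
    ```

    GPT for Work sends equivalent requests for you; this is only a way to confirm the server responds before pointing the add-in at it.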