"can I load flux dev in LM studio?"

LM Studio supports loading FLUX 1‑dev (or its GGUF‑quantized variant) through its Local Server, enabling text‑to‑image generation via an OpenAI‑compatible API.

Quick Facts
  • Use the Local Server tab to add the GGUF model file.
  • Prefer the Q8 GGUF checkpoint (≈ 13 GB) for most GPUs.
  • Set a high token limit (e.g., 1024) to receive the full image payload.
AI Consensus
Models Agreed
  • Yes, FLUX 1‑dev can be loaded in LM Studio via the Local Server feature.
  • The recommended approach is to use a GGUF Q8 quantized checkpoint for compatibility and memory efficiency.
  • After loading, you can generate images either from the built‑in chat UI or by calling the exposed OpenAI‑compatible API endpoint.
How to Load FLUX 1‑dev in LM Studio

  1. Install / update LM Studio

    • Download the latest LM Studio release (≥ 0.3.14) from the official site and launch it.
    • Newer builds include built‑in GGUF support and the Local Server mode.
  2. Enable the Local Server

    • Open the Developer tab → toggle “Local Server” on.
    • This exposes an OpenAI‑compatible endpoint (e.g., http://localhost:1234/v1).
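Before loading the model, it is worth confirming that the Local Server is actually reachable. A minimal standard-library sketch, assuming the default `http://localhost:1234/v1` base URL shown above and the OpenAI-style `/v1/models` listing route:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default Local Server address

def models_url(base: str = BASE_URL) -> str:
    """Build the OpenAI-compatible model-listing endpoint from the base URL."""
    return base.rstrip("/") + "/models"

def list_models(base: str = BASE_URL) -> list:
    """Fetch the models the Local Server currently exposes."""
    with urllib.request.urlopen(models_url(base)) as resp:
        return json.loads(resp.read())["data"]

if __name__ == "__main__":
    for model in list_models():
        print(model["id"])
```

If the server is off, the request fails with a connection error, which is a quicker diagnosis than a silent failure later in the chat UI.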
  3. Obtain a FLUX 1‑dev checkpoint

    • Preferred: FLUX 1‑dev GGUF Q8 (≈ 13 GB) – see the guide on quantized GGUF loading.
    • Alternative: the prompt‑crafter‑for‑flux.1‑dev repository, which bundles ready‑to‑use prompts.
  4. Add the model to LM Studio

    • In the Models pane click “Add Model”, browse to the downloaded flux1-dev-q8.gguf file, and give it a name (e.g., “FLUX‑dev”).
    • LM Studio will index the model and list it for selection.
  5. Configure model settings

    • Increase max tokens (e.g., 1024) so the image payload isn’t truncated.
    • Enable vision support if you plan to use text‑to‑image or in‑painting features.
  6. Test the model

    • In the chat UI, send a prompt such as:
      A futuristic cityscape at sunset, photorealistic, 8K
      
    • LM Studio returns a base‑64‑encoded PNG (or a file path) that can be viewed directly.
  7. Optional: Call the model from code

    • Use any HTTP client to POST to the server's OpenAI‑compatible route (e.g., http://localhost:1234/v1/completions) with JSON:
      {
        "model": "FLUX-dev",
        "prompt": "A cyberpunk street market, neon lights"
      }
      
    • The response contains the image data; decode or save it as needed. See the tool‑use docs for details.
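The same request can be issued from Python with only the standard library. A hedged sketch — the `/v1/completions` route and the response shape are assumptions based on the OpenAI‑compatible API described above, and the 1024-token limit mirrors the setting from step 5:

```python
import json
import urllib.request

ENDPOINT = "http://localhost:1234/v1/completions"  # assumed OpenAI-style route

def build_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble the JSON body from the example above, plus a generous token limit."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def generate(model: str, prompt: str) -> dict:
    """POST the prompt to the Local Server and return the parsed JSON reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_request(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The model name passed to `build_request` must match the name given when the model was added (e.g., "FLUX-dev" from step 4).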

Common Pitfalls & Fixes

  • Memory constraints – The full model needs > 24 GB VRAM; use the Q8 quantized file or run on CPU (slower).
  • Version mismatch – Older LM Studio versions cannot load GGUF files; updating resolves the issue.
  • Model not appearing – Ensure the file extension is .gguf and the folder is not nested; LM Studio only scans the top‑level directory you select.
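The last pitfall is easy to check up front: since LM Studio scans only the top level of the folder you select, a non-recursive glob (sketched below) shows exactly which .gguf files it will pick up:

```python
from pathlib import Path

def visible_gguf_files(folder: str) -> list:
    """Return top-level .gguf filenames; files in nested subfolders are ignored,
    mirroring how LM Studio scans only the selected directory itself."""
    return sorted(p.name for p in Path(folder).glob("*.gguf"))
```

If your checkpoint is missing from the returned list, move it out of its subfolder into the directory you selected.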