What Is Stable Diffusion and Why Run It Locally?
Stable Diffusion is a free, open-source AI image generator that runs entirely on your own computer. Unlike cloud-based tools like Midjourney, DALL-E, or Adobe Firefly, running Stable Diffusion locally means:
- No content filtering — full creative freedom with zero prompt restrictions
- No subscriptions — completely free after initial setup
- No usage limits — generate as many images as you want, 24/7
- Total privacy — your prompts and images never leave your machine
- Offline capable — works without an internet connection once installed
This guide covers everything you need to know: the two best interfaces (Automatic1111 and Forge), how to install them on Windows and Linux, how to enter prompts, which models work with your GPU, and how to avoid the content restrictions that cloud tools enforce.
Automatic1111 vs Forge: Which Interface Should You Use?
Stable Diffusion is the AI engine that generates images. To actually use it, you need a web interface (UI) that lets you type prompts and see results. The two most popular options are:
Automatic1111 (A1111)
The original and most widely used Stable Diffusion interface. It has the largest community, the most extensions, and years of documentation and tutorials.
Forge (SD WebUI Forge)
A newer fork of Automatic1111 created by lllyasviel (the developer behind ControlNet). It looks and works nearly identically to A1111 but is significantly faster and uses less GPU memory.
| Feature | Automatic1111 | Forge |
|---|---|---|
| Content Filtering | None | None |
| Prompt Restrictions | None | None |
| Generation Speed | Baseline | 30-75% faster |
| VRAM Usage | Higher | Significantly lower |
| UI/Interface | Standard tabbed layout | Nearly identical to A1111 |
| Extension Support | Huge library (thousands) | Compatible with most A1111 extensions |
| Community Size | Largest | Growing fast |
| Best For | Maximum compatibility, most tutorials | Speed, low VRAM GPUs (6-8GB) |
Our recommendation: If you’re starting fresh, go with Forge. You get the same experience but faster performance and lower memory requirements. If you already have A1111 set up with extensions you rely on, there’s no rush to switch.
Content Filtering: Local vs Cloud Tools
One of the biggest reasons people run Stable Diffusion locally is creative freedom. Here’s how content filtering compares across platforms:
Cloud Tools (Filtered)
- Midjourney — Heavy content filtering, rejects many prompts, bans accounts for violations
- DALL-E 3 / ChatGPT — Strict filters, blocks a wide range of prompt keywords
- Adobe Firefly — Strict commercial-safe filtering
- Bing Image Creator — Same DALL-E filters as ChatGPT
- Leonardo AI — Has a content filter toggle but still restricts certain content
Local Tools (Unfiltered)
- Automatic1111 — No content filtering whatsoever
- Forge — No content filtering whatsoever
- ComfyUI — No content filtering whatsoever
Why is there no filter? These are open-source tools that run 100% on your hardware. There is no company server in the middle to enforce rules. You download the model, you run it locally, and nothing is censored, blocked, or reported.
Note: Some models shipped by Stability AI included a basic “safety checker” module, but it can be easily disabled in settings. Most community models on platforms like CivitAI do not include any safety checker at all.
How to Disable the Safety Checker (If Present)
In Automatic1111 or Forge, if you encounter a black image with a safety warning:
- Go to Settings (tab at the top)
- Search for “NSFW” or “safety checker”
- Uncheck or disable the safety checker option
- Click Apply Settings and Reload UI
Note: despite what some guides suggest, the --disable-safe-unpickle flag does not control content filtering — it only skips pickle security checks when loading model files. Use the Settings toggle above instead; if you still get black images, try the --no-half-vae launch flag, which fixes a common VAE precision issue.
Most users never encounter this since community models don’t include a safety checker by default.
How to Check If You Already Have Stable Diffusion Installed
On Windows
Open PowerShell or Command Prompt and run:
dir C:\stable-diffusion-webui
dir %USERPROFILE%\stable-diffusion-webui
For Forge:
dir C:\stable-diffusion-webui-forge
dir %USERPROFILE%\stable-diffusion-webui-forge
If the folder exists, you have it installed.
On Linux
Open a terminal and run:
ls ~/stable-diffusion-webui
ls ~/stable-diffusion-webui-forge
Or search for it:
find ~ -maxdepth 3 -name "webui.sh" 2>/dev/null
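The checks above can be combined into one small loop. A minimal sketch that reports the status of both default install locations (adjust the paths if you cloned somewhere else):

```shell
# Check both default install locations and report what was found.
STATUS=""
for d in "$HOME/stable-diffusion-webui" "$HOME/stable-diffusion-webui-forge"; do
  if [ -d "$d" ]; then
    STATUS="$STATUS$d: installed"$'\n'
  else
    STATUS="$STATUS$d: not found"$'\n'
  fi
done
printf '%s' "$STATUS"
```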
Check Your Version
Once you find the installation folder:
cd ~/stable-diffusion-webui
git log --oneline -1
git describe --tags
This shows your exact version and commit. You can also see the version in the bottom left corner of the WebUI when it’s running in your browser.
Installation Guide: Windows
Installing Automatic1111 on Windows
- Install Python 3.10.x from python.org (check “Add to PATH” during install)
- Install Git from git-scm.com
- Clone the repository:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
- Launch:
webui-user.bat
- Wait for the first-time setup (downloads dependencies and a base model automatically)
- Open your browser to http://localhost:7860
Installing Forge on Windows
- Download the one-click installer from the Forge GitHub releases page (search “stable-diffusion-webui-forge” on GitHub)
- Extract the zip file to a folder
- Run:
run.bat
- Open your browser to http://localhost:7860
Installation Guide: Linux
Installing Automatic1111 on Linux
sudo apt install python3 python3-venv python3-pip git -y
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
bash webui.sh
The script handles everything — creates a virtual environment, installs PyTorch, and downloads a base model. First launch takes 10-20 minutes.
Installing Forge on Linux
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
cd stable-diffusion-webui-forge
bash webui.sh
Both interfaces open at http://localhost:7860 by default. To access the UI from another device on your local network, launch with --listen; to get a temporary public Gradio link, add --share.
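Rather than typing launch flags every time, you can set them permanently in the launcher script. A minimal sketch of webui-user.sh on Linux — the flags shown are examples, not requirements (on Windows, edit webui-user.bat instead, which uses set COMMANDLINE_ARGS=...):

```shell
# webui-user.sh — persistent launch flags for A1111/Forge on Linux.
# Example flags only; pick the ones your hardware and setup need.
export COMMANDLINE_ARGS="--xformers --listen"
```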
How to Enter Prompts
Once the WebUI is running in your browser, generating images is simple:
Step 1: Write Your Positive Prompt
The top text box is your positive prompt — describe what you want to see:
a cyberpunk city at night, neon lights reflecting on wet streets, rain, photorealistic, highly detailed, 8k resolution, cinematic lighting
Step 2: Write Your Negative Prompt
The bottom text box is your negative prompt — describe what you do NOT want:
blurry, low quality, deformed, extra fingers, mutated hands, watermark, text, logo, bad anatomy, ugly, distorted
Step 3: Adjust Settings
- Sampling Steps: 20-30 (more = more detail, but slower)
- CFG Scale: 7-12 (higher = follows prompt more strictly)
- Sampler: DPM++ 2M Karras (great default for most models)
- Resolution: 512×512 for SD 1.5, 1024×1024 for SDXL models
Step 4: Click Generate
Hit the Generate button and wait a few seconds to a minute depending on your GPU. Your image appears in the output panel.
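The four steps above can also be scripted: both A1111 and Forge expose a JSON API when launched with the --api flag. The sketch below just composes a txt2img request payload (parameter names follow the /sdapi/v1/txt2img endpoint) and shows the curl command that would send it:

```shell
# Compose a txt2img request for the WebUI API (requires launching with --api).
PROMPT="a cyberpunk city at night, neon lights reflecting on wet streets, photorealistic"
NEGATIVE="blurry, low quality, watermark, text, bad anatomy"
PAYLOAD=$(cat <<EOF
{
  "prompt": "$PROMPT",
  "negative_prompt": "$NEGATIVE",
  "steps": 25,
  "cfg_scale": 7,
  "sampler_name": "DPM++ 2M Karras",
  "width": 512,
  "height": 512
}
EOF
)
echo "$PAYLOAD"
# With the WebUI running, send it like this (the response contains base64-encoded images):
# curl -s -X POST http://localhost:7860/sdapi/v1/txt2img \
#   -H "Content-Type: application/json" -d "$PAYLOAD"
```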
Prompt Tips for Better Results
- Be specific: “A golden retriever playing in autumn leaves, soft afternoon sunlight, Canon EOS R5” beats “dog in park”
- Add quality boosters: Include terms like “masterpiece, best quality, highly detailed, professional” in your positive prompt
- Use emphasis: Put parentheses around important words to increase their weight: (neon lights:1.3) makes neon lights 30% more prominent
- Specify art styles: “oil painting,” “watercolor,” “anime,” “photorealistic,” “3D render,” “pencil sketch”
- Describe lighting: “golden hour,” “dramatic shadows,” “studio lighting,” “backlit,” “volumetric fog”
- Set the camera: “close-up portrait,” “wide angle landscape,” “bird’s eye view,” “macro photography”
Models by GPU VRAM: What Can Your Hardware Run?
Different Stable Diffusion models require different amounts of GPU memory (VRAM). Here’s a practical guide to matching models with your hardware:
4GB VRAM (GTX 1650, GTX 1050 Ti)
- Recommended UI: Forge (critical for low VRAM — A1111 may struggle)
- Best models: SD 1.5 based models at 512×512
- Popular models: Realistic Vision 1.5, DreamShaper 1.5, Anything V5 (anime)
- Launch flag: --lowvram --xformers
- What to expect: Functional but slow. Stick to 512×512 resolution. Forge makes this tier much more usable.
6GB VRAM (RTX 2060, RTX 3060 Mobile, GTX 1660)
- Recommended UI: Forge or A1111
- Best models: SD 1.5 (comfortable), SDXL (possible with Forge)
- Popular models: Realistic Vision, DreamShaper XL, Juggernaut XL
- Launch flag: --medvram --xformers
- What to expect: SD 1.5 runs great. SDXL works in Forge with optimizations. Can do 768×768 comfortably.
8GB VRAM (RTX 3060 Ti, RTX 3070, RTX 4060)
- Recommended UI: Either A1111 or Forge
- Best models: SD 1.5 (fast), SDXL (comfortable), Flux Schnell (with Forge)
- Popular models: Juggernaut XL, RealVisXL, SDXL base, Flux Schnell
- What to expect: The sweet spot for most users. SDXL at 1024×1024 runs well. Can use ControlNet and other advanced features.
12GB VRAM (RTX 3060 12GB, RTX 4070, RTX 4070 Ti)
- Recommended UI: Either — both run smoothly
- Best models: SDXL (fast), Flux Dev, Flux Schnell (fast), large LoRA stacks
- Popular models: Flux Dev, Juggernaut XL, RealVisXL, Pony Diffusion XL
- What to expect: Runs everything comfortably. Can stack multiple LoRAs, use ControlNet, and generate at high resolutions. Flux Dev works great here.
16-24GB VRAM (RTX 4080, RTX 4090, RTX 3090)
- Recommended UI: Either — go wild
- Best models: Everything. Flux Pro, Flux Dev (fast), SDXL (instant), large batches
- What to expect: No limitations. Generate in seconds. Run multiple models. Use every feature at maximum quality.
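The NVIDIA tiers above can be mapped to launch flags automatically. A hedged sketch — the thresholds are rules of thumb from this guide's tiers, not official cutoffs:

```shell
# Suggest memory flags based on total VRAM reported by nvidia-smi (NVIDIA only).
VRAM_MB=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits 2>/dev/null | head -n1 | tr -d ' ')
VRAM_MB=${VRAM_MB:-0}   # falls back to 0 if nvidia-smi is unavailable
if   [ "$VRAM_MB" -lt 6000 ];  then FLAGS="--lowvram --xformers"
elif [ "$VRAM_MB" -lt 8000 ];  then FLAGS="--medvram --xformers"
elif [ "$VRAM_MB" -lt 12000 ]; then FLAGS="--xformers"
else FLAGS=""   # 12GB+: no memory flags needed
fi
echo "Suggested flags: $FLAGS"
```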
AMD GPU Users
- Linux: AMD GPUs work via ROCm. Install the ROCm drivers and let webui.sh install the ROCm build of PyTorch — no special launch flag is needed
- Windows: Limited support via DirectML through the community DirectML fork of the WebUI, launched with --use-directml
- Performance: Expect roughly 50-70% of equivalent NVIDIA performance
Mac (Apple Silicon) Users
- M1/M2/M3/M4 chips work with Stable Diffusion via the MPS backend
- Launch with bash webui.sh — the MPS backend is detected automatically, no special flag required
- 8GB unified memory handles SD 1.5 well. 16GB+ handles SDXL.
- Slower than equivalent NVIDIA GPUs but fully functional
Where to Download Models
Models are the “brains” of Stable Diffusion. Different models produce different styles:
- CivitAI.com — The largest model library. Browse by category, style, and popularity. Community reviews and example images for every model.
- Hugging Face — The official home for many models. More technical but reliable.
How to Install a Model
- Download the model file (.safetensors format — always prefer this over .ckpt for security)
- Place it in your stable-diffusion-webui/models/Stable-diffusion/ folder
- Click the refresh button next to the model dropdown in the WebUI
- Select your new model from the dropdown
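The install steps above can be sketched as a short script. The download URL below is a placeholder — paste the real link from CivitAI or Hugging Face before uncommenting the wget line:

```shell
# Prepare the models folder and show where a downloaded checkpoint goes.
MODEL_URL="https://example.com/model.safetensors"   # placeholder — use the real CivitAI/HF link
MODEL_DIR="$HOME/stable-diffusion-webui/models/Stable-diffusion"
mkdir -p "$MODEL_DIR"
# Uncomment once MODEL_URL points at a real .safetensors file:
# wget -O "$MODEL_DIR/model.safetensors" "$MODEL_URL"
echo "Models folder ready: $MODEL_DIR"
```

After the file is in place, hit the refresh button next to the model dropdown and select it.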
Recommended Starter Models
| Model | Style | Base | Min VRAM |
|---|---|---|---|
| Realistic Vision | Photorealistic | SD 1.5 | 4GB |
| DreamShaper | Artistic/Fantasy | SD 1.5 | 4GB |
| Anything V5 | Anime/Illustration | SD 1.5 | 4GB |
| Juggernaut XL | Photorealistic | SDXL | 6GB |
| RealVisXL | Photorealistic | SDXL | 6GB |
| DreamShaper XL | Artistic | SDXL | 6GB |
| Pony Diffusion XL | Versatile/Stylized | SDXL | 8GB |
| Flux Schnell | High quality general | Flux | 8GB |
| Flux Dev | Highest quality | Flux | 12GB |
Troubleshooting Common Issues
“CUDA out of memory” Error
- Switch to Forge (uses less VRAM)
- Lower your resolution (try 512×512)
- Add --medvram or --lowvram to your launch flags
- Close other GPU-intensive apps (games, video editors, other AI tools)
Black Images
- The safety checker may be enabled — disable it in Settings
- Try adding --no-half or --no-half-vae to your launch flags (fixes precision issues on some GPUs; --no-half-vae specifically targets black images)
- Switch to a different sampler (try “Euler a” or “DPM++ 2M Karras”)
Slow Generation
- Enable xformers: add --xformers to your launch flags
- Switch to Forge for a 30-75% speed improvement
- Reduce steps to 20 (diminishing returns after 30)
- Generate at 512×512 and upscale after
WebUI Won’t Start
- Make sure Python 3.10.x is installed (not 3.12+, which can cause issues)
- On Linux, ensure NVIDIA drivers and CUDA are installed (run nvidia-smi to verify)
- Delete the venv folder and relaunch to force a fresh install of dependencies
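The venv reset from the last tip can be scripted safely — a minimal sketch that only removes the venv if it actually exists (adjust WEBUI_DIR for a Forge install):

```shell
# Force a clean dependency reinstall by removing the virtual environment.
WEBUI_DIR="$HOME/stable-diffusion-webui"   # use stable-diffusion-webui-forge for Forge
if [ -d "$WEBUI_DIR/venv" ]; then
  rm -rf "$WEBUI_DIR/venv"
  MSG="venv removed — the next run of webui.sh will rebuild it"
else
  MSG="no venv found at $WEBUI_DIR/venv — nothing to reset"
fi
echo "$MSG"
```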
Bottom Line
Running Stable Diffusion locally with Automatic1111 or Forge gives you the most powerful, unrestricted AI image generation available. No content filters, no subscriptions, no limits. Forge is the better choice for most people in 2026 — it’s faster, uses less memory, and works identically to A1111. Match your model choice to your GPU’s VRAM, start with the recommended models above, and you’ll be generating professional-quality AI images in minutes.