Stable Diffusion WebUI Installation on Linux

Stable Diffusion WebUI (AUTOMATIC1111) is the most popular interface for AI image generation, and can be self-hosted on a Linux server with an NVIDIA GPU for full control over models, extensions, and output. This guide covers installing all dependencies, configuring the WebUI for GPU use, downloading models, and setting up remote access via Nginx.

Prerequisites

  • Ubuntu 20.04 or 22.04 (recommended)
  • NVIDIA GPU with at least 4GB VRAM (8GB+ recommended)
  • NVIDIA drivers installed (nvidia-smi must work)
  • At least 20GB free disk space (models are 2-7GB each)
  • Python 3.10 recommended
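
These prerequisites can be sanity-checked up front. The script below is an illustrative sketch (the `check` helper and PASS/WARN labels are my own, not part of the WebUI):

```shell
#!/bin/bash
# Pre-flight check for the prerequisites above. Prints PASS or WARN per
# item; a WARN means fix that item before continuing.

check() {  # check <label> <command...>
  local label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $label"
  else
    echo "WARN: $label"
  fi
}

check "NVIDIA driver responds (nvidia-smi)" nvidia-smi
check "Python 3 available" python3 --version
check "git available" git --version

# Models are 2-7GB each; require at least 20GB free under $HOME
free_kb=$(df -k --output=avail "$HOME" | tail -1 | tr -d ' ')
if [ "$free_kb" -ge $((20 * 1024 * 1024)) ]; then
  echo "PASS: at least 20GB free disk space"
else
  echo "WARN: less than 20GB free disk space"
fi
```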

Installing System Dependencies

# Update system packages
sudo apt-get update && sudo apt-get upgrade -y

# Install Python and build dependencies
sudo apt-get install -y \
  python3 python3-pip python3-venv python3-dev \
  git wget curl \
  build-essential libgl1 libglib2.0-0 \
  libsm6 libxext6 libxrender-dev \
  google-perftools  # For memory optimization

# Install Python 3.10 if it is not the default (it is on Ubuntu 22.04;
# Ubuntu 20.04 needs the deadsnakes PPA first)
sudo add-apt-repository -y ppa:deadsnakes/ppa   # Ubuntu 20.04 only
sudo apt-get install -y python3.10 python3.10-venv python3.10-dev

Installing Stable Diffusion WebUI

# Create a dedicated user (optional but recommended)
sudo useradd -m -s /bin/bash sdwebui
sudo -u sdwebui -i

# Clone the AUTOMATIC1111 repository
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# The webui.sh script handles virtualenv creation automatically
# No manual pip install needed — it installs everything on first launch

Directory Structure

stable-diffusion-webui/
├── models/
│   ├── Stable-diffusion/   <- Checkpoint models (.safetensors, .ckpt)
│   ├── Lora/               <- LoRA adapters
│   ├── VAE/                <- VAE models
│   └── embeddings/         <- Textual inversions
├── outputs/                <- Generated images
├── extensions/             <- Installed extensions
└── webui.sh                <- Launch script

Downloading Models

cd ~/stable-diffusion-webui/models/Stable-diffusion

# Download SDXL 1.0 base model (6.5GB)
wget "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"

# Download Realistic Vision v5 (popular photo-realistic model, 2GB)
wget "https://huggingface.co/SG161222/Realistic_Vision_V5.1_noVAE/resolve/main/Realistic_Vision_V5.1_fp16-no-ema.safetensors"

# Using the Hugging Face CLI for easier downloads — name the checkpoint
# file explicitly, or the entire repository (diffusers format) is fetched
pip install huggingface_hub
huggingface-cli download stabilityai/stable-diffusion-2-1 \
  v2-1_768-ema-pruned.safetensors \
  --local-dir ~/stable-diffusion-webui/models/Stable-diffusion/

# Download a VAE (improves image quality for some models)
cd ~/stable-diffusion-webui/models/VAE
wget "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors"

Launching the WebUI

cd ~/stable-diffusion-webui

# Basic launch — listens on localhost:7860
./webui.sh

# Launch with remote access enabled
./webui.sh --listen

# Launch with specific settings. Note that comments cannot follow the
# line-continuation backslashes, so the flags are explained here:
#   --xformers           memory-efficient attention (faster)
#   --opt-sdp-attention  PyTorch 2.0 scaled dot product attention
#   --no-half-vae        fixes NaN/black-image issues with some VAEs
./webui.sh \
  --listen \
  --port 7860 \
  --xformers \
  --opt-sdp-attention \
  --no-half-vae

On first launch, it automatically creates a Python virtual environment and installs all dependencies (~5-10 minutes). Subsequent launches are fast.
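
To confirm the first launch completed, check that the virtual environment exists and that PyTorch can see the GPU. The path assumes the default clone location used in this guide:

```shell
# Verify the auto-created venv and CUDA visibility after first launch
WEBUI_DIR="$HOME/stable-diffusion-webui"
if [ -x "$WEBUI_DIR/venv/bin/python" ]; then
  "$WEBUI_DIR/venv/bin/python" -c \
    "import torch; print('torch', torch.__version__, 'CUDA:', torch.cuda.is_available())"
  status="venv ready"
else
  status="venv missing - run ./webui.sh once to create it"
fi
echo "$status"
```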

webui-user.sh for Persistent Settings

cat > ~/stable-diffusion-webui/webui-user.sh << 'EOF'
#!/bin/bash

# GPU optimization flags
export COMMANDLINE_ARGS="--listen --port 7860 --xformers --opt-sdp-attention"

# Use specific Python version
#export python_cmd="python3.10"

# Disable VRAM-consuming features if running low
#export COMMANDLINE_ARGS="--listen --medvram --no-half-vae"

# For very low VRAM (4GB):
#export COMMANDLINE_ARGS="--listen --lowvram"
EOF

Remote Access with Nginx

sudo tee /etc/nginx/sites-available/sdwebui << 'EOF'
server {
    listen 80;
    server_name sd.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name sd.example.com;

    ssl_certificate /etc/letsencrypt/live/sd.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sd.example.com/privkey.pem;

    # Protect with basic auth
    auth_basic "Stable Diffusion";
    auth_basic_user_file /etc/nginx/.sdwebui-htpasswd;

    location / {
        proxy_pass http://127.0.0.1:7860;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_read_timeout 300;     # Long timeout for image generation
        client_max_body_size 50M;
    }
}
EOF

sudo ln -s /etc/nginx/sites-available/sdwebui /etc/nginx/sites-enabled/

# Create password file
sudo apt-get install -y apache2-utils
sudo htpasswd -c /etc/nginx/.sdwebui-htpasswd your-username

sudo nginx -t && sudo systemctl reload nginx
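
After the reload, both the proxy and the basic-auth gate can be verified from the command line. The hostname and credentials below are the placeholder values used in the config above:

```shell
# Expect 401 without credentials and 200 with them (000 means the
# hostname did not resolve or the server is unreachable)
code_noauth=$(curl -s -o /dev/null -w "%{http_code}" https://sd.example.com/ || true)
code_auth=$(curl -s -o /dev/null -w "%{http_code}" \
  -u your-username:your-password https://sd.example.com/ || true)
echo "without auth: $code_noauth (expect 401)"
echo "with auth:    $code_auth (expect 200)"
```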

Running as a Systemd Service

sudo tee /etc/systemd/system/sdwebui.service << 'EOF'
[Unit]
Description=Stable Diffusion WebUI
After=network.target

[Service]
Type=simple
User=sdwebui
WorkingDirectory=/home/sdwebui/stable-diffusion-webui
ExecStart=/home/sdwebui/stable-diffusion-webui/webui.sh
Restart=on-failure
RestartSec=15

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable sdwebui
sudo systemctl start sdwebui

sudo journalctl -u sdwebui -f
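
Once the journal shows Gradio's startup message, the UI should answer on the local port. A quick probe:

```shell
# Probe the local port; the service typically needs a minute or two on
# first start while the model loads into VRAM
if curl -sf http://127.0.0.1:7860/ >/dev/null 2>&1; then
  state="up"
else
  state="not reachable yet"
fi
echo "WebUI is $state"
```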

Performance and Configuration

# Check GPU memory usage during generation
watch -n 1 nvidia-smi

# For 8GB VRAM — balanced settings
COMMANDLINE_ARGS="--listen --xformers"

# For 6GB VRAM — save VRAM
COMMANDLINE_ARGS="--listen --xformers --medvram"

# For 4GB VRAM — maximum savings (slower)
COMMANDLINE_ARGS="--listen --lowvram"

# For 24GB VRAM — full fp32 precision (avoids fp16 edge cases; rarely
# improves output visibly)
COMMANDLINE_ARGS="--listen --xformers --no-half --precision full"

Key settings in the WebUI:

Setting       Impact
xformers      Reduces VRAM use by 30-40%; slightly faster
Batch size    Increases VRAM use; generates multiple images in parallel
Steps         Higher = better quality but slower (20-30 is a sweet spot)
CFG Scale     7-12 is typical; controls prompt adherence
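
The same parameters are exposed through the WebUI's REST API when you add --api to COMMANDLINE_ARGS. A sketch of a txt2img request (the prompt and parameter values are illustrative):

```shell
# Build a txt2img payload using the settings above, then POST it to the
# API. Interactive API docs are served at http://127.0.0.1:7860/docs
# when the WebUI runs with --api.
cat > /tmp/txt2img.json << 'EOF'
{
  "prompt": "a photograph of a red fox in a forest",
  "negative_prompt": "blurry, low quality",
  "steps": 25,
  "cfg_scale": 7,
  "width": 512,
  "height": 512,
  "batch_size": 1
}
EOF

# The response is JSON with base64-encoded images in an "images" array
curl -s -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d @/tmp/txt2img.json || echo "WebUI not running with --api"
```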

Troubleshooting

"CUDA out of memory" errors

# Add --medvram or --lowvram flags
./webui.sh --listen --medvram

# Or reduce image resolution (768px instead of 1024px)

Black images generated

# Usually a VAE issue — try a different VAE
# In Settings > Stable Diffusion > SD VAE, select a specific VAE
# Or add --no-half-vae to COMMANDLINE_ARGS

Slow generation times

# The first generation after launch is always slower (model loading).
# If every generation is slow, verify xformers is active:
./venv/bin/python -c "import xformers; print(xformers.__version__)"

# Reinstall via the WebUI's own flag so the version stays matched to torch
./webui.sh --reinstall-xformers

WebUI won't start — port in use

# Check what's using port 7860
ss -tlnp | grep 7860
# Kill the process (fuser is in the psmisc package), or use another port
fuser -k 7860/tcp
./webui.sh --listen --port 7861

Model not appearing in the dropdown

# Click the refresh icon next to the checkpoint dropdown in the WebUI
# Or verify model is in models/Stable-diffusion/
ls ~/stable-diffusion-webui/models/Stable-diffusion/

Conclusion

Stable Diffusion WebUI (AUTOMATIC1111) on a Linux server with NVIDIA GPU provides a full-featured, self-hosted AI image generation platform with support for hundreds of community models and extensions. Pairing it with Nginx reverse proxy and basic authentication gives you secure remote access, while the webui-user.sh configuration file lets you tune VRAM usage and performance flags for your specific GPU.