Open WebUI Installation for LLM Chat Interface

Open WebUI is a feature-rich, ChatGPT-like interface for self-hosted LLMs that integrates directly with Ollama and OpenAI-compatible APIs, supporting multi-user authentication, conversation history, RAG pipelines, and model management. This guide covers deploying Open WebUI with Docker, connecting it to Ollama, configuring user management, and setting up a production-ready reverse proxy.

Prerequisites

  • Docker and Docker Compose installed
  • Ollama running locally (or an OpenAI-compatible API)
  • 2GB+ RAM for the container
  • A domain name (for SSL setup)

Deploying with Docker

Standalone Docker Run

# If Ollama is running on the same host
# (note: with --network=host, the UI is served on the container's own port 8080)
docker run -d \
  --name open-webui \
  --network=host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main

# If Ollama is on a different host
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://OLLAMA_HOST:11434 \
  --restart unless-stopped \
  ghcr.io/open-webui/open-webui:main
Docker Compose

mkdir -p ~/open-webui && cd ~/open-webui

cat > docker-compose.yml << 'EOF'
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui-data:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      # - OPENAI_API_BASE_URL=https://api.openai.com/v1
      # - OPENAI_API_KEY=sk-...
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped

volumes:
  open-webui-data:
EOF

docker compose up -d
docker compose logs -f

Access at http://your-server:3000
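
Before moving on, reachability can be scripted into a quick probe. This is a sketch: the URL assumes the default port mapping above, and the /health path is an assumption about recent Open WebUI builds; adjust both for your deployment.

```shell
# Probe the UI; URL and /health path are assumptions, adjust for your deployment
URL="http://localhost:3000/health"
if curl -fsS --max-time 5 "$URL" > /dev/null 2>&1; then
  echo "open-webui: up"
else
  echo "open-webui: down"
fi
```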

Connecting to Ollama

Ensure Ollama is accessible from within the Docker network:

# Configure Ollama to listen on all interfaces
sudo systemctl edit ollama
# Add:
# [Service]
# Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Verify Ollama is accessible
curl http://localhost:11434/api/tags

# In Open WebUI:
# Settings > Connections > Ollama API > http://host.docker.internal:11434

Pull models directly from the Open WebUI interface:

  1. Go to Settings > Models
  2. In the "Pull a model from Ollama.com" field, enter llama3.2 or any model name
  3. Click the download button — the model is pulled and immediately available

Or pull models from the Ollama CLI and they appear automatically in Open WebUI:

ollama pull llama3.2
ollama pull mistral
ollama pull codellama
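
To confirm the pulled models are visible over the same API the UI talks to, a small sketch (the endpoint is the default local Ollama address; adjust if yours differs):

```shell
# List model names from the Ollama /api/tags endpoint; degrades gracefully if unreachable
resp=$(curl -s --max-time 5 http://localhost:11434/api/tags || true)
if [ -n "$resp" ]; then
  # /api/tags returns {"models": [{"name": ...}, ...]}
  printf '%s' "$resp" | python3 -c \
    'import json, sys; [print(m["name"]) for m in json.load(sys.stdin)["models"]]'
else
  echo "ollama not reachable"
fi
```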

Connecting to OpenAI-Compatible APIs

Open WebUI can connect to any OpenAI-compatible endpoint — vLLM, LiteLLM, Groq, and so on:

# Update the docker-compose.yml environment:
environment:
  - OLLAMA_BASE_URL=http://host.docker.internal:11434
  - OPENAI_API_BASE_URL=http://host.docker.internal:8000/v1  # vLLM
  - OPENAI_API_KEY=not-needed  # Required field but value doesn't matter for local

# Or for real OpenAI:
  - OPENAI_API_BASE_URL=https://api.openai.com/v1
  - OPENAI_API_KEY=sk-your-real-key

In the UI: Settings > Connections > OpenAI API
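
A quick way to check that an OpenAI-compatible backend is answering before wiring it into the UI is to hit its /v1/models listing, which is part of the standard OpenAI API surface. The base URL below is an assumption matching the vLLM example above:

```shell
# Probe the models listing of an OpenAI-compatible server; base URL is an assumption
BASE="http://localhost:8000/v1"
curl -fsS --max-time 5 -H "Authorization: Bearer not-needed" "$BASE/models" \
  || echo "endpoint not reachable at $BASE"
```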

User Management and Authentication

First Login (Admin Setup)

The first user to register becomes the admin. After that:

  1. Go to Admin Panel > Users to manage existing accounts
  2. Control whether new users can self-register (Admin Panel > Settings > General > Enable New Sign Ups)
  3. Set Default User Role to pending (same settings page) to require admin approval for new accounts

Environment Variables for Auth Configuration

environment:
  # Disable signup entirely (invite-only)
  - ENABLE_SIGNUP=false

  # Set default role for new users
  - DEFAULT_USER_ROLE=user  # or "admin" or "pending"

  # Allow account creation through OAuth providers
  - ENABLE_OAUTH_SIGNUP=true

  # JWT secret (generate a secure one)
  - WEBUI_SECRET_KEY=your-random-32-char-secret-key

  # OAuth with Google (optional)
  - GOOGLE_CLIENT_ID=your-client-id
  - GOOGLE_CLIENT_SECRET=your-client-secret
  - GOOGLE_OAUTH_SCOPE=openid email profile
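
The WEBUI_SECRET_KEY above should be random rather than hand-typed, since it signs session tokens. One way to generate a suitable value (assuming openssl is available on the host):

```shell
# 32 random bytes, hex-encoded: yields a 64-character secret suitable for WEBUI_SECRET_KEY
openssl rand -hex 32
```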

LDAP Integration

environment:
  - ENABLE_LDAP=true
  - LDAP_SERVER_LABEL=Company LDAP
  - LDAP_SERVER_HOST=ldap.example.com
  - LDAP_SERVER_PORT=389
  - LDAP_ATTRIBUTE_FOR_MAIL=mail
  - LDAP_ATTRIBUTE_FOR_USERNAME=uid
  - LDAP_APP_DN=cn=admin,dc=example,dc=com
  - LDAP_APP_PASSWORD=admin-password
  - LDAP_SEARCH_BASE=ou=users,dc=example,dc=com
  - LDAP_SEARCH_FILTERS=  # optional; leave empty unless you need extra user filtering
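
Before restarting the container with these values, the bind credentials can be sanity-checked from the host with ldapsearch (from the ldap-utils package). The hostnames, DNs, and the sample uid below mirror the placeholder values above and are assumptions:

```shell
# Verify the app DN can bind and the search base returns a user entry
if command -v ldapsearch > /dev/null; then
  ldapsearch -x -H ldap://ldap.example.com:389 \
    -D "cn=admin,dc=example,dc=com" -w admin-password \
    -b "ou=users,dc=example,dc=com" "(uid=someuser)" mail uid \
    || echo "LDAP bind or search failed"
else
  echo "install ldap-utils first"
fi
```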

Nginx Reverse Proxy with SSL

sudo tee /etc/nginx/sites-available/openwebui << 'EOF'
server {
    listen 80;
    server_name chat.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name chat.example.com;

    ssl_certificate /etc/letsencrypt/live/chat.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chat.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Required for streaming responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 300;
    }
}
EOF

sudo ln -s /etc/nginx/sites-available/openwebui /etc/nginx/sites-enabled/
sudo certbot --nginx -d chat.example.com
sudo nginx -t && sudo systemctl reload nginx

RAG Pipeline Setup

Open WebUI includes a built-in RAG (Retrieval Augmented Generation) system for chatting with documents:

  1. Navigate to Workspace > Documents
  2. Upload files (PDF, TXT, Markdown, DOCX, etc.)
  3. Click the document icon in the chat interface to reference uploaded documents
  4. Prefix messages with # to search documents: # What does the policy say about remote work?

Configure the embedding model:

# In docker-compose.yml — use a local embedding model
environment:
  - RAG_EMBEDDING_ENGINE=ollama
  - RAG_EMBEDDING_MODEL=nomic-embed-text  # Pull this model in Ollama first
  - CHUNK_SIZE=1500
  - CHUNK_OVERLAP=100

# Pull the embedding model on the host
ollama pull nomic-embed-text
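
For intuition about the two chunking knobs: consecutive chunks advance by CHUNK_SIZE minus CHUNK_OVERLAP, so a rough chunk count is easy to estimate. The 10,000-character document length below is a made-up example, and exact units depend on the splitter in use:

```shell
# Back-of-envelope chunk-count estimate for the settings above
CHUNK_SIZE=1500
CHUNK_OVERLAP=100
DOC_CHARS=10000            # hypothetical document length
STEP=$((CHUNK_SIZE - CHUNK_OVERLAP))
# ceil((DOC_CHARS - CHUNK_OVERLAP) / STEP)
CHUNKS=$(( (DOC_CHARS - CHUNK_OVERLAP + STEP - 1) / STEP ))
echo "~$CHUNKS chunks, each window advancing $STEP chars"
```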

Updating Open WebUI

cd ~/open-webui

# Pull the latest image
docker compose pull

# Restart with the new image (data volume is preserved)
docker compose up -d

# Verify new version
docker compose logs open-webui | head -20

Troubleshooting

"Cannot connect to Ollama" in the UI

# Test from within the container
docker exec -it open-webui curl http://host.docker.internal:11434/api/tags

# Verify host.docker.internal resolves (ping may not be installed in the image)
docker exec -it open-webui getent hosts host.docker.internal

# If not working, use the host's actual IP
docker network inspect bridge | grep Gateway
# Then set OLLAMA_BASE_URL=http://172.17.0.1:11434

Container exits immediately

docker logs open-webui
# Look for database migration errors or missing environment variables

Streaming doesn't work through Nginx

# Ensure these Nginx settings are present:
# proxy_buffering off;
# proxy_cache off;
# proxy_set_header Connection "upgrade";

RAG search returns no results

# Ensure the embedding model is available in Ollama
ollama list | grep embed

# Re-index documents: go to Admin Panel > Documents > Reset Vector DB

Users can't register

# Check ENABLE_SIGNUP environment variable
docker exec open-webui env | grep SIGNUP
# Set to "true" if disabled

Conclusion

Open WebUI provides a polished, multi-user chat interface for self-hosted LLMs that requires minimal configuration when paired with Ollama. Its built-in RAG system, model management, and flexible authentication make it suitable for both personal use and team deployments, and the Docker setup ensures simple updates and data persistence.