Directus Headless CMS Installation

Directus is an open-source headless CMS and data platform that layers role-based access control, webhooks, and REST/GraphQL APIs on top of any SQL database. It delivers complete API coverage and flexible access control without vendor lock-in. This guide covers a production-ready Directus deployment: Docker containers, a PostgreSQL database, an Nginx reverse proxy, Redis caching, SSL/TLS, user authentication, data models, and production hardening.

Directus Architecture Overview

Directus provides a database abstraction layer with automatic API generation, admin interface, and extensible architecture.

Architecture components:

  • Admin Interface: visual data management
  • REST API: automatic JSON endpoints for all data
  • GraphQL API: flexible query language
  • Database Layer: SQL-agnostic abstraction
  • Roles & Permissions: field-level access control
  • Extensions: hooks, endpoints, flows for customization
  • File Storage: asset management and delivery

Request flow:

  1. Admin creates data model in interface
  2. Directus automatically generates API endpoints
  3. Frontend requests data via REST/GraphQL
  4. API returns filtered data based on permissions
  5. Frontend renders content
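
Steps 3-5 of this flow can be sketched from the frontend side. The Python snippet below (standard library only) prepares an authenticated GET request against a hypothetical posts collection; the base URL and token are placeholders, and the actual fetch is shown commented out since it needs a running instance:

```python
import urllib.parse
import urllib.request

def build_items_request(base_url: str, collection: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for all items in a collection."""
    url = f"{base_url}/items/{urllib.parse.quote(collection)}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = build_items_request("https://cms.example.com", "posts", "YOUR_TOKEN")
print(req.full_url)                      # https://cms.example.com/items/posts
print(req.get_header("Authorization"))   # Bearer YOUR_TOKEN

# Sending it (requires a reachable instance); Directus wraps results in "data":
# import json
# with urllib.request.urlopen(req) as resp:
#     posts = json.loads(resp.read())["data"]
```

The permission filtering in step 4 happens server-side: the same request returns different items depending on the role attached to the token.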

Docker Installation

Install Docker for containerized Directus deployment.

Update system:

sudo apt update
sudo apt upgrade -y
sudo apt install curl wget git -y

Install Docker:

# Download Docker installation script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Add user to docker group
sudo usermod -aG docker $USER

# Log out and back in for group changes to take effect
# Verify
docker --version
docker run hello-world

Install Docker Compose:

# Download latest Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify
docker-compose --version

Create application directory:

sudo mkdir -p /home/docker/directus
sudo chown -R $(whoami) /home/docker/directus
cd /home/docker/directus

PostgreSQL Database

Set up PostgreSQL for Directus.

Create database directory:

mkdir -p /home/docker/directus/postgres-data
mkdir -p /home/docker/directus/directus-data

Or install PostgreSQL directly on the host instead of running it in a container:

# Install PostgreSQL on host
sudo apt install postgresql postgresql-contrib -y
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Create database and user
sudo -u postgres psql << EOF
CREATE DATABASE directus_db;
CREATE USER directus_user WITH PASSWORD 'SecurePassword123!';
ALTER ROLE directus_user SET client_encoding TO 'utf8';
ALTER ROLE directus_user SET default_transaction_isolation TO 'read committed';
ALTER ROLE directus_user SET timezone TO 'UTC';
GRANT ALL PRIVILEGES ON DATABASE directus_db TO directus_user;
\q
EOF

Verify database:

psql -U directus_user -d directus_db -h localhost
# Should connect successfully
\q

Directus Installation via Docker

Deploy Directus using Docker Compose.

Create docker-compose.yml:

cat > /home/docker/directus/docker-compose.yml << 'EOF'
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    container_name: directus-postgres
    environment:
      POSTGRES_DB: directus_db
      POSTGRES_USER: directus_user
      POSTGRES_PASSWORD: SecurePassword123!
    volumes:
      - postgres_data:/var/lib/postgresql/data
    # Optional: omit this mapping in production unless you need host access;
    # containers reach PostgreSQL over the internal Docker network
    ports:
      - "5432:5432"
    networks:
      - directus-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U directus_user"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: directus-redis
    # Optional: omit this mapping in production; Directus reaches Redis
    # over the internal Docker network
    ports:
      - "6379:6379"
    networks:
      - directus-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  directus:
    image: directus/directus:latest
    container_name: directus-app
    ports:
      - "8055:8055"
    environment:
      DB_CLIENT: pg
      DB_HOST: postgres
      DB_PORT: 5432
      DB_DATABASE: directus_db
      DB_USER: directus_user
      DB_PASSWORD: SecurePassword123!
      REDIS_HOST: redis
      REDIS_PORT: 6379
      # Use two different values, each generated with `openssl rand -hex 32`
      KEY: replace-with-first-generated-secret
      SECRET: replace-with-second-generated-secret
      ADMIN_EMAIL: [email protected]
      ADMIN_PASSWORD: AdminPassword123!
      PUBLIC_URL: https://cms.example.com
      ACCESS_TOKEN_TTL: 15m
      REFRESH_TOKEN_TTL: 7d
      REFRESH_TOKEN_COOKIE_SECURE: "true"
      REFRESH_TOKEN_COOKIE_SAME_SITE: Strict
      LOG_LEVEL: info
    volumes:
      - directus_data:/directus/uploads
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - directus-network
    restart: unless-stopped

volumes:
  postgres_data:
  directus_data:

networks:
  directus-network:
    driver: bridge
EOF

Generate secret keys:

# Generate random secrets
openssl rand -hex 32
openssl rand -hex 32

# Update docker-compose.yml with generated secrets
nano docker-compose.yml
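
If openssl is not available, Python's standard library produces equivalent values. This sketch generates two independent 32-byte hex secrets, one each for KEY and SECRET:

```python
import secrets

# Two independent values; never reuse the same secret for both KEY and SECRET
key = secrets.token_hex(32)
secret = secrets.token_hex(32)

print(f"KEY={key}")
print(f"SECRET={secret}")
```

Each value is 64 hex characters; paste them into docker-compose.yml (or the .env file created below).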

Start services:

cd /home/docker/directus

# Start all services
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f directus

# Wait for startup (first start takes a few minutes)
docker-compose logs directus | grep "ready"

Verify installation:

# Check Directus is running via the health endpoint
curl -I http://localhost:8055/server/health

# Should return 200 OK

# Access admin interface
# http://localhost:8055/admin
# Email: [email protected]
# Password: AdminPassword123!

Initial Configuration

Configure Directus for production use.

Create .env file for persistent configuration:

cat > /home/docker/directus/.env << 'EOF'
# Database
DB_CLIENT=pg
DB_HOST=postgres
DB_PORT=5432
DB_DATABASE=directus_db
DB_USER=directus_user
DB_PASSWORD=SecurePassword123!

# Cache
REDIS_HOST=redis
REDIS_PORT=6379

# Authentication
KEY=your-generated-secret-key
SECRET=your-generated-secret-key
[email protected]
ADMIN_PASSWORD=AdminPassword123!

# Public URL
PUBLIC_URL=https://cms.example.com
CORS_ALLOWED_ORIGINS=https://example.com,https://www.example.com

# Tokens
ACCESS_TOKEN_TTL=15m
REFRESH_TOKEN_TTL=7d
REFRESH_TOKEN_COOKIE_SECURE=true
REFRESH_TOKEN_COOKIE_SAME_SITE=Strict

# Email (optional)
[email protected]
EMAIL_TRANSPORT=smtp
EMAIL_SMTP_HOST=smtp.example.com
EMAIL_SMTP_PORT=587
EMAIL_SMTP_USER=username
EMAIL_SMTP_PASSWORD=password

# Logging
LOG_LEVEL=info

# File uploads
STORAGE_LOCATIONS=local
STORAGE_LOCAL_DRIVER=local
STORAGE_LOCAL_ROOT=./uploads

# Or use S3
# STORAGE_LOCATIONS=s3
# STORAGE_S3_DRIVER=s3
# STORAGE_S3_KEY=your-access-key
# STORAGE_S3_SECRET=your-secret-key
# STORAGE_S3_BUCKET=your-bucket
# STORAGE_S3_REGION=us-east-1
EOF

Update docker-compose to use .env:

# Edit docker-compose.yml to use env_file
nano /home/docker/directus/docker-compose.yml

# Add to directus service:
# env_file:
#   - .env

Restart services with new configuration:

cd /home/docker/directus
docker-compose down
docker-compose up -d

Data Models and Collections

Create data structures in Directus.

Access admin interface:

# Navigate to: http://localhost:8055/admin
# Login with [email protected] / AdminPassword123!

Create collection via interface:

# Click "+" icon to create new collection
# Name: posts
# Display Template: title
# 
# Add fields:
# - title (String, Required)
# - slug (String, Unique)
# - content (Text)
# - published (Boolean, Default: false)
# - author (Many-to-One to users)
# - created_at (DateTime, Auto-set)
# - updated_at (DateTime, Auto-set)

Create via API:

# Create collection
curl -X POST http://localhost:8055/collections \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "collection": "posts",
    "meta": {
      "display_template": "title"
    }
  }'

# Add fields
curl -X POST http://localhost:8055/fields/posts \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "field": "title",
    "type": "string",
    "meta": {
      "interface": "input",
      "required": true
    }
  }'

Access API endpoints:

# List items
curl http://localhost:8055/items/posts \
  -H "Authorization: Bearer YOUR_TOKEN"

# Create item
curl -X POST http://localhost:8055/items/posts \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Hello World",
    "content": "Post content here",
    "published": true
  }'

# Update item (replace 1 with the item's actual ID)
curl -X PATCH http://localhost:8055/items/posts/1 \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"published": false}'
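
Beyond plain CRUD, the items endpoints accept query parameters for filtering, field selection, sorting, and pagination. The snippet below (a standard-library sketch) composes such a query string; the parameter syntax shown (filter[field][_eq], fields, sort, limit) follows the Directus REST query format:

```python
from urllib.parse import urlencode

def items_query(collection: str, **params: str) -> str:
    """Compose a Directus /items path with URL-encoded query parameters."""
    return f"/items/{collection}?{urlencode(params)}"

# Only published posts, newest first, selected fields, first page of 10
q = items_query(
    "posts",
    **{"filter[published][_eq]": "true"},
    fields="id,title,slug",
    sort="-created_at",
    limit="10",
)
print(q)
```

The brackets are percent-encoded by urlencode; Directus decodes them server-side. Requesting only the fields you need keeps responses small, which matters once collections grow.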

Users, Roles, and Permissions

Configure access control for team collaboration.

Create role via admin interface:

# Navigate to: Settings > Roles & Permissions
# Click "New Role"
# Name: Editor
# 
# Permissions:
# - posts: create, read, update, delete
# - users: read (self)

Create user:

# Navigate to: Settings > Users
# Click "Create New"
# Email: [email protected]
# Role: Editor
# Status: Active

Or create via API:

# Create role
curl -X POST http://localhost:8055/roles \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Editor",
    "icon": "edit"
  }'

# Create user
curl -X POST http://localhost:8055/users \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "[email protected]",
    "password": "securepassword",
    "role": "ROLE_UUID",
    "status": "active"
  }'

Configure field-level permissions:

# Navigate to: Settings > Roles & Permissions > [Role]
# Click on collection
# Configure per-field permissions:
# - Some fields read-only
# - Some fields hidden
# - Some fields required on creation

File Storage Configuration

Configure media library and file storage.

Default local storage:

# Files are stored inside the container at:
# /directus/uploads/
#
# The compose file above maps this path to the named volume
# directus_data, not to a bind-mounted host directory

# Locate the volume on the host (name is prefixed with the compose project name)
docker volume inspect directus_directus_data

# List uploaded files from inside the container
docker exec directus-app ls -la /directus/uploads/

Configure S3 storage:

# Edit .env file
STORAGE_LOCATIONS=s3
STORAGE_S3_DRIVER=s3
STORAGE_S3_KEY=your-aws-key
STORAGE_S3_SECRET=your-aws-secret
STORAGE_S3_BUCKET=your-bucket-name
STORAGE_S3_REGION=us-east-1
STORAGE_S3_ENDPOINT=https://s3.amazonaws.com

# Restart services
docker-compose down
docker-compose up -d

Upload and manage files:

# Via admin interface:
# Navigate to File Library
# Drag and drop or click to upload
# 
# Configure access:
# Manage who can view/download files
# Set expiration dates if needed

Nginx Reverse Proxy

Configure Nginx as reverse proxy for Directus.

Create Nginx configuration:

sudo tee /etc/nginx/sites-available/directus.conf > /dev/null << 'EOF'
upstream directus {
    server 127.0.0.1:8055;
    keepalive 64;
}

server {
    listen 80;
    server_name cms.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name cms.example.com;

    ssl_certificate /etc/letsencrypt/live/cms.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cms.example.com/privkey.pem;

    access_log /var/log/nginx/directus_access.log;
    error_log /var/log/nginx/directus_error.log;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;

    # Gzip compression
    gzip on;
    gzip_types application/json text/plain text/css text/javascript;

    # Cache uploads
    location /assets/ {
        expires 30d;
        add_header Cache-Control "public, max-age=2592000";
        proxy_pass http://directus;
    }

    # Proxy to Directus
    location / {
        proxy_pass http://directus;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
EOF

sudo ln -s /etc/nginx/sites-available/directus.conf /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

Security and SSL

Implement security best practices.

Enable HTTPS:

# Install Certbot for Let's Encrypt
sudo apt install certbot python3-certbot-nginx -y

# Obtain certificate (do this before enabling the HTTPS server block,
# otherwise nginx -t fails on the missing certificate files)
sudo certbot certonly --nginx -d cms.example.com

# Verify certificate
sudo certbot renew --dry-run

Configure security in Directus:

# Update .env
REFRESH_TOKEN_COOKIE_SECURE=true
REFRESH_TOKEN_COOKIE_SAME_SITE=Strict
PUBLIC_URL=https://cms.example.com

# Restart services
docker-compose down
docker-compose up -d

Set up API authentication:

# Directus issues static access tokens per user:
# Navigate to the User Directory and open (or create) the API user
# Generate a value for the "Token" field and save
# The token inherits the permissions of that user's role
#
# Use the token in frontend requests:
# curl -H "Authorization: Bearer TOKEN" https://cms.example.com/items/posts
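
From application code, the same token travels as a Bearer header on write requests too. This standard-library sketch prepares an authenticated POST that creates a post; the URL and token are placeholders, and the actual send is commented out:

```python
import json
import urllib.request

def create_item_request(base_url, collection, token, payload):
    """Prepare an authenticated POST creating one item in a collection."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/items/{collection}",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = create_item_request(
    "https://cms.example.com", "posts", "STATIC_TOKEN",
    {"title": "Hello", "published": True},
)
print(req.method, req.full_url)

# Sending (requires a reachable instance); the created item comes back in "data":
# with urllib.request.urlopen(req) as resp:
#     created = json.loads(resp.read())["data"]
```

Keep static tokens server-side only; anything shipped to a browser is effectively public, so browser clients should rely on login-based sessions or a public role instead.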

Enable webhooks for events:

# Via admin interface:
# Navigate to: Settings > Webhooks
# Create new webhook:
# - Event: items.create on posts
# - URL: https://example.com/api/webhooks/post-created
# - Method: POST
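
On the receiving end, the endpoint unpacks the delivery and reacts to it. The sketch below is illustrative: the body shape shown (event, collection, payload keys) is an assumption, so check what your Directus version actually sends before relying on specific fields:

```python
import json

def handle_directus_webhook(raw_body: bytes) -> str:
    """Unpack a webhook delivery and decide what to do; body shape is illustrative."""
    event = json.loads(raw_body.decode("utf-8"))
    if event.get("collection") != "posts":
        return "ignored"
    title = event.get("payload", {}).get("title", "(untitled)")
    return f"rebuild triggered for post: {title}"

# Simulated delivery for an items.create event on posts
sample = json.dumps({
    "event": "items.create",
    "collection": "posts",
    "payload": {"title": "Hello World", "published": True},
}).encode("utf-8")

print(handle_directus_webhook(sample))  # rebuild triggered for post: Hello World
```

A typical use is triggering a static-site rebuild or cache purge whenever content changes.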

Conclusion

Installing Directus with Docker provides a flexible, API-first CMS for rapid content platform development. This guide covered a production-ready deployment: Docker Compose for easy management, PostgreSQL for data persistence, Redis for caching, an Nginx reverse proxy for efficient request handling, and security hardening with HTTPS. The key building blocks are containerized infrastructure for portability and reliability, tuned database and cache configuration, role-based access control for team collaboration, flexible file storage on local disk or S3, and comprehensive API coverage via REST and GraphQL. Schedule regular backups of the database and uploaded files to protect your data. Following these practices yields a robust Directus installation ready to power multiple frontend applications with managed content and data.