DigitalOcean Spaces Integration for Object Storage

DigitalOcean Spaces is an S3-compatible object storage service that integrates seamlessly with existing AWS CLI and S3 tooling, making it an affordable option for storing backups, static assets, and user uploads from your VPS. This guide covers creating a Space, configuring API access, using rclone for automation, enabling the CDN, and managing lifecycle policies.

Prerequisites

  • A DigitalOcean account (you will create the Space itself below)
  • A Linux server (Ubuntu/Debian or CentOS/Rocky)
  • AWS CLI or rclone installed (this guide covers both)

Creating a Space

  1. Log into the DigitalOcean Control Panel
  2. Navigate to Spaces Object Storage > Create a Space
  3. Select a datacenter region (choose one close to your VPS)
  4. Choose a unique name for your Space (e.g., my-vps-backups)
  5. Set file listing to Restricted for private buckets

Note your Space endpoint: https://my-vps-backups.nyc3.digitaloceanspaces.com
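Spaces objects are addressable in two equivalent forms: a virtual-hosted URL with the Space name in the hostname (as shown above), and a regional endpoint where the Space name appears in the path or `s3://` URI instead. A quick sketch, using the example Space name and region from this guide:

```shell
# Illustration only: both URL forms for the example Space above
SPACE="my-vps-backups"
REGION="nyc3"

# Virtual-hosted style: the Space name is part of the hostname
VHOST="https://${SPACE}.${REGION}.digitaloceanspaces.com"

# Regional endpoint: what you pass to S3 tooling as the endpoint URL;
# the Space name then goes in the path or the s3:// URI instead
ENDPOINT="https://${REGION}.digitaloceanspaces.com"

echo "$VHOST"
echo "$ENDPOINT"
```

The regional form is the one the AWS CLI and rclone configurations below expect as the endpoint.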

Generating API Keys

  1. Go to API > Spaces Keys in the DigitalOcean dashboard
  2. Click Generate New Key and give it a descriptive name
  3. Copy the Access Key and Secret Key — the secret is only shown once
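As an alternative to storing the keys in a profile, the AWS CLI and most S3 SDKs also read credentials from the standard environment variables, which is convenient for one-off scripts (placeholder values shown — substitute your real keys, and avoid committing them to shell history or dotfiles):

```shell
# Placeholder values — substitute your real Spaces keys
export AWS_ACCESS_KEY_ID="YOUR_SPACES_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SPACES_SECRET_KEY"

# Any aws/rclone process started from this shell now inherits the keys
echo "$AWS_ACCESS_KEY_ID"
```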

AWS CLI Configuration for Spaces

Since Spaces is S3-compatible, you can use the AWS CLI with a custom endpoint:

# Configure a named profile for DigitalOcean Spaces
aws configure --profile spaces
# AWS Access Key ID: YOUR_SPACES_ACCESS_KEY
# AWS Secret Access Key: YOUR_SPACES_SECRET_KEY
# Default region name: nyc3
# Default output format: json

# Set the endpoint URL (required for Spaces; the endpoint_url setting is
# honored by AWS CLI v2.13 and later — on older versions, pass
# --endpoint-url https://nyc3.digitaloceanspaces.com on each command instead)
aws configure set endpoint_url https://nyc3.digitaloceanspaces.com --profile spaces

Now use standard AWS CLI commands with --profile spaces:

# List all Spaces (buckets)
aws s3 ls --profile spaces

# List objects in a Space
aws s3 ls s3://my-vps-backups/ --profile spaces

# Upload a file
aws s3 cp /var/backups/dump.tar.gz s3://my-vps-backups/db/ --profile spaces

# Upload a directory
aws s3 sync /var/www/uploads s3://my-vps-backups/uploads/ --profile spaces

# Download a file
aws s3 cp s3://my-vps-backups/db/dump.tar.gz /tmp/restore.tar.gz --profile spaces

# Delete an object
aws s3 rm s3://my-vps-backups/old-file.tar.gz --profile spaces

# Make a file public (for static assets)
aws s3 cp image.png s3://my-vps-backups/public/ \
  --acl public-read \
  --profile spaces

Alternatively, keep everything in ~/.aws/config for a persistent configuration (the two key lines can also live in ~/.aws/credentials under a [spaces] section):

[profile spaces]
aws_access_key_id = YOUR_SPACES_ACCESS_KEY
aws_secret_access_key = YOUR_SPACES_SECRET_KEY
region = nyc3
endpoint_url = https://nyc3.digitaloceanspaces.com

rclone Configuration and Usage

rclone is often easier to use for automation and scheduled sync jobs:

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure Spaces remote interactively
rclone config

# Choose: n (New remote)
# Name: spaces
# Type: s3
# Provider: DigitalOcean
# Access Key ID: YOUR_SPACES_ACCESS_KEY
# Secret Access Key: YOUR_SPACES_SECRET_KEY
# Endpoint: nyc3.digitaloceanspaces.com
# Location constraint: (leave empty)

Or configure directly in ~/.config/rclone/rclone.conf:

[spaces]
type = s3
provider = DigitalOcean
access_key_id = YOUR_SPACES_ACCESS_KEY
secret_access_key = YOUR_SPACES_SECRET_KEY
endpoint = nyc3.digitaloceanspaces.com
acl = private

Common rclone operations:

# List buckets
rclone lsd spaces:

# List contents of a Space
rclone ls spaces:my-vps-backups

# Copy files to Spaces
rclone copy /var/www/uploads spaces:my-vps-backups/uploads/

# Sync a directory (adds and deletes files on the destination to match the source)
rclone sync /var/www/uploads spaces:my-vps-backups/uploads/

# Copy with a bandwidth limit (useful on production servers)
rclone copy /data spaces:my-vps-backups/data/ --bwlimit 20M

# Check sync status (compare checksums)
rclone check /var/www/uploads spaces:my-vps-backups/uploads/

# Copy files from Spaces back to local
rclone copy spaces:my-vps-backups/backups/ /tmp/restore/
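Note that rclone's --bwlimit uses binary units by default, so --bwlimit 20M means 20 MiB/s. To sanity-check what that costs in link capacity before running it on a production server:

```shell
# 20 MiB/s expressed in bits per second (1 MiB = 1024 * 1024 bytes)
LIMIT_MIB=20
BITS_PER_SEC=$(( LIMIT_MIB * 1024 * 1024 * 8 ))
echo "$BITS_PER_SEC"
```

That works out to roughly 168 Mbit/s, so on a 1 Gbit/s uplink this limit leaves plenty of headroom for serving traffic.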

Enabling the Spaces CDN

The Spaces CDN serves your objects from DigitalOcean's edge nodes globally, improving download performance for static assets:

  1. In the DigitalOcean dashboard, open your Space
  2. Click Settings tab
  3. Under CDN, click Enable CDN
  4. Optionally add a custom subdomain (e.g., cdn.example.com) — you'll need to add a CNAME record at your DNS provider

Once enabled, objects are accessible at:

  • Default: https://my-vps-backups.nyc3.cdn.digitaloceanspaces.com/path/to/file
  • Custom domain: https://cdn.example.com/path/to/file
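The CDN hostname is simply the origin hostname with .cdn inserted before digitaloceanspaces.com, so CDN URLs can be derived mechanically from origin URLs. A small sketch using the example URL above:

```shell
# Derive the CDN URL from an origin URL by inserting ".cdn" into the hostname
ORIGIN="https://my-vps-backups.nyc3.digitaloceanspaces.com/assets/logo.png"
CDN_URL=$(printf '%s\n' "$ORIGIN" | \
  sed 's/\.digitaloceanspaces\.com/.cdn.digitaloceanspaces.com/')
echo "$CDN_URL"
```

This is handy in templates or deploy scripts where asset paths are stored once and rewritten to CDN form at render time.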

To serve public files via CDN:

# Upload with public-read ACL
aws s3 cp logo.png s3://my-vps-backups/assets/ \
  --acl public-read \
  --profile spaces

# Access via CDN URL
curl -I "https://my-vps-backups.nyc3.cdn.digitaloceanspaces.com/assets/logo.png"

Backup Automation

cat > /usr/local/bin/spaces-backup.sh << 'EOF'
#!/bin/bash
SPACE="my-vps-backups"
REGION="nyc3"
DATE=$(date +%Y-%m-%d)
LOG="/var/log/spaces-backup.log"

log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG"
}

log "=== Starting Spaces backup ==="

# Backup MySQL databases
log "Dumping MySQL..."
for DB in $(mysql -N -e "SHOW DATABASES;" 2>/dev/null | \
  grep -Ev "^(information_schema|performance_schema|mysql|sys)$"); do
  DUMP="/tmp/${DB}-${DATE}.sql.gz"
  mysqldump --single-transaction "$DB" 2>/dev/null | gzip > "$DUMP"
  rclone copy "$DUMP" "spaces:${SPACE}/databases/${DATE}/" --quiet
  rm -f "$DUMP"
  log "  Uploaded ${DB}"
done

# Backup web files
log "Syncing /var/www..."
rclone sync /var/www "spaces:${SPACE}/www/" \
  --exclude "*.tmp" \
  --exclude "*/.git/**" \
  --transfers 4 \
  --quiet

# Backup /etc
log "Syncing /etc..."
rclone copy /etc "spaces:${SPACE}/etc/${DATE}/" \
  --exclude "*.swp" \
  --quiet

log "=== Backup complete ==="
EOF

chmod +x /usr/local/bin/spaces-backup.sh

# Schedule with cron
(crontab -l 2>/dev/null; echo "0 3 * * * /usr/local/bin/spaces-backup.sh") | crontab -
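Cron failures are silent, so it's worth checking the log for the timestamp of the last completed run. A minimal sketch that parses the log format written by the script above (a sample line is created here so the snippet is self-contained; the real script logs to /var/log/spaces-backup.log):

```shell
# Sample log line in the format written by spaces-backup.sh above
LOG="/tmp/spaces-backup.log"
echo "[2024-01-15 03:00:01] === Backup complete ===" > "$LOG"

# Timestamp of the most recent completed backup
LAST_RUN=$(grep "Backup complete" "$LOG" | tail -n 1 | \
  sed 's/^\[\([^]]*\)\].*/\1/')
echo "$LAST_RUN"
```

Wiring this into a monitoring check (alert if the last run is older than ~25 hours) catches broken backups before you need a restore.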

Managing Lifecycle Policies

Use the S3-compatible API to set lifecycle rules that automatically delete old backups:

cat > lifecycle.json << 'EOF'
{
  "Rules": [
    {
      "ID": "delete-old-backups",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "databases/"
      },
      "Expiration": {
        "Days": 30
      }
    },
    {
      "ID": "delete-old-etc",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "etc/"
      },
      "Expiration": {
        "Days": 14
      }
    }
  ]
}
EOF

# Apply lifecycle policy via AWS CLI
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-vps-backups \
  --lifecycle-configuration file://lifecycle.json \
  --endpoint-url https://nyc3.digitaloceanspaces.com \
  --profile spaces

# Verify the policy
aws s3api get-bucket-lifecycle-configuration \
  --bucket my-vps-backups \
  --endpoint-url https://nyc3.digitaloceanspaces.com \
  --profile spaces

Troubleshooting

"SignatureDoesNotMatch" errors with AWS CLI

# Ensure the endpoint URL is set correctly
aws s3 ls --endpoint-url https://nyc3.digitaloceanspaces.com --profile spaces

# Verify your keys are correct — regenerate if unsure

Upload hangs on large files

# Increase multipart settings
aws configure set s3.multipart_threshold 64MB --profile spaces
aws configure set s3.multipart_chunksize 16MB --profile spaces
aws configure set s3.max_concurrent_requests 10 --profile spaces
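The chunk size also bounds how large a single object can be: S3-compatible APIs, Spaces included, typically cap a multipart upload at 10,000 parts, so a 16 MB chunk supports objects up to roughly 160 GB. Quick arithmetic for sizing:

```shell
# Number of 16 MB parts needed for a 1 GiB file (rounded up)
FILE_MB=1024
CHUNK_MB=16
PARTS=$(( (FILE_MB + CHUNK_MB - 1) / CHUNK_MB ))
echo "$PARTS"
```

One GiB needs 64 parts, comfortably under the cap; raise the chunk size for multi-hundred-gigabyte objects.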

rclone sync deletes unexpected files

# Use copy instead of sync for one-way transfers without deletion
rclone copy /source spaces:bucket/dest/

# Preview what sync would do
rclone sync /source spaces:bucket/dest/ --dry-run

CDN not serving updated content

# Purge the CDN cache via DigitalOcean API
curl -X DELETE \
  -H "Authorization: Bearer DO_API_TOKEN" \
  "https://api.digitalocean.com/v2/cdn/endpoints/ENDPOINT_ID/cache" \
  -H "Content-Type: application/json" \
  --data '{"files":["assets/logo.png"]}'

# To purge the entire cache, pass {"files":["*"]} instead

Conclusion

DigitalOcean Spaces provides an affordable, S3-compatible object storage solution that integrates with the AWS CLI and rclone without requiring any special tooling. Combined with the built-in CDN and lifecycle management policies, it's an excellent choice for automating VPS backups and serving static assets at scale.