Duplicacy Backup with Cloud Storage Backends

Duplicacy is a lock-free, cross-platform backup tool that supports deduplication across multiple computers backing up to the same cloud storage. Its lock-free approach allows simultaneous backups from multiple clients without repository corruption, making it suitable for team environments on Linux.

Prerequisites

  • Ubuntu/Debian or CentOS/Rocky Linux
  • Access to at least one cloud storage backend (S3, Backblaze B2, SFTP, etc.)
  • Sufficient storage quota on your chosen backend

Installing Duplicacy

# Download the latest CLI binary
DUPLICACY_VERSION=$(curl -s https://api.github.com/repos/gilbertchen/duplicacy/releases/latest | grep -o '"tag_name": *"[^"]*"' | cut -d'"' -f4)
wget "https://github.com/gilbertchen/duplicacy/releases/download/${DUPLICACY_VERSION}/duplicacy_linux_x64_${DUPLICACY_VERSION#v}"

# Install to system path
sudo mv "duplicacy_linux_x64_${DUPLICACY_VERSION#v}" /usr/local/bin/duplicacy
sudo chmod +x /usr/local/bin/duplicacy

# Verify installation
duplicacy --version
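The version-detection pipeline above depends on the exact spacing GitHub's API uses: the payload contains `"tag_name": "v3.2.3"` with a space after the colon, so the grep pattern must allow optional whitespace. A minimal, locally testable sketch of the extraction and of the `${VAR#v}` expansion used in the download URL:

```shell
# Extract the release tag from a GitHub "latest release" JSON payload.
# The pattern allows optional whitespace after the colon, since the API
# emits `"tag_name": "v3.2.3"` rather than `"tag_name":"v3.2.3"`.
extract_tag() {
  grep -o '"tag_name": *"[^"]*"' | cut -d'"' -f4
}

# The `#v` parameter expansion used in the download URL strips the
# leading "v" from the tag to match the binary's file name.
strip_v() {
  printf '%s\n' "${1#v}"
}

echo '{"tag_name": "v3.2.3", "name": "Duplicacy"}' | extract_tag   # → v3.2.3
strip_v "v3.2.3"                                                   # → 3.2.3
```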

Repository Initialization

Duplicacy stores configuration in a .duplicacy directory within the backup root:

# Navigate to the directory you want to back up
cd /var/www

# Initialize a repository with S3 backend
# (URL format: s3://region@endpoint/bucket/path; for AWS the endpoint is amazon.com)
duplicacy init \
  -e \
  myserver \
  s3://us-east-1@amazon.com/my-s3-bucket/backups/myserver

# Flags:
# -e: encrypt the backup (prompts for an encryption password)
# myserver: snapshot ID (identifies this machine's backups)
# Initialize with Backblaze B2
duplicacy init \
  -e \
  myserver \
  b2://my-b2-bucket/backups/myserver

# Initialize with SFTP (user@backup-host is a placeholder for your
# SSH credentials and server)
duplicacy init \
  -e \
  myserver \
  sftp://user@backup-host/srv/backups/myserver

# Store credentials (avoid prompts in scripts)
duplicacy set -storage default -key password -value "your-encryption-password"
duplicacy set -storage default -key s3_id -value "your-aws-access-key"
duplicacy set -storage default -key s3_secret -value "your-aws-secret-key"

Credentials are stored in .duplicacy/preferences — protect this file:

chmod 600 /var/www/.duplicacy/preferences
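An alternative to `duplicacy set` is to keep secrets out of `.duplicacy/preferences` entirely: the CLI also reads the encryption password from the `DUPLICACY_PASSWORD` environment variable. A sketch of a loader that pulls it from a root-only secret file (the path and helper name are examples, not Duplicacy conventions):

```shell
# Load the Duplicacy encryption password from a dedicated secret file
# (example path) instead of storing it in .duplicacy/preferences.
# Duplicacy reads the DUPLICACY_PASSWORD environment variable.
load_duplicacy_password() {
  local secret_file="${1:-/etc/duplicacy/password}"
  # Refuse a secret file with loose permissions
  if [ "$(stat -c '%a' "$secret_file")" != "600" ]; then
    echo "refusing to read $secret_file: mode must be 600" >&2
    return 1
  fi
  DUPLICACY_PASSWORD="$(cat "$secret_file")"
  export DUPLICACY_PASSWORD
}

# Usage in a backup script:
#   load_duplicacy_password && duplicacy backup -stats
```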

Backing Up to Multiple Backends

Duplicacy supports adding multiple storage backends to a single repository:

# Add a second storage (B2 as backup copy)
cd /var/www
duplicacy add \
  -e \
  b2-backup \
  myserver \
  b2://my-b2-bucket/backups/myserver

# Set credentials for the second storage
duplicacy set -storage b2-backup -key password -value "your-encryption-password"
duplicacy set -storage b2-backup -key b2_id -value "your-b2-account-id"
duplicacy set -storage b2-backup -key b2_key -value "your-b2-application-key"

# Back up (each run targets exactly one storage; the default storage
# is used unless -storage is given)
duplicacy backup -stats
# Back up to the secondary storage:
duplicacy backup -storage b2-backup -stats

# List snapshots on each storage
duplicacy list
duplicacy list -storage b2-backup
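Running the backup twice uploads and chunks everything twice. An alternative is `duplicacy copy`, which replicates finished snapshots from one storage to another; for it to work, the second storage must have been added with `-copy default` so the chunking parameters are compatible. A sketch of a wrapper with a dry-run switch (the wrapper itself is illustrative, not part of Duplicacy):

```shell
# Replicate snapshots from the default storage to a secondary one with
# `duplicacy copy` instead of running a second full backup.
# DRY_RUN=1 prints the command instead of executing it.
copy_snapshots() {
  local to="$1" threads="${2:-4}"
  local cmd=(duplicacy copy -from default -to "$to" -threads "$threads")
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "${cmd[*]}"
  else
    "${cmd[@]}"
  fi
}

# DRY_RUN=1 copy_snapshots b2-backup
# → duplicacy copy -from default -to b2-backup -threads 4
```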

Backup with Filters

Create a .duplicacy/filters file to exclude directories:

cat > /var/www/.duplicacy/filters << 'EOF'
# Patterns are evaluated top-down and the first match wins, so include
# patterns (prefix with +) must come before any exclude pattern that
# would otherwise match the same path
+important-config.log

# Exclude patterns (prefix with -)
-node_modules/
-__pycache__/
-*.pyc
-.git/
-*.log
-tmp/
-cache/
EOF
# Run backup
duplicacy backup -stats -threads 4

# The -threads flag enables parallel uploads (faster for many small files)
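The ordering rule above matters because Duplicacy stops at the first matching pattern. A simplified evaluator makes the semantics concrete; note that plain bash globs stand in for Duplicacy's actual pattern engine (which also supports regexes and directory suffixes), so treat this as an illustration of the ordering only:

```shell
# Illustrate first-match-wins filter evaluation. Real Duplicacy pattern
# syntax is richer; bash globs are used here purely to show ordering.
filter_decision() {
  local path="$1"; shift
  local pattern
  for pattern in "$@"; do
    case "$path" in
      ${pattern#[+-]})  # strip the leading +/- and glob-match
        if [ "${pattern#-}" != "$pattern" ]; then
          echo exclude
        else
          echo include
        fi
        return
        ;;
    esac
  done
  echo include   # unmatched paths are included in this simplified model
}

filters=("+important-config.log" "-*.log" "-tmp/*")
filter_decision "important-config.log" "${filters[@]}"   # → include
filter_decision "debug.log" "${filters[@]}"              # → exclude
filter_decision "index.html" "${filters[@]}"             # → include
```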

Scheduling Backups

Systemd Timer

# Create backup script
sudo tee /usr/local/bin/duplicacy-backup.sh << 'SCRIPT'
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/var/www"
LOG_FILE="/var/log/duplicacy-backup.log"
HEALTHCHECK_URL="https://hc-ping.com/your-uuid"

# Notify healthchecks start
curl -fsS "$HEALTHCHECK_URL/start" > /dev/null 2>&1 || true

echo "[$(date)] Starting backup..." >> "$LOG_FILE"
cd "$BACKUP_DIR"

# Back up to primary storage
duplicacy backup -stats 2>&1 | tee -a "$LOG_FILE"

# Back up to secondary storage
duplicacy backup -storage b2-backup -stats 2>&1 | tee -a "$LOG_FILE"

echo "[$(date)] Backup complete" >> "$LOG_FILE"

# Notify healthchecks success
curl -fsS "$HEALTHCHECK_URL" > /dev/null 2>&1 || true
SCRIPT

sudo chmod +x /usr/local/bin/duplicacy-backup.sh

# Create systemd service
sudo tee /etc/systemd/system/duplicacy-backup.service << 'EOF'
[Unit]
Description=Duplicacy Backup
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/duplicacy-backup.sh
StandardOutput=journal
StandardError=journal
EOF

# Create systemd timer
sudo tee /etc/systemd/system/duplicacy-backup.timer << 'EOF'
[Unit]
Description=Run Duplicacy backup daily

[Timer]
OnCalendar=03:00
RandomizedDelaySec=30m
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now duplicacy-backup.timer

Pruning with Retention Policies

# Prune snapshots with a retention policy
# Keep: everything from the last 7 days, 1/day up to 90 days,
#       1/week up to 1 year, 1/month up to 3 years, nothing older
duplicacy prune \
  -keep 0:1095 \
  -keep 30:365 \
  -keep 7:90 \
  -keep 1:7

# Policy format: -keep <n>:<m> = keep 1 snapshot every n days for
# snapshots OLDER than m days (n = 0 deletes them entirely); the
# values must be literal numbers, and the options must be ordered
# by decreasing m
# -keep 0:1095 = delete all snapshots older than 3 years
# -keep 30:365 = keep 1 per 30 days for snapshots older than 1 year
# -keep 7:90   = keep 1 per 7 days for snapshots older than 90 days
# -keep 1:7    = keep 1 per day for snapshots older than 7 days

# Prune secondary storage too
duplicacy prune -storage b2-backup \
  -keep 0:1095 \
  -keep 30:365 \
  -keep 7:90 \
  -keep 1:7

# Dry run to preview what would be deleted
duplicacy prune -dry-run \
  -keep 0:365 \
  -keep 7:90 \
  -keep 1:7
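The n:m rules are easiest to read as age bands. A small sketch mapping a snapshot's age in days to the band it falls into under the four-rule policy above (illustrative only; Duplicacy itself decides which snapshot within each band survives):

```shell
# Map a snapshot age (days) to its retention band under:
#   -keep 0:1095 -keep 30:365 -keep 7:90 -keep 1:7
# Each rule applies to snapshots OLDER than m, checked from largest m down.
retention_band() {
  local age="$1"
  if   [ "$age" -gt 1095 ]; then echo "delete"
  elif [ "$age" -gt 365 ];  then echo "1 per 30 days"
  elif [ "$age" -gt 90 ];   then echo "1 per 7 days"
  elif [ "$age" -gt 7 ];    then echo "1 per day"
  else                           echo "keep all"
  fi
}

retention_band 3      # → keep all
retention_band 30     # → 1 per day
retention_band 200    # → 1 per 7 days
retention_band 400    # → 1 per 30 days
retention_band 2000   # → delete
```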

Restoring Files

# List available snapshots (revisions)
duplicacy list

# List files in a specific snapshot (revision)
duplicacy list -r 42 -files

# Restore a snapshot into the repository directory
# (restore takes a numeric revision; there is no "latest" keyword,
# so use the highest revision number shown by `duplicacy list`)
cd /var/www
duplicacy restore -r 42 -overwrite

# Restore only specific files
duplicacy restore -r 42 -overwrite "html/index.html" "config/*.yml"

# Restore to a different directory: restore always writes into the
# repository root, so initialize a repository at the target pointing
# at the same storage and snapshot ID, then restore there
mkdir -p /restore/www && cd /restore/www
duplicacy init -e myserver s3://us-east-1@amazon.com/my-s3-bucket/backups/myserver
duplicacy restore -r 42

# Restore from secondary storage
duplicacy restore -r 42 -storage b2-backup -overwrite

# Check a snapshot for corruption
duplicacy check -r 42
duplicacy check -all  # Check all snapshots
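A restore that is never exercised is not a tested backup. After restoring into a scratch repository as shown above, comparing the restored tree against the live tree catches silent corruption; a minimal sketch (the paths in the usage comment are examples):

```shell
# Compare a restored tree against the live tree, reporting any paths
# that differ or exist on only one side. Returns non-zero on mismatch.
verify_restore() {
  local live="$1" restored="$2"
  if diff -rq "$live" "$restored"; then
    echo "restore verified: trees match"
  else
    echo "restore verification FAILED" >&2
    return 1
  fi
}

# verify_restore /var/www /restore/www
```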

Duplicacy Web UI

Duplicacy Web provides a GUI for managing backups:

# Download Duplicacy Web (binaries are published via duplicacy.com /
# acrosync.com rather than GitHub releases; check the download page
# for the current version number)
wget "https://acrosync.com/duplicacy-web/duplicacy_web_linux_x64_1.8.3" -O duplicacy-web
sudo mv duplicacy-web /usr/local/bin/duplicacy-web
sudo chmod +x /usr/local/bin/duplicacy-web

# Create systemd service
sudo tee /etc/systemd/system/duplicacy-web.service << 'EOF'
[Unit]
Description=Duplicacy Web UI
After=network.target

[Service]
Type=simple
User=nobody
# duplicacy-web keeps its settings in ~/.duplicacy-web, so give the
# service a writable HOME; it listens on 127.0.0.1:3875 by default
# (configurable in ~/.duplicacy-web/settings.json)
Environment=HOME=/var/lib/duplicacy-web
ExecStart=/usr/local/bin/duplicacy-web
WorkingDirectory=/var/lib/duplicacy-web
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo mkdir -p /var/lib/duplicacy-web
sudo chown nobody:nobody /var/lib/duplicacy-web
sudo systemctl enable --now duplicacy-web

Access at http://localhost:3875 — set up a reverse proxy with Nginx for HTTPS:

server {
    listen 443 ssl;
    server_name duplicacy.yourdomain.com;
    # ssl_certificate and ssl_certificate_key directives go here
    location / {
        proxy_pass http://127.0.0.1:3875;
        proxy_set_header Host $host;
    }
}

Troubleshooting

Backup appears stuck or a previous run was interrupted:

# Duplicacy is lock-free: there are no lock files to clean up, and
# concurrent backups from multiple clients are safe by design
# Check whether a backup process is actually running
ps aux | grep '[d]uplicacy'

# If a previous backup was interrupted, simply run it again;
# chunks that were already uploaded are deduplicated, so the
# rerun picks up roughly where the interrupted one stopped
duplicacy backup -stats

Upload fails with credential errors:

# Re-set credentials
cd /var/www
duplicacy set -storage default -key s3_id -value "new-access-key"
duplicacy set -storage default -key s3_secret -value "new-secret-key"

# Test with a small backup
duplicacy backup -stats -threads 1

Slow backup performance:

# Increase threads for parallel uploads
duplicacy backup -stats -threads 8

# Conversely, cap the upload rate (in KB/s) if backups saturate the
# link; the default is unlimited
duplicacy backup -stats -limit-rate 10240  # roughly 10 MB/s

Backup running out of disk space:

# If temporary files are landing on a small /tmp, point TMPDIR at a
# larger filesystem before running the backup
export TMPDIR=/var/tmp
duplicacy backup -stats
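A pre-flight check in the backup script can fail fast instead of dying mid-run. A sketch of such a guard (the function name and the 1 GiB default threshold are arbitrary examples):

```shell
# Abort early if the temp filesystem is nearly full. Threshold is in
# KiB; the 1 GiB default is an arbitrary example.
check_temp_space() {
  local dir="${TMPDIR:-/tmp}" min_kb="${1:-1048576}"
  local free_kb
  free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  if [ "$free_kb" -lt "$min_kb" ]; then
    echo "only ${free_kb} KiB free in ${dir}; need ${min_kb}" >&2
    return 1
  fi
}

# check_temp_space && duplicacy backup -stats
```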

Conclusion

Duplicacy's lock-free deduplication model enables multiple systems to safely back up to the same cloud storage repository simultaneously, with shared chunk deduplication reducing storage costs across the fleet. Supporting S3, B2, SFTP, and many other backends with cross-platform compatibility, it serves as an effective enterprise backup solution for Linux environments where multiple servers share common data sets.