Bash Scripts for Server Administration: Practical Automation Examples

Introduction

Bash scripting is a foundational skill for Linux system administration and DevOps automation. While modern tools like Ansible and Terraform excel at complex orchestration, Bash scripts remain essential for quick automation, routine maintenance tasks, and situations where installing additional tooling isn't practical. Every system administrator should master Bash scripting to automate repetitive tasks, implement monitoring solutions, and manage servers efficiently.

This comprehensive guide presents production-ready Bash scripts for common server administration tasks. Each script includes error handling, logging, and best practices that make it suitable for production environments.

Prerequisites

  • Basic Linux/Unix command line knowledge
  • Understanding of file permissions
  • Familiarity with common Linux commands
  • Text editor (vim, nano, or VS Code)
  • Bash 4.0+ installed
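
Before relying on 4.x features such as associative arrays, it is worth confirming the interpreter meets the Bash 4.0+ requirement; the built-in BASH_VERSINFO array makes this a one-line check:

```shell
# Abort early when the running Bash is older than 4.0
if (( BASH_VERSINFO[0] < 4 )); then
    echo "Bash 4.0+ required, found ${BASH_VERSION}" >&2
    exit 1
fi
bash_ok="yes"
echo "Bash ${BASH_VERSION} is supported"
```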

Essential Script Template

#!/bin/bash
set -euo pipefail

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
LOG_FILE="/var/log/$(basename "$0" .sh).log"   # /var/log requires root; adjust for unprivileged runs

# Logging functions
log_info() {
    echo "[INFO] $(date '+%Y-%m-%d %H:%M:%S') - $*" | tee -a "$LOG_FILE"
}

log_error() {
    echo "[ERROR] $(date '+%Y-%m-%d %H:%M:%S') - $*" | tee -a "$LOG_FILE" >&2
}

# Error handling
error_exit() {
    log_error "$1"
    exit "${2:-1}"
}

# Main function
main() {
    log_info "Script started"
    # Your code here
    log_info "Script completed"
}

main "$@"
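
The `${2:-1}` in error_exit relies on Bash's default-value expansion; a standalone illustration of the idiom:

```shell
# ${var:-default} expands to $var when it is set and non-empty,
# otherwise to default - this is how error_exit falls back to exit code 1
unset maybe_code
code="${maybe_code:-1}"
echo "$code"    # prints 1 while maybe_code is unset
maybe_code=42
code="${maybe_code:-1}"
echo "$code"    # prints 42 once maybe_code is set
```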

System Monitoring Scripts

Comprehensive Health Check

#!/bin/bash
# system-health-check.sh

set -euo pipefail

CPU_THRESHOLD=80
MEMORY_THRESHOLD=85
DISK_THRESHOLD=85

get_cpu_usage() {
    # Total CPU = 100 minus the idle percentage reported by top;
    # awk '{print $2}' alone would capture only user time
    top -bn1 | grep "Cpu(s)" | sed 's/.*, *\([0-9.]*\)[ %]*id.*/\1/' | awk '{print 100 - $1}'
}

get_memory_usage() {
    free | grep Mem | awk '{printf("%.0f", ($3/$2) * 100)}'
}

get_disk_usage() {
    df -h / | tail -1 | awk '{print $5}' | sed 's/%//'
}

check_metrics() {
    local alerts=()

    cpu=$(get_cpu_usage)
    if (( $(echo "$cpu > $CPU_THRESHOLD" | bc -l) )); then
        alerts+=("CPU: ${cpu}%")
    fi

    memory=$(get_memory_usage)
    if (( memory > MEMORY_THRESHOLD )); then
        alerts+=("Memory: ${memory}%")
    fi

    disk=$(get_disk_usage)
    if (( disk > DISK_THRESHOLD )); then
        alerts+=("Disk: ${disk}%")
    fi

    if [ ${#alerts[@]} -gt 0 ]; then
        echo "ALERTS: ${alerts[*]}" | mail -s "System Alert" admin@example.com
    fi
}

check_metrics
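
The threshold check shells out to bc for the floating-point CPU comparison; on minimal systems without bc, awk (available essentially everywhere) can do the same comparison. A sketch:

```shell
CPU_THRESHOLD=80
cpu="83.5"   # sample reading; in the script this comes from get_cpu_usage

# awk exits 0 when the expression is true, so it slots directly into if
if awk -v c="$cpu" -v t="$CPU_THRESHOLD" 'BEGIN { exit !(c > t) }'; then
    cpu_state="over"
else
    cpu_state="ok"
fi
echo "CPU is ${cpu_state} threshold"
```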

Automated Backup Script

#!/bin/bash
# automated-backup.sh

set -euo pipefail

BACKUP_SOURCES=(/var/www /etc /home)
BACKUP_DEST="/backup"
RETENTION_DAYS=7
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_NAME="backup_${TIMESTAMP}"

create_backup() {
    mkdir -p "${BACKUP_DEST}/${BACKUP_NAME}"

    for dir in "${BACKUP_SOURCES[@]}"; do
        if [ -d "$dir" ]; then
            tar -czf "${BACKUP_DEST}/${BACKUP_NAME}/$(basename "$dir").tar.gz" \
                "$dir" 2>&1 | tee -a /var/log/backup.log
        fi
    done

    # MySQL backup (assumes credentials via ~/.my.cnf or socket auth)
    mysqldump --all-databases | gzip > "${BACKUP_DEST}/${BACKUP_NAME}/mysql.sql.gz"

    # Create final archive
    cd "$BACKUP_DEST"
    tar -czf "${BACKUP_NAME}.tar.gz" "$BACKUP_NAME"
    rm -rf "$BACKUP_NAME"
}

cleanup_old_backups() {
    find "$BACKUP_DEST" -name "backup_*.tar.gz" -mtime +$RETENTION_DAYS -delete
}

main() {
    echo "Starting backup at $(date)"
    create_backup
    cleanup_old_backups
    echo "Backup completed at $(date)"
}

main
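
Backups are only as good as their restores; the tar calls above can be rehearsed end-to-end on scratch data (all paths here are throwaway):

```shell
# Round-trip a directory through tar the same way create_backup does
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
echo "important" > "$tmp/data/file.txt"

tar -czf "$tmp/backup.tar.gz" -C "$tmp" data      # create
mkdir "$tmp/restore"
tar -xzf "$tmp/backup.tar.gz" -C "$tmp/restore"   # restore

restored=$(cat "$tmp/restore/data/file.txt")
echo "restored content: $restored"
rm -rf "$tmp"
```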

User Management Scripts

User Provisioning

#!/bin/bash
# user-provisioning.sh

set -euo pipefail

create_user() {
    local username=$1
    local fullname=$2
    local ssh_key=$3

    if id "$username" &>/dev/null; then
        echo "User $username already exists"
        return 1
    fi

    useradd -m -s /bin/bash -c "$fullname" "$username"
    usermod -aG sudo,developers "$username"

    # Setup SSH
    mkdir -p "/home/$username/.ssh"
    echo "$ssh_key" > "/home/$username/.ssh/authorized_keys"
    chmod 700 "/home/$username/.ssh"
    chmod 600 "/home/$username/.ssh/authorized_keys"
    chown -R "$username:$username" "/home/$username/.ssh"

    echo "Created user: $username"
}

# Usage: ./user-provisioning.sh username "Full Name" "ssh-rsa AAA..."
if [ "$#" -ne 3 ]; then
    echo "Usage: $0 username \"Full Name\" \"ssh-public-key\"" >&2
    exit 1
fi

create_user "$@"
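
For onboarding many accounts at once, create_user can be driven from a CSV file; the file name and column layout here are hypothetical, and the demo only echoes because the real call needs root:

```shell
# Parse a hypothetical users.csv with lines of: username,Full Name,ssh-key
provision_from_csv() {
    local file=$1
    while IFS=, read -r username fullname ssh_key; do
        [ -z "$username" ] && continue      # skip blank lines
        echo "would provision: $username ($fullname)"
        # create_user "$username" "$fullname" "$ssh_key"   # real call, run as root
    done < "$file"
}

printf '%s\n' 'alice,Alice Adams,ssh-rsa KEY1' 'bob,Bob Brown,ssh-rsa KEY2' > users.csv
result=$(provision_from_csv users.csv)
echo "$result"
rm -f users.csv
```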

Security Scripts

Security Audit

#!/bin/bash
# security-audit.sh

set -euo pipefail

REPORT="/var/log/security-audit-$(date +%Y%m%d).txt"

# Run as root: /etc/shadow and parts of the filesystem scan need it

{
    echo "Security Audit - $(date)"
    echo "================================"

    echo -e "\nUsers without password:"
    awk -F: '($2 == "") {print $1}' /etc/shadow

    echo -e "\nUsers with UID 0:"
    awk -F: '($3 == 0) {print $1}' /etc/passwd

    echo -e "\nWorld-writable files:"
    find / -xdev -type f -perm -0002 2>/dev/null | head -20

    echo -e "\nSUID files:"
    find / -xdev -type f -perm -4000 2>/dev/null

    echo -e "\nListening ports:"
    ss -tuln

    echo -e "\nFirewall status:"
    ufw status verbose 2>/dev/null || iptables -L -n

} | tee "$REPORT"

echo "Audit complete: $REPORT"
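
The awk -F: filters in the audit key off field positions in /etc/passwd and /etc/shadow; running the same logic against a sample line makes the mechanics visible:

```shell
# Field 3 of /etc/passwd is the UID; the audit flags any UID-0 account
sample="root:x:0:0:root:/root:/bin/bash"
uid0_user=$(echo "$sample" | awk -F: '($3 == 0) {print $1}')
echo "$uid0_user"   # prints root
```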

Log Management

Log Rotation Script

#!/bin/bash
# log-rotation.sh

set -euo pipefail

LOG_DIRS=(/var/log/nginx /var/log/application)
COMPRESS_AGE=7
DELETE_AGE=30

for dir in "${LOG_DIRS[@]}"; do
    if [ -d "$dir" ]; then
        # Compress old logs
        find "$dir" -name "*.log" -mtime +$COMPRESS_AGE ! -name "*.gz" -exec gzip {} \;

        # Delete very old logs
        find "$dir" -name "*.gz" -mtime +$DELETE_AGE -delete
    fi
done

echo "Log rotation completed"
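
A rotation script like this is typically scheduled nightly via cron; a crontab entry might look like the following (the install path is an assumption):

```
# m  h  dom mon dow  command
30   2  *   *   *    /usr/local/bin/log-rotation.sh >> /var/log/log-rotation-cron.log 2>&1
```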

Performance Monitoring

Resource Monitor

#!/bin/bash
# resource-monitor.sh

set -euo pipefail

ALERT_EMAIL="admin@example.com"

check_resources() {
    # Total CPU = 100 minus top's idle percentage
    cpu=$(top -bn1 | grep "Cpu(s)" | sed 's/.*, *\([0-9.]*\)[ %]*id.*/\1/' | awk '{print 100 - $1}')
    memory=$(free | grep Mem | awk '{printf("%.1f", ($3/$2) * 100)}')
    disk=$(df -h / | tail -1 | awk '{print $5}' | sed 's/%//')

    report="Resource Report for $(hostname)
    CPU: ${cpu}%
    Memory: ${memory}%
    Disk: ${disk}%"

    echo "$report"

    # Alert if thresholds exceeded
    if (( $(echo "$cpu > 80" | bc -l) )); then
        echo "$report" | mail -s "High CPU Alert" "$ALERT_EMAIL"
    fi
}

check_resources

Deployment Scripts

Application Deployment

#!/bin/bash
# deploy-app.sh

set -euo pipefail

APP_DIR="/opt/myapp"
BACKUP_DIR="/opt/backups"
REPO_URL="https://github.com/company/app.git"
BRANCH="main"

deploy() {
    # Backup current version
    mkdir -p "$BACKUP_DIR"
    if [ -d "$APP_DIR" ]; then
        tar -czf "${BACKUP_DIR}/app-$(date +%Y%m%d_%H%M%S).tar.gz" "$APP_DIR"
    fi

    # Deploy new version
    if [ -d "$APP_DIR/.git" ]; then
        cd "$APP_DIR"
        git fetch origin
        git checkout "$BRANCH"
        git pull origin "$BRANCH"
    else
        git clone -b "$BRANCH" "$REPO_URL" "$APP_DIR"
    fi

    # Install dependencies
    cd "$APP_DIR"
    npm install --production

    # Restart service
    systemctl restart myapp

    # Verify deployment
    sleep 5
    if curl -f http://localhost:3000/health; then
        echo "Deployment successful"
    else
        echo "Deployment failed, rolling back"
        # Rollback logic here
        exit 1
    fi
}

deploy
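
The rollback placeholder in deploy() can be filled in by restoring the newest archive the backup step produced. A sketch using the paths defined above; since tar stripped the leading slash at creation time, extraction targets /:

```shell
# Restore the most recent app backup, then restart the service
APP_DIR="/opt/myapp"
BACKUP_DIR="/opt/backups"

rollback() {
    local latest
    latest=$(ls -t "${BACKUP_DIR}"/app-*.tar.gz 2>/dev/null | head -n1)
    if [ -z "$latest" ]; then
        echo "No backup available to roll back to" >&2
        return 1
    fi
    rm -rf "$APP_DIR"
    tar -xzf "$latest" -C /      # archive members are stored relative to /
    systemctl restart myapp
}
```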

Network Tools

Connection Monitor

#!/bin/bash
# network-monitor.sh

set -euo pipefail

HOSTS=("8.8.8.8" "1.1.1.1" "google.com")
LOG_FILE="/var/log/network-monitor.log"

check_connectivity() {
    for host in "${HOSTS[@]}"; do
        if ping -c 3 -W 2 "$host" &>/dev/null; then
            echo "[$(date)] OK: $host" >> "$LOG_FILE"
        else
            echo "[$(date)] FAILED: $host" >> "$LOG_FILE"
            echo "Network issue detected for $host" | mail -s "Network Alert" admin@example.com
        fi
    done
}

check_connectivity

Database Maintenance

Database Backup and Optimization

#!/bin/bash
# db-maintenance.sh

set -euo pipefail

DB_USER="backup_user"
DB_PASS="password"   # placeholder - prefer a mode-600 ~/.my.cnf over a hardcoded password
BACKUP_DIR="/backup/mysql"
RETENTION=7

# Backup all databases
backup_databases() {
    mkdir -p "$BACKUP_DIR"

    mysql -u"$DB_USER" -p"$DB_PASS" -e "SHOW DATABASES;" | \
        grep -Ev "^(Database|information_schema|performance_schema|mysql|sys)$" | \
        while read -r db; do
            mysqldump -u"$DB_USER" -p"$DB_PASS" \
                --single-transaction \
                "$db" | gzip > "${BACKUP_DIR}/${db}-$(date +%Y%m%d).sql.gz"
            echo "Backed up: $db"
        done
}

# Optimize tables
optimize_tables() {
    mysql -u"$DB_USER" -p"$DB_PASS" -e "SHOW DATABASES;" | \
        grep -Ev "^(Database|information_schema|performance_schema|mysql|sys)$" | \
        while read -r db; do
            mysqlcheck -u"$DB_USER" -p"$DB_PASS" --optimize "$db"
        done
}

# Cleanup old backups
cleanup() {
    find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION -delete
}

main() {
    backup_databases
    optimize_tables
    cleanup
}

main
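
Hardcoding DB_PASS also leaks it to anyone running ps and into shell history; the MySQL client tools read a [client] option file, which lets the -u/-p flags be dropped entirely. A sketch that writes a demo copy (in practice the file lives at ~/.my.cnf):

```shell
# Generate a client option file; mysql/mysqldump/mysqlcheck read ~/.my.cnf
# automatically, keeping credentials off the command line
cnf="./my.cnf.demo"      # demo path; use ~/.my.cnf for real
cat > "$cnf" <<'EOF'
[client]
user=backup_user
password=change_me
EOF
chmod 600 "$cnf"
echo "wrote $cnf"
```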

Best Practices

Error Handling

# Always use strict mode
set -euo pipefail

# Custom error handler
error_handler() {
    echo "Error on line $1" >&2
    cleanup_function   # define cleanup_function in your script, or drop this call
    exit 1
}

trap 'error_handler $LINENO' ERR
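
To watch the handler fire, the failing command can be run in a child bash so the demo doesn't take down the calling shell (cleanup_function is stubbed here):

```shell
# Run the trap pattern in a child shell and capture what it prints
output=$(bash -c '
set -euo pipefail
cleanup_function() { echo "cleaning up"; }
error_handler() { echo "Error on line $1"; cleanup_function; exit 1; }
trap "error_handler \$LINENO" ERR
false    # this failure triggers the ERR trap
' 2>&1) || true
echo "$output"
```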

Input Validation

validate_input() {
    local input=$1

    if [[ -z "$input" ]]; then
        echo "Error: Input required"
        return 1
    fi

    if [[ ! "$input" =~ ^[a-zA-Z0-9_-]+$ ]]; then
        echo "Error: Invalid characters"
        return 1
    fi

    return 0
}
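
The whitelist regex is the heart of the validator; a standalone run shows what passes and what gets rejected:

```shell
# Same character whitelist as validate_input
is_safe() { [[ "$1" =~ ^[a-zA-Z0-9_-]+$ ]]; }

is_safe "web-server_01" && safe1="accepted" || safe1="rejected"
is_safe "name; rm -rf /" && safe2="accepted" || safe2="rejected"
echo "$safe1 $safe2"   # accepted rejected
```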

Logging

LOG_FILE="/var/log/script.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

log "Starting process"

Conclusion

Bash scripting is essential for system administration automation. These practical examples demonstrate common patterns for monitoring, backup, user management, and maintenance tasks. Always test scripts in non-production environments first, implement proper error handling, and follow security best practices.

Key takeaways:

  • Use set -euo pipefail for safety
  • Implement comprehensive logging
  • Add error handling and validation
  • Schedule with cron or systemd timers
  • Version control your scripts
  • Document your code
  • Test thoroughly before production use

Continue improving your Bash skills by adapting these examples to your specific needs and building a library of reusable automation scripts.