Instatus Status Page Self-Hosted Alternative
Building a self-hosted status page gives you complete control over your uptime communications, subscriber data, and incident history without relying on third-party SaaS. This guide compares the leading self-hosted status page alternatives to Instatus, covers deployment architecture choices, and explains how to integrate status updates with your existing monitoring stack.
Prerequisites
- Ubuntu 20.04/22.04 or CentOS 8/Rocky Linux 8+
- Docker and Docker Compose
- A domain name and SSL certificate
- SMTP credentials for email notifications
- At least 1 GB RAM
Choosing a Self-Hosted Solution
Here is a practical comparison of self-hosted alternatives to Instatus:
| Solution | Stack | Database | Monitoring Built-in | API |
|---|---|---|---|---|
| Uptime Kuma | Node.js | SQLite | Yes | Yes |
| Cachet | PHP/Laravel | MySQL/PostgreSQL | No | Yes |
| Statping-ng | Go | SQLite/PostgreSQL | Yes | Yes |
| cState | Hugo (static) | None | No | No |
| Gatus | Go | None | Yes | Yes |
Recommendation matrix:
- Want monitoring + status page in one tool: Use Uptime Kuma or Statping-ng
- Want polished incident management + API: Use Cachet
- Want zero-maintenance static page: Use cState with GitHub Pages
- Want API-first with alerting: Use Gatus
Deploy Uptime Kuma with Status Page
Uptime Kuma is the most popular self-hosted option, combining monitoring and status page in one tool:
# Create project directory
mkdir -p /opt/uptime-kuma && cd /opt/uptime-kuma
cat > docker-compose.yml << 'EOF'
version: '3.7'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    ports:
      - "3001:3001"
    volumes:
      - kuma_data:/app/data
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
volumes:
  kuma_data:
EOF
docker-compose up -d
# Check logs
docker-compose logs -f uptime-kuma
Access the setup wizard at http://your-server:3001. Create an admin account, then configure monitors:
# Uptime Kuma status pages are configured via the UI:
# 1. Add monitors: Dashboard > Add New Monitor
#    - Monitor Type: HTTP(s), TCP Port, Ping, DNS, etc.
#    - URL: https://your-service.com
#    - Heartbeat Interval: 60 seconds
#    - Retry: 3 times before marking as down
#
# 2. Create status page: Status Pages > New Status Page
#    - Slug: status (the page is served at /status/<slug>)
#    - Add monitors to the status page groups
#    - Note: Uptime Kuma has no built-in email subscriber list; use its
#      notification providers (configured below) for alerting
# Uptime Kuma has no official REST API; automation goes through its
# Socket.IO interface or the push monitor type (covered below)
KUMA_URL="http://localhost:3001"
# Use the web interface for setup
Deploy Gatus for an API-first approach:
mkdir -p /opt/gatus && cd /opt/gatus
# Create Gatus configuration
cat > config.yaml << 'EOF'
web:
  port: 8080
storage:
  type: sqlite
  path: /data/gatus.db
endpoints:
  - name: API Gateway
    url: "https://api.example.com/health"
    interval: 30s
    conditions:
      - "[STATUS] == 200"
      - "[RESPONSE_TIME] < 300"
    alerts:
      - type: slack
        failure-threshold: 2
        success-threshold: 1
  - name: Main Website
    url: "https://example.com"
    interval: 60s
    conditions:
      - "[STATUS] == 200"
      - "[CERTIFICATE_EXPIRATION] > 72h"
alerting:
  slack:
    webhook-url: "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
    default-alert:
      failure-threshold: 2
      success-threshold: 2
EOF
cat > docker-compose.yml << 'EOF'
version: '3.7'
services:
  gatus:
    image: twinproduction/gatus:latest
    ports:
      - "8080:8080"
    volumes:
      - ./config.yaml:/config/config.yaml
      - gatus_data:/data
    restart: unless-stopped
volumes:
  gatus_data:
EOF
docker-compose up -d
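To build intuition for how Gatus conditions like `[STATUS] == 200` behave, here is a minimal Python sketch of a condition evaluator. This is an illustration only, not Gatus's actual implementation; the placeholder names mirror the ones used in the config above.

```python
# Minimal sketch (not Gatus's real code): evaluate condition strings
# such as "[RESPONSE_TIME] < 300" against a check result dictionary.
import operator

OPS = {"==": operator.eq, "!=": operator.ne,
       "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def evaluate_condition(condition: str, result: dict) -> bool:
    """Evaluate e.g. '[STATUS] == 200' against result values."""
    placeholder, op, expected = condition.split(maxsplit=2)
    key = placeholder.strip("[]")          # "[STATUS]" -> "STATUS"
    actual = result[key]
    # Coerce the expected value to the type of the actual value
    return OPS[op](actual, type(actual)(expected))

result = {"STATUS": 200, "RESPONSE_TIME": 120}
print(all(evaluate_condition(c, result)
          for c in ["[STATUS] == 200", "[RESPONSE_TIME] < 300"]))  # True
```

An endpoint is healthy only when every condition evaluates true, which is exactly how the `conditions` list in the config is combined.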
API-Driven Status Updates
Update your status page programmatically from scripts or monitoring tools:
# === Uptime Kuma API (via unofficial API) ===
# Uptime Kuma uses a Socket.IO-based API
# For HTTP-based automation, use the push monitor type
# Create a "Push" monitor in Uptime Kuma:
# Monitor Type: Push
# This generates a push URL like:
# http://kuma.example.com/api/push/<token>?status=up&msg=OK&ping=1
# Send a heartbeat from your application
KUMA_PUSH_URL="http://localhost:3001/api/push/your-token"
curl -s "${KUMA_PUSH_URL}?status=up&msg=OK&ping=50"
# Mark as down
curl -s "${KUMA_PUSH_URL}?status=down&msg=Service+check+failed"
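If you want to send heartbeats from application code rather than curl, the URL construction can be wrapped in a small helper. The token and base URL below are placeholders for the values Uptime Kuma generates for your push monitor.

```python
# Sketch of a push-monitor URL builder; "your-token" is a placeholder
# for the token Uptime Kuma generates when you create a Push monitor.
from urllib.parse import urlencode

def build_push_url(base, token, status="up", msg="OK", ping=None):
    """Build the Uptime Kuma push-monitor heartbeat URL."""
    params = {"status": status, "msg": msg}
    if ping is not None:
        params["ping"] = ping  # reported response time in ms
    return f"{base}/api/push/{token}?{urlencode(params)}"

url = build_push_url("http://localhost:3001", "your-token", ping=50)
print(url)  # http://localhost:3001/api/push/your-token?status=up&msg=OK&ping=50
```

Fetching the returned URL (e.g. with `urllib.request.urlopen`) from a cron job or an application health hook is equivalent to the curl calls above.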
# === Cachet API for status updates ===
API_URL="https://status.example.com/api/v1"
API_TOKEN="your-cachet-token"
# Automated component status script
update_component_status() {
  local component_id="$1"
  local status="$2"  # 1=Operational, 2=Performance Issues, 3=Partial Outage, 4=Major Outage
  curl -s -X PUT "${API_URL}/components/${component_id}" \
    -H "X-Cachet-Token: ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"status\": ${status}}"
}
# Check a service and update Cachet
check_and_update() {
  local url="$1"
  local component_id="$2"
  HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "$url")
  if [ "$HTTP_CODE" = "200" ]; then
    update_component_status "$component_id" 1
  else
    update_component_status "$component_id" 4
    # Create an incident (incident status 1 = Investigating)
    curl -s -X POST "${API_URL}/incidents" \
      -H "X-Cachet-Token: ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{\"name\": \"Service Outage\", \"message\": \"HTTP ${HTTP_CODE} from ${url}\", \"status\": 1, \"component_id\": ${component_id}, \"component_status\": 4, \"visible\": 1}"
  fi
}
# Run every minute from cron, e.g. via a wrapper script:
# * * * * * /usr/local/bin/status-check.sh >> /var/log/status-check.log 2>&1
check_and_update "https://api.example.com/health" 1
check_and_update "https://example.com" 2
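The mapping logic in the shell script above can also be mirrored in Python, which makes it easy to unit-test the status decisions before wiring them to real API calls. The helper names below are illustrative, not part of the Cachet API.

```python
# Pure-Python mirror of the shell check_and_update logic, for testing
# the status mapping without making network calls.
def cachet_status_for(http_code):
    """Map an HTTP response code to a Cachet component status."""
    # 1 = Operational, 4 = Major Outage (2/3 = Performance/Partial)
    return 1 if http_code == 200 else 4

def incident_payload(name, url, http_code, component_id):
    """Build the JSON body for POST /incidents when a check fails."""
    return {
        "name": name,
        "message": f"HTTP {http_code} from {url}",
        "status": 1,            # incident status 1 = Investigating
        "component_id": component_id,
        "component_status": 4,  # Major Outage
        "visible": 1,
    }

print(cachet_status_for(503))  # 4
```

Keeping the decision logic separate from the HTTP plumbing also makes it trivial to add more nuanced rules later, e.g. treating timeouts differently from 5xx responses.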
Subscriber Notifications
Configure email notifications in Uptime Kuma:
# Via Uptime Kuma UI:
# Settings > Notifications > Setup Notification
# Type: Email (SMTP)
# SMTP Host: smtp.sendgrid.net
# SMTP Port: 587
# Username: apikey
# Password: your-sendgrid-key
# To Email: [email protected]
# From Name: Status Monitor
# Test the notification from the Notifications settings page
# For Slack notifications in Uptime Kuma:
# Settings > Notifications > Add New
# Type: Slack
# Webhook URL: https://hooks.slack.com/services/...
# For PagerDuty integration:
# Settings > Notifications > Add New
# Type: PagerDuty
# Integration Key: your-pagerduty-routing-key
Integration with Monitoring Stacks
Connect your status page with Prometheus/Alertmanager or other monitoring tools:
# Prometheus Alertmanager webhook → Cachet
# Add a webhook receiver to alertmanager.yml (merge this into the
# existing receivers: list and reference "cachet-webhook" from a route;
# simply appending it would duplicate the top-level receivers: key)
cat >> /etc/prometheus/alertmanager.yml << 'EOF'
receivers:
  - name: cachet-webhook
    webhook_configs:
      - url: http://localhost:5000/alertmanager-to-cachet
        send_resolved: true
EOF
# Create a simple translation webhook (Python Flask)
mkdir -p /opt/alert-bridge
cat > /opt/alert-bridge/app.py << 'EOF'
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

CACHET_URL = "https://status.example.com/api/v1"
CACHET_TOKEN = "your-token"

# Map Alertmanager alert names to Cachet component IDs
COMPONENT_MAP = {
    "ApiHighLatency": 1,
    "DatabaseDown": 2,
    "WebsiteDown": 3
}

@app.route("/alertmanager-to-cachet", methods=["POST"])
def handle_alert():
    data = request.json
    for alert in data.get("alerts", []):
        alert_name = alert["labels"].get("alertname", "")
        status = alert.get("status", "")
        component_id = COMPONENT_MAP.get(alert_name)
        if component_id:
            # firing -> Major Outage (4), resolved -> Operational (1)
            component_status = 4 if status == "firing" else 1
            requests.put(
                f"{CACHET_URL}/components/{component_id}",
                headers={"X-Cachet-Token": CACHET_TOKEN},
                json={"status": component_status}
            )
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
EOF
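The core of the bridge is the payload-to-update transform, which can be extracted as a pure function and unit-tested without running Flask or touching the Cachet API. The function name below is illustrative.

```python
# The bridge's transform logic as a pure, testable function: turn an
# Alertmanager webhook body into (component_id, status) update pairs.
def cachet_updates(payload, component_map):
    """Return (component_id, status) pairs for an Alertmanager webhook body."""
    updates = []
    for alert in payload.get("alerts", []):
        name = alert.get("labels", {}).get("alertname", "")
        component_id = component_map.get(name)
        if component_id:
            # firing -> Major Outage (4), resolved -> Operational (1)
            status = 4 if alert.get("status") == "firing" else 1
            updates.append((component_id, status))
    return updates

payload = {"alerts": [
    {"labels": {"alertname": "DatabaseDown"}, "status": "firing"},
    {"labels": {"alertname": "WebsiteDown"}, "status": "resolved"},
]}
print(cachet_updates(payload, {"DatabaseDown": 2, "WebsiteDown": 3}))
# [(2, 4), (3, 1)]
```

With this shape, the Flask handler reduces to iterating over the returned pairs and issuing one PUT per pair, and alerts with no mapped component are silently skipped.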
Maintenance Windows
Schedule maintenance windows via the API or UI:
# Cachet maintenance window via API
API_URL="https://status.example.com/api/v1"
API_TOKEN="your-token"
# Schedule a maintenance window (Cachet 2.4+ exposes /schedules)
curl -X POST "${API_URL}/schedules" \
  -H "X-Cachet-Token: ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Database Server Upgrade",
    "message": "We will be upgrading the database to PostgreSQL 16. Write operations will be unavailable for approximately 5 minutes.",
    "status": 1,
    "scheduled_at": "2024-02-01 02:00:00",
    "completed_at": "2024-02-01 04:00:00",
    "components": {"1": 4, "2": 4},
    "notify": true
  }'
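Before POSTing a window like the one above, it is worth validating the timestamps locally, since a window that ends before it starts or uses the wrong format will produce a confusing API error. This is a hedged sketch; the timestamp format matches the one used in the request body above.

```python
# Sketch: sanity-check a maintenance-window payload before sending it.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # the timestamp format used in the payload above

def validate_schedule(payload):
    """Return True if the schedule window parses and ends after it starts."""
    try:
        start = datetime.strptime(payload["scheduled_at"], FMT)
        end = datetime.strptime(payload["completed_at"], FMT)
    except (KeyError, ValueError):
        return False
    return end > start

print(validate_schedule({"scheduled_at": "2024-02-01 02:00:00",
                         "completed_at": "2024-02-01 04:00:00"}))  # True
```

Running this check in whatever script assembles the payload costs nothing and catches the two most common mistakes (inverted windows and ISO-8601 "T"-separated timestamps) before they reach the API.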
# For Uptime Kuma:
# Maintenance > Add New Maintenance
# Set start/end time
# Select affected monitors
# Optionally notify subscribers
Nginx Reverse Proxy and SSL
# Install Nginx and Certbot
sudo apt install -y nginx certbot python3-certbot-nginx
# Create status page vhost (the ssl_certificate paths below are created
# by the certbot step; if nginx -t fails before that, comment out the
# 443 server block, obtain the certificate, then restore it)
sudo tee /etc/nginx/sites-available/status > /dev/null << 'EOF'
server {
    listen 80;
    server_name status.example.com;

    # Redirect all HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name status.example.com;

    ssl_certificate /etc/letsencrypt/live/status.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/status.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://127.0.0.1:3001;  # Uptime Kuma port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support (required for Uptime Kuma)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/status /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
# Obtain SSL certificate
sudo certbot --nginx -d status.example.com
Troubleshooting
Uptime Kuma shows monitors as offline after restart:
# Check container logs
docker-compose logs --tail=50 uptime-kuma
# Verify data volume is intact
docker volume inspect uptime-kuma_kuma_data
# Restart container
docker-compose restart uptime-kuma
Notifications not sending:
# Test notification from Settings > Notifications > Test
# Check DNS resolution from the container (ping may not be installed in the image)
docker exec uptime-kuma getent hosts smtp.sendgrid.net
# Verify SMTP credentials
docker exec -it -w /app uptime-kuma node -e "
const nodemailer = require('nodemailer');
const t = nodemailer.createTransport({host:'smtp.sendgrid.net', port:587, auth:{user:'apikey', pass:'your-key'}});
t.verify((err,ok) => console.log(err||'OK'));
"
High memory usage:
# Limit container memory
# In docker-compose.yml, add under the service:
#   mem_limit: 512m
# (or deploy.resources.limits.memory when using "docker compose" v2 / Swarm)
# For SQLite: run VACUUM periodically (requires the sqlite3 binary inside
# the container; alternatively copy kuma.db out and vacuum it on the host)
docker exec uptime-kuma sqlite3 /app/data/kuma.db "VACUUM;"
Conclusion
Self-hosting your status page gives you full ownership of incident data and subscriber information without per-seat costs. Uptime Kuma is the best single-tool solution combining monitoring and status page, Cachet excels at API-driven incident management, and cState is ideal for teams wanting a zero-maintenance static page. Integrate with your monitoring stack via webhooks and APIs to automate status updates and reduce manual intervention during incidents.


