FastAPI Application Deployment on Linux
FastAPI is a modern, high-performance Python web framework for building APIs. Deploying it in production on Linux typically combines Uvicorn or Gunicorn as the ASGI server, Nginx as a reverse proxy, and systemd for process management. This guide covers production deployment of FastAPI applications, including server configuration, SSL termination, Docker deployment, and performance optimization.
Prerequisites
- Ubuntu 20.04+ or CentOS/Rocky 8+ with root access
- Python 3.8+
- Domain name pointed to your server (for SSL)
- Your FastAPI application code
Setting Up the Application Environment
# Install Python and pip
sudo apt update && sudo apt install -y python3 python3-pip python3-venv # Ubuntu
sudo dnf install -y python3 python3-pip # CentOS/Rocky
# Create application user (don't run as root)
sudo useradd -r -m -d /opt/myapi -s /bin/bash myapi
# Switch to app directory
sudo mkdir -p /opt/myapi/app
sudo chown -R myapi:myapi /opt/myapi
# As the myapi user, create virtual environment
sudo -u myapi bash -c "cd /opt/myapi && python3 -m venv venv"
# Install FastAPI and server dependencies
sudo -u myapi bash -c "
cd /opt/myapi
source venv/bin/activate
pip install fastapi uvicorn[standard] gunicorn
"
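Before wiring anything into systemd, it can help to confirm that the venv's interpreter actually sees the installed packages. A minimal sketch (run it with /opt/myapi/venv/bin/python; the package names match the pip install above):

```python
# check_deps.py - verify that required packages are importable
import importlib.util

def missing_packages(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    required = ["fastapi", "uvicorn", "gunicorn"]
    missing = missing_packages(required)
    print("missing:", ", ".join(missing) if missing else "none")
```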
Create a sample FastAPI app (or deploy your own):
sudo tee /opt/myapi/app/main.py << 'EOF'
from fastapi import FastAPI
app = FastAPI(title="My API", version="1.0.0")
@app.get("/")
async def root():
return {"message": "Hello World"}
@app.get("/health")
async def health():
return {"status": "healthy"}
EOF
sudo chown myapi:myapi /opt/myapi/app/main.py
Create environment file:
sudo mkdir -p /etc/myapi
sudo tee /etc/myapi/env << 'EOF'
ENV=production
DATABASE_URL=postgresql://user:pass@localhost/mydb
SECRET_KEY=your-secret-key-here
EOF
sudo chmod 600 /etc/myapi/env
sudo chown myapi:myapi /etc/myapi/env
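systemd's EnvironmentFile format is plain KEY=VALUE lines, which makes the same file easy to reuse from scripts or local tooling. A simplified parser sketch (systemd's actual rules also handle quoting and continuation lines, which this ignores):

```python
def parse_env_file(text):
    """Parse KEY=VALUE lines; skip blanks and '#' comments.

    Simplified relative to systemd's EnvironmentFile rules
    (no quoting, escapes, or line continuations).
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            env[key.strip()] = value.strip()
    return env

sample = """\
# production settings
ENV=production
DATABASE_URL=postgresql://user:pass@localhost/mydb
SECRET_KEY=your-secret-key-here
"""
print(parse_env_file(sample)["ENV"])  # -> production
```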
Running with Uvicorn
Uvicorn is the ASGI server that runs FastAPI:
# Test run (as myapi user)
sudo -u myapi bash -c "
cd /opt/myapi
source venv/bin/activate
uvicorn app.main:app --host 0.0.0.0 --port 8000
"
# Uvicorn options:
# --workers 4        # Multiple worker processes (each runs its own event loop)
# --reload # Auto-reload on code changes (dev only)
# --log-level info # Logging level
# --access-log # Enable access logging
# --proxy-headers # Trust X-Forwarded-For from proxy
Production Setup with Gunicorn and Uvicorn Workers
Gunicorn manages multiple Uvicorn worker processes for true multi-process concurrency:
# Create Gunicorn config
sudo tee /opt/myapi/gunicorn.conf.py << 'EOF'
# Gunicorn configuration for FastAPI
import multiprocessing
# Server socket
bind = "127.0.0.1:8000"
backlog = 2048
# Worker processes
# Rule of thumb: (2 * CPU cores) + 1
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "uvicorn.workers.UvicornWorker"
worker_connections = 1000
timeout = 30
keepalive = 2
# Logging
accesslog = "/var/log/myapi/access.log"
errorlog = "/var/log/myapi/error.log"
loglevel = "info"
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" %(D)s'
# Process naming
proc_name = "myapi"
# Graceful shutdown
graceful_timeout = 30
# Preload app before forking for faster worker spawning and shared memory
# Note: with preload_app, a HUP reload does not pick up code changes
preload_app = True
EOF
sudo chown myapi:myapi /opt/myapi/gunicorn.conf.py
sudo mkdir -p /var/log/myapi
sudo chown myapi:myapi /var/log/myapi
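The (2 * cores) + 1 rule of thumb from the config above can be sketched as a small helper. The cap below is an illustrative safeguard of my own, not from this guide: each preloaded worker still costs memory, so very large machines may not want dozens of workers.

```python
import multiprocessing

def worker_count(cores=None, cap=16):
    """Gunicorn's classic (2 * cores) + 1 worker formula, optionally capped."""
    if cores is None:
        cores = multiprocessing.cpu_count()
    return min(2 * cores + 1, cap)

print(worker_count(cores=4))   # -> 9
print(worker_count(cores=32))  # -> 16 (capped)
```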
Test Gunicorn startup:
sudo -u myapi bash -c "
cd /opt/myapi
source venv/bin/activate
gunicorn -c gunicorn.conf.py app.main:app
"
systemd Service Configuration
sudo tee /etc/systemd/system/myapi.service << 'EOF'
[Unit]
Description=FastAPI Application (myapi)
After=network.target
Wants=network.target
[Service]
Type=simple
User=myapi
Group=myapi
WorkingDirectory=/opt/myapi
# Load environment variables
EnvironmentFile=/etc/myapi/env
# Start command
ExecStart=/opt/myapi/venv/bin/gunicorn \
-c /opt/myapi/gunicorn.conf.py \
app.main:app
# Graceful reload (sends SIGHUP)
ExecReload=/bin/kill -s HUP $MAINPID
# Restart on failure
Restart=on-failure
RestartSec=5s
StartLimitBurst=3
StartLimitIntervalSec=60s
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/var/log/myapi /tmp
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=myapi
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapi.service
sudo systemctl status myapi.service
Manage the service:
# View logs
journalctl -u myapi.service -f
journalctl -u myapi.service -n 100
# Graceful reload (sends SIGHUP to Gunicorn); note that with
# preload_app = True, code changes still require a full restart
sudo systemctl reload myapi.service
# Full restart
sudo systemctl restart myapi.service
# Check if running
curl http://127.0.0.1:8000/health
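The curl check above is easy to script for deploy pipelines. A small stdlib-only sketch that treats any connection error or non-200 response as unhealthy:

```python
import http.client

def check_health(host="127.0.0.1", port=8000, path="/health", timeout=2.0):
    """Return True if GET <path> answers 200 within the timeout."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", path)
        return conn.getresponse().status == 200
    except OSError:
        # Connection refused, timeout, DNS failure, etc.
        return False
    finally:
        conn.close()

print(check_health())  # True once the service is up
```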
Nginx Reverse Proxy
sudo apt install nginx # Ubuntu
sudo dnf install nginx # CentOS/Rocky
sudo tee /etc/nginx/sites-available/myapi << 'EOF'
upstream fastapi_backend {
server 127.0.0.1:8000;
keepalive 64;
}
server {
listen 80;
server_name api.example.com;
# Redirect HTTP to HTTPS
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
# Security headers
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options DENY;
add_header X-XSS-Protection "1; mode=block";
# Proxy settings
location / {
proxy_pass http://fastapi_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 60s;
proxy_connect_timeout 10s;
}
# Serve FastAPI docs (optional - disable in production if not needed)
location /docs {
proxy_pass http://fastapi_backend;
proxy_set_header Host $host;
}
# Health check endpoint
location /health {
proxy_pass http://fastapi_backend;
access_log off;
}
}
EOF
# Enable the site (Ubuntu; on CentOS/Rocky there is no sites-available,
# so place the config in /etc/nginx/conf.d/myapi.conf instead)
sudo ln -s /etc/nginx/sites-available/myapi /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
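Because Nginx sets X-Forwarded-For and X-Forwarded-Proto, the app otherwise sees 127.0.0.1 as every client's address; trusting those headers from a known proxy is what Uvicorn's --proxy-headers flag does. A sketch of the underlying logic for recovering the real client IP:

```python
def client_ip(headers, peer_addr, trusted_proxies=("127.0.0.1",)):
    """Resolve the client IP behind a reverse proxy.

    Only trusts X-Forwarded-For when the direct peer is a known proxy;
    the leftmost entry is the original client. Header values are
    attacker-controlled, so never trust them from arbitrary peers.
    """
    if peer_addr in trusted_proxies:
        forwarded = headers.get("x-forwarded-for", "")
        if forwarded:
            return forwarded.split(",")[0].strip()
    return peer_addr

# Request relayed through the local Nginx proxy
print(client_ip({"x-forwarded-for": "203.0.113.7"}, "127.0.0.1"))    # -> 203.0.113.7
# Direct request from an untrusted peer: the header is ignored
print(client_ip({"x-forwarded-for": "203.0.113.7"}, "198.51.100.9"))  # -> 198.51.100.9
```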
SSL with Let's Encrypt
# Install Certbot
sudo apt install certbot python3-certbot-nginx # Ubuntu
sudo dnf install certbot python3-certbot-nginx # CentOS/Rocky
# Obtain certificate
sudo certbot --nginx -d api.example.com
# Test automatic renewal
sudo certbot renew --dry-run
# Certbot auto-renews via systemd timer (check it's active)
sudo systemctl status certbot.timer
Docker Deployment
# Create Dockerfile
sudo tee /opt/myapi/Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first (layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application
COPY app/ ./app/
COPY gunicorn.conf.py .
# Create non-root user
RUN useradd -r -u 1001 appuser
USER appuser
EXPOSE 8000
# Override bind address and log paths from gunicorn.conf.py: inside the
# container we must listen on 0.0.0.0, and /var/log/myapi does not exist,
# so log to stdout/stderr instead (CLI flags take precedence over the config file)
CMD ["gunicorn", "-c", "gunicorn.conf.py", "-b", "0.0.0.0:8000", "--access-logfile", "-", "--error-logfile", "-", "app.main:app"]
EOF
# Create requirements.txt
sudo tee /opt/myapi/requirements.txt << 'EOF'
fastapi==0.111.0
uvicorn[standard]==0.30.0
gunicorn==22.0.0
EOF
# Build and run
cd /opt/myapi
docker build -t myapi:latest .
docker run -d \
--name myapi \
--restart unless-stopped \
-p 127.0.0.1:8000:8000 \
--env-file /etc/myapi/env \
myapi:latest
# Check logs
docker logs myapi -f
Troubleshooting
Gunicorn workers dying with timeout errors:
# Increase timeout in gunicorn.conf.py, e.g. timeout = 60
# Note: UvicornWorker runs an async event loop, so a blocking call in a
# handler stalls every request on that worker; fix the blocking call
# (or move it to a threadpool) rather than only raising the timeout
sudo systemctl restart myapi.service
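Worker timeouts often trace back to a single slow awaited call inside a handler. One pattern (an assumed application-level fix, not something this guide's config does for you) is bounding such calls with asyncio.wait_for so the request fails fast instead of hanging until Gunicorn kills the worker:

```python
import asyncio

async def slow_backend(delay):
    """Stand-in for a slow database or upstream HTTP call."""
    await asyncio.sleep(delay)
    return "ok"

async def bounded_call(delay, limit=0.1):
    """Bound an awaitable; return a fallback instead of hanging."""
    try:
        return await asyncio.wait_for(slow_backend(delay), timeout=limit)
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(bounded_call(0.01)))             # -> ok
print(asyncio.run(bounded_call(1.0, limit=0.05)))  # -> timed out
```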
502 Bad Gateway from Nginx:
# Check if FastAPI is running
sudo systemctl status myapi.service
curl http://127.0.0.1:8000/health
# Check Nginx error log
sudo tail -50 /var/log/nginx/error.log
# Verify socket is listening
ss -tlnp | grep 8000
App slow or CPU high under load:
# Increase worker count
# Edit gunicorn.conf.py: workers = 8
# Monitor worker performance
sudo systemctl reload myapi.service
journalctl -u myapi.service -f
# Profile the running app with py-spy (install into the venv)
/opt/myapi/venv/bin/pip install py-spy
# pgrep lists the Gunicorn master (lowest PID) first; attach to a worker PID
sudo py-spy top --pid $(pgrep -f gunicorn | tail -1)
Environment variables not loading:
sudo systemctl show myapi.service | grep Environment
# Verify EnvironmentFile path and permissions
sudo cat /etc/myapi/env
sudo systemctl daemon-reload && sudo systemctl restart myapi.service
Conclusion
Deploying FastAPI in production on Linux combines Gunicorn (process manager) with Uvicorn workers (ASGI runtime), Nginx (reverse proxy and SSL termination), and systemd (service lifecycle management). The (2 * cores) + 1 worker formula balances CPU usage with concurrency. Use systemctl reload for zero-downtime configuration changes, and monitor the application with journalctl -u myapi.service -f to catch issues early.