n8n Workflow Automation Advanced Configuration
n8n is a self-hosted workflow automation platform that connects apps and services through a visual node-based interface. This guide covers advanced n8n configuration including custom nodes, webhook triggers, credential management, error handling, sub-workflows, queue mode, and production scaling on Linux.
Prerequisites
- Docker and Docker Compose installed
- PostgreSQL (recommended over SQLite for production)
- Redis (required for queue mode)
- A domain name with SSL for webhook endpoints
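The compose file in the next section needs a random 32-character value for N8N_ENCRYPTION_KEY; one quick way to generate it, assuming OpenSSL is available:

```shell
# 16 random bytes, hex-encoded, yields exactly 32 characters
openssl rand -hex 16
```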
Docker Deployment for Production
# docker-compose.yml
version: '3'

services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: n8n-db-password
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 5s
      retries: 10

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]

  n8n:
    image: n8nio/n8n:latest
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: n8n-db-password
      N8N_ENCRYPTION_KEY: "your-32-char-random-encryption-key"
      N8N_HOST: n8n.yourdomain.com
      N8N_PORT: 5678
      N8N_PROTOCOL: https
      WEBHOOK_URL: https://n8n.yourdomain.com
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      QUEUE_BULL_REDIS_PORT: 6379
      N8N_BASIC_AUTH_ACTIVE: "true"
      N8N_BASIC_AUTH_USER: admin
      N8N_BASIC_AUTH_PASSWORD: "SecureAdminPassword!"
      EXECUTIONS_DATA_PRUNE: "true"
      EXECUTIONS_DATA_MAX_AGE: 336  # 14 days in hours
    volumes:
      - n8n_data:/home/node/.n8n
      - n8n_custom:/home/node/.n8n/nodes  # Custom nodes
    restart: unless-stopped

  # Worker process for queue mode
  n8n-worker:
    image: n8nio/n8n:latest
    depends_on:
      - n8n
    command: worker
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: n8n-db-password
      N8N_ENCRYPTION_KEY: "your-32-char-random-encryption-key"
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
    volumes:
      - n8n_data:/home/node/.n8n
    restart: unless-stopped

volumes:
  pg_data:
  n8n_data:
  n8n_custom:
docker compose up -d
docker compose logs n8n --tail 50 -f
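Once the containers start, n8n's built-in /healthz liveness endpoint can confirm the instance is ready before you activate workflows. A small helper sketch, assuming the port mapping above:

```shell
# Poll n8n's liveness endpoint until it responds, for up to ~60 seconds
wait_for_n8n() {
  local url=${1:-http://localhost:5678/healthz}
  for _ in $(seq 1 30); do
    curl -fsS "$url" >/dev/null 2>&1 && { echo "n8n is up"; return 0; }
    sleep 2
  done
  echo "n8n did not become healthy" >&2
  return 1
}
```

Usage: `wait_for_n8n && docker compose logs n8n --tail 20`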
Custom Nodes
Create custom nodes for integrations n8n doesn't support out of the box:
# Create custom node directory structure
mkdir -p /home/node/.n8n/nodes/n8n-nodes-mycompany/nodes/MyService
# Or work locally and map into the container
mkdir -p ./custom-nodes/n8n-nodes-myservice
cd ./custom-nodes/n8n-nodes-myservice
Create a basic custom node:
// nodes/MyService/MyService.node.ts
import {
  IExecuteFunctions,
  INodeExecutionData,
  INodeType,
  INodeTypeDescription,
  NodeOperationError,
} from 'n8n-workflow';

export class MyService implements INodeType {
  description: INodeTypeDescription = {
    displayName: 'My Service',
    name: 'myService',
    icon: 'fa:plug',
    group: ['transform'],
    version: 1,
    description: 'Interact with My Service API',
    defaults: { name: 'My Service' },
    inputs: ['main'],
    outputs: ['main'],
    credentials: [
      {
        name: 'myServiceApi',
        required: true,
      },
    ],
    properties: [
      {
        displayName: 'Operation',
        name: 'operation',
        type: 'options',
        options: [
          { name: 'Get Data', value: 'getData' },
          { name: 'Send Data', value: 'sendData' },
        ],
        default: 'getData',
      },
      {
        displayName: 'Resource ID',
        name: 'resourceId',
        type: 'string',
        default: '',
        displayOptions: {
          show: { operation: ['getData'] },
        },
      },
    ],
  };

  async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    const operation = this.getNodeParameter('operation', 0) as string;
    const credentials = await this.getCredentials('myServiceApi');

    if (operation === 'getData') {
      const resourceId = this.getNodeParameter('resourceId', 0) as string;
      const response = await this.helpers.request({
        method: 'GET',
        url: `${credentials.apiUrl}/resources/${resourceId}`,
        headers: { Authorization: `Bearer ${credentials.apiKey}` },
        json: true,
      });
      return [[{ json: response }]];
    }

    throw new NodeOperationError(this.getNode(), `Unknown operation: ${operation}`);
  }
}
# Build and install the custom node
cd ./custom-nodes/n8n-nodes-myservice
npm install
npm run build
# Copy to n8n custom nodes directory
cp -r . /path/to/n8n/nodes/
# Restart n8n to load the new node
docker compose restart n8n
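n8n only loads a custom node package whose package.json declares the compiled files under an "n8n" attribute; without it the node silently never appears. A minimal example matching the node sketched above (package and file names are illustrative):

```shell
# Write a minimal package.json that registers the compiled node with n8n
cat > package.json <<'EOF'
{
  "name": "n8n-nodes-myservice",
  "version": "0.1.0",
  "keywords": ["n8n-community-node-package"],
  "main": "index.js",
  "n8n": {
    "n8nNodesApiVersion": 1,
    "nodes": ["dist/nodes/MyService/MyService.node.js"]
  }
}
EOF
```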
Webhook Triggers
n8n workflows can receive HTTP requests through the Webhook trigger node and, with the Respond to Webhook node, return custom responses:
Standard Webhook
The webhook node listens for incoming HTTP requests:
- Add a Webhook node to your workflow
- Set HTTP Method (POST, GET, etc.)
- Set Path (e.g., my-workflow)
- Toggle the workflow to Active — n8n registers the webhook
# Trigger the webhook
curl -X POST "https://n8n.yourdomain.com/webhook/my-workflow" \
-H "Content-Type: application/json" \
-d '{"event": "user.signup", "userId": 42, "email": "[email protected]"}'
# Webhook URL format:
# https://n8n.yourdomain.com/webhook/{path}
# https://n8n.yourdomain.com/webhook-test/{path} (test mode)
Respond to Webhook Node
Add a Respond to Webhook node for synchronous responses:
// Respond to Webhook node configuration:
{
  "respondWith": "json",
  "responseBody": "={{ {\"status\": \"processed\", \"id\": $json[\"userId\"]} }}"
}
Credential Management
# Credentials are stored encrypted using N8N_ENCRYPTION_KEY
# Manage via UI: Credentials → Add Credential
# Export credentials (for migration/backup)
# Settings → n8n API → Create API key
export N8N_API_KEY="your-n8n-api-key"
# The public API can create and delete credentials but does not list them
# or export their secrets; verify API access against the workflows endpoint
curl "https://n8n.yourdomain.com/api/v1/workflows" \
  -H "X-N8N-API-KEY: $N8N_API_KEY"
# Environment variable injection for sensitive credentials
# In docker-compose.yml:
# environment:
# MY_SERVICE_API_KEY: "${MY_SERVICE_API_KEY_FROM_ENV}"
Credential types commonly needed:
# HTTP Header Auth - for APIs with bearer tokens
# OAuth2 - for Google, GitHub, etc.
# Database credentials
# SSH credentials for server operations
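For full backups, the n8n CLI inside the container can export workflows and credentials as JSON; credentials stay encrypted with N8N_ENCRYPTION_KEY unless you add --decrypted. A sketch using the data volume mounted above:

```shell
# Export everything into the persistent n8n data volume
docker compose exec n8n n8n export:workflow --all \
  --output=/home/node/.n8n/backup-workflows.json
docker compose exec n8n n8n export:credentials --all \
  --output=/home/node/.n8n/backup-credentials.json

# Restore on another instance (same encryption key required)
docker compose exec n8n n8n import:workflow \
  --input=/home/node/.n8n/backup-workflows.json
```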
Error Handling and Retry Logic
Workflow-Level Error Handling
- Open workflow settings (top right) → Error Workflow
- Create a separate error-handling workflow
- The error workflow receives the failed execution details
// In an error-handler workflow, use a Code node:
const { execution, workflow } = $input.item.json;

// Build an alert message from the failed execution's details
const message =
  `Workflow "${workflow.name}" failed\n` +
  `Execution ID: ${execution.id}\n` +
  `Error: ${execution.error?.message}`;

return [{ json: { message, workflowId: workflow.id } }];
Node-Level Error Handling
Right-click any node → Add Error Output to handle node failures:
# The error output provides:
# $error.message - error description
# $error.name - error type
# $json - the item that failed
Retry on Failure
For HTTP Request and other nodes that support retries:
// HTTP Request node settings:
// - Max retries: 3
// - Retry interval: 1000ms (exponential backoff supported)
// Or use a Wait node + If node for custom retry logic:
// 1. HTTP Request → If (on error) → Wait (5s) → HTTP Request (retry)
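The Wait + If pattern above amounts to a bounded retry loop with backoff. The same idea as a small shell helper (a generic sketch, not an n8n feature):

```shell
# retry_with_backoff MAX CMD...: run CMD until it succeeds, waiting
# a little longer between each attempt (1s, 2s, 3s, ...)
retry_with_backoff() {
  local max=$1; shift
  local attempt=1
  while true; do
    "$@" && return 0
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    sleep "$attempt"   # linear backoff, like an increasing Wait node
    attempt=$((attempt + 1))
  done
}
```

Usage: `retry_with_backoff 3 curl -fsS https://api.example.com/health`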
Sub-Workflows
Call other workflows from within a workflow for reusability:
- Create a reusable workflow (e.g., "Send Slack Notification")
- Start it with a When Called By Another Workflow trigger
- In the calling workflow, add an Execute Workflow node
- Select the target workflow
// Pass data to sub-workflow:
// Execute Workflow node → Input Data:
{
  "channel": "#alerts",
  "message": "={{ $json.message }}",
  "severity": "high"
}
# Sub-workflows can run synchronously (wait for the result)
# or asynchronously (fire and forget):
# Execute Workflow node → Options → "Wait For Sub-Workflow Completion"
Queue Mode for Production Scaling
Queue mode uses Redis to distribute workflow executions across multiple worker processes:
# Already configured in the docker-compose.yml above with:
# EXECUTIONS_MODE=queue
# QUEUE_BULL_REDIS_HOST=redis
# Scale workers
docker compose up -d --scale n8n-worker=3
# Monitor queue status
docker compose exec redis redis-cli llen bull:jobs:wait
# Queue configuration options in environment:
# QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD=10000
# QUEUE_RECOVERY_INTERVAL=60
# QUEUE_WORKER_TIMEOUT=30
Concurrency Control
# Environment variables to control execution concurrency
environment:
  EXECUTIONS_PROCESS: "main"  # or "own" (each execution in a subprocess)
  EXECUTIONS_CONCURRENCY_PRODUCTION_LIMIT: 10  # Max concurrent executions
Troubleshooting
Webhooks not receiving requests:
# Verify workflow is active
# Check WEBHOOK_URL environment variable matches your public URL
echo $WEBHOOK_URL
# Test connectivity
curl -I https://n8n.yourdomain.com/webhook/test
# Check Nginx is proxying WebSockets
# n8n requires Upgrade header support
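A reverse-proxy block that forwards the Upgrade/Connection headers might look like the following sketch for Nginx (server_name, certificates, and the upstream address are placeholders for your setup):

```shell
# Generate an example Nginx vhost for n8n with WebSocket support
cat > n8n.conf <<'EOF'
server {
    listen 443 ssl;
    server_name n8n.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 300s;  # allow long-running executions
    }
}
EOF
```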
Execution stuck / not processing:
# Check Redis connectivity (redis-cli ships in the redis image, not in n8n's)
docker compose exec redis redis-cli ping
# Check worker processes
docker compose ps n8n-worker
# View execution queue (n8n's Bull queue is named "jobs")
docker compose exec redis redis-cli llen bull:jobs:wait
docker compose exec redis redis-cli llen bull:jobs:active
Custom node not appearing:
# Check node is in correct directory
docker compose exec n8n ls /home/node/.n8n/nodes/
# Check for TypeScript compilation errors
docker compose logs n8n 2>&1 | grep -i "error\|custom node"
# Verify package.json declares the compiled nodes under its "n8n" attribute
cat ./custom-nodes/n8n-nodes-myservice/package.json
Database grows too large:
# Enable execution data pruning
# EXECUTIONS_DATA_PRUNE=true
# EXECUTIONS_DATA_MAX_AGE=168 # 7 days in hours
# Manually prune old executions
docker compose exec postgres psql -U n8n -d n8n \
-c "DELETE FROM execution_entity WHERE \"stoppedAt\" < NOW() - INTERVAL '14 days';"
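DELETE only marks rows as dead; Postgres returns the disk space after a vacuum. VACUUM FULL rewrites the table but takes an exclusive lock, so run it in a maintenance window:

```shell
# Reclaim disk space after pruning executions (locks the table while running)
docker compose exec postgres psql -U n8n -d n8n \
  -c "VACUUM FULL execution_entity;"
```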
Conclusion
n8n's combination of visual workflow design, custom node support, and queue-based scaling makes it one of the most powerful self-hosted automation platforms available. By running in queue mode with multiple workers, it handles high-throughput webhook processing and complex multi-step automation reliably. Pairing n8n with Postgres, Redis, and proper error handling workflows creates a production-grade automation infrastructure that replaces dozens of custom integration scripts.