Docker Overlay Network for Multi-Host
Overlay networks enable secure, encrypted communication between containers across multiple Docker hosts, forming the foundation of distributed container deployments. This comprehensive guide covers Docker Swarm overlay network creation, encryption, service discovery, distributed network management, troubleshooting, and advanced configurations. Understanding overlay networks is essential for scaling containerized applications beyond single-host deployments.
Table of Contents
- Understanding Overlay Networks
- Swarm Overlay Network Setup
- Creating and Managing Overlays
- Network Encryption
- Service Discovery
- Multi-Host Communication
- Attachable Networks
- Network Troubleshooting
- Performance Optimization
- Conclusion
Understanding Overlay Networks
Overlay networks virtualize network connections across multiple hosts using VXLAN (Virtual Extensible LAN) encapsulation, creating an abstraction layer above physical networks.
Network types:
- Bridge: Single-host container networking
- Host: Direct host network access
- Overlay: Multi-host container networking
- IPVLAN: IP-based virtual networking
- Macvlan: MAC-based virtual networking
VXLAN mechanism:
- Creates virtual Layer 2 network over Layer 3 infrastructure
- Encapsulates container traffic in UDP packets
- Default VXLAN port: 4789/udp
- Uses MAC address learning and flooding
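The 50-byte figure commonly quoted for VXLAN overhead comes from the extra headers wrapped around each inner frame; a quick sketch of the arithmetic (assuming an IPv4 outer header):

```shell
# VXLAN encapsulation overhead per packet (assuming IPv4 outer header)
OUTER_IP=20     # outer IPv4 header
OUTER_UDP=8     # outer UDP header (dst port 4789)
VXLAN_HDR=8     # VXLAN header carrying the VNI
INNER_ETH=14    # inner Ethernet frame header
OVERHEAD=$((OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH))
echo "VXLAN overhead: ${OVERHEAD} bytes"
echo "Overlay MTU on a 1500-byte link: $((1500 - OVERHEAD))"
```

This is why MTU tuning matters on overlay networks, as covered later in this guide.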
# Check Swarm status (required for overlay networks)
docker info --format '{{.Swarm.LocalNodeState}}'
# List networks including overlay
docker network ls
# Inspect overlay network
docker network inspect <overlay-name>
# Key fields in the output:
# Driver: overlay
# Scope: swarm
# Attachable: false/true
Benefits of overlay networks:
- Transparent container communication across hosts
- No special routing configuration needed
- Encrypted data transmission (optional)
- Service discovery with load balancing
- Network isolation between overlays
- Scales to hundreds of hosts
Swarm Overlay Network Setup
Create and configure overlay networks in Docker Swarm.
Initialize Docker Swarm:
# Initialize Swarm on manager node
docker swarm init --advertise-addr 192.168.1.10
# Get manager join token
docker swarm join-token manager
# Get worker join token
docker swarm join-token worker
# On worker nodes, join cluster
docker swarm join \
--token SWMTKN-1-abc... \
192.168.1.10:2377
# Verify cluster
docker node ls
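Before joining nodes, the ports Swarm uses must be reachable between all hosts. A small helper that prints the documented defaults, shown here as `ufw` rules (substitute your own firewall tool):

```shell
# Print the firewall rules a Swarm node needs (Docker's documented defaults)
swarm_firewall_rules() {
  cat <<'EOF'
ufw allow 2377/tcp   # cluster management traffic (manager nodes)
ufw allow 7946/tcp   # node-to-node gossip
ufw allow 7946/udp   # node-to-node gossip
ufw allow 4789/udp   # VXLAN overlay data plane
EOF
}
swarm_firewall_rules
```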
Create overlay network:
# Create basic overlay network
docker network create \
--driver overlay \
--subnet 10.0.0.0/24 \
mynetwork
# Verify creation
docker network ls | grep overlay
docker network inspect mynetwork
# Network details show:
# Driver: overlay
# Scope: swarm
# Subnet: 10.0.0.0/24
# Gateway: 10.0.0.1
Network with options:
# Create with custom options
docker network create \
--driver overlay \
--subnet 10.0.0.0/24 \
--gateway 10.0.0.1 \
--opt com.docker.network.driver.mtu=1500 \
--opt com.docker.network.driver.overlay.vxlanid_list=100 \
backend-network
# VXLAN ID (VNI): identifies virtual network (default: auto-assigned)
# MTU: Maximum transmission unit
Creating and Managing Overlays
Deploy services on overlay networks.
Deploy service on overlay:
# Create service on overlay network
docker service create \
--name web \
--network mynetwork \
--replicas 3 \
-p 80:80 \
nginx:latest
# All replicas can communicate via overlay
# Load balanced across replicas
# Verify service network
docker service inspect web | grep -A 5 Networks
Connect service to multiple networks:
# Create multiple overlay networks
docker network create --driver overlay frontend
docker network create --driver overlay backend
# Deploy service connected to both
docker service create \
--name api \
--network frontend \
--network backend \
--replicas 2 \
api:latest
# Service accessible from both networks
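The same two-network topology can also be declared as a stack file (a sketch; `api:latest` is a placeholder image from the example above):

```yaml
# stack.yml — deploy with: docker stack deploy -c stack.yml demo
version: "3.8"
services:
  api:
    image: api:latest        # placeholder image
    networks:
      - frontend
      - backend
    deploy:
      replicas: 2
networks:
  frontend:
    driver: overlay
  backend:
    driver: overlay
```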
Service discovery names:
# Services get two DNS names in overlay networks
# 1. Service VIP (stable, load-balanced) — the default
#    <service-name> resolves to a single virtual IP
#    Example: api -> 10.0.0.2 (VIP)
# 2. Task list (per-container, round-robin)
#    tasks.<service-name> resolves to every task's IP
#    Example: tasks.api -> 10.0.1.3, 10.0.1.4, 10.0.1.5
# Test from a container on the network
docker exec <container-id> nslookup api
docker exec <container-id> nslookup tasks.api
docker exec <container-id> getent hosts api
# The bare service name resolves to the VIP unless the service
# was created with --endpoint-mode dnsrr
Network Encryption
Secure overlay network traffic with encryption.
Enable encryption:
# Create encrypted overlay network
docker network create \
--driver overlay \
--opt encrypted \
secure-network
# Enables encryption of:
# - Container-to-container traffic
# - Container-to-service traffic
# - Data plane (application traffic)
# Control plane (Swarm management) always encrypted
# Verify encryption
docker network inspect secure-network | grep encrypted
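In a stack file, the same opt-in flag is expressed through `driver_opts` (note the empty value: the option is a flag, not a key/value pair):

```yaml
networks:
  secure-network:
    driver: overlay
    driver_opts:
      encrypted: ""
```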
Encryption mechanism:
# IPsec (ESP) tunnels with AES-GCM
# Keys are distributed by Swarm managers and rotated automatically
# Encrypt specific networks
docker network create \
--driver overlay \
--opt encrypted \
--subnet 10.0.0.0/24 \
encrypted-backend
# Leave non-sensitive networks unencrypted
docker network create \
--driver overlay \
--subnet 10.1.0.0/24 \
public-network
# Performance impact: measurable throughput reduction; benchmark your workload
# Security benefit: Protects sensitive data in transit
Service Discovery
Understand how DNS and load balancing work in overlay networks.
DNS resolution in overlays:
# Internal DNS resolver: 127.0.0.11:53
# Service name resolution:
# <service-name> = VIP
# tasks.<service-name> = individual task IPs
# Example from within container
docker exec api-container nslookup db
# Resolves to:
# Name: db
# Address: 10.0.0.5 (VIP)
# VIP routes to all service replicas via load balancer
Load balancing configuration:
# Default: IPVS (IP Virtual Server)
# Mode: Round-robin with connection tracking
# No explicit configuration needed
# Automatic load balancing across replicas
# Test load balancing
docker service create \
--name backend \
--network mynetwork \
--replicas 3 \
--env HOSTNAME=backend \
alpine sleep 1000
# Each connection to backend.mynetwork balances
# across the 3 replicas
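The scheduling behavior is easy to picture with a toy simulation: three hypothetical replica IPs served strictly in rotation, which is what IPVS round-robin does for new connections:

```shell
# Toy round-robin scheduler over three hypothetical replica IPs
REPLICAS=("10.0.1.3" "10.0.1.4" "10.0.1.5")
N=${#REPLICAS[@]}
for req in 0 1 2 3 4 5; do
  # request 0 -> .3, request 1 -> .4, request 2 -> .5, then it wraps
  echo "request $req -> ${REPLICAS[$((req % N))]}"
done
```

In the real data path the rotation is per new connection, not per packet, because IPVS tracks existing connections.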
Sticky sessions configuration:
# Docker overlay doesn't support sticky sessions directly
# Solutions:
# 1. Application-level session management
# 2. Use external load balancer
# 3. Configure reverse proxy with persistence
# Example: Nginx ingress with sticky sessions
docker service create \
--name frontend \
--network mynetwork \
--replicas 1 \
-p 80:80 \
-e BACKEND_POOL=backend.mynetwork \
nginx-lb:latest
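As a sketch of option 3, an Nginx reverse proxy can pin clients to replicas with `ip_hash`. For per-replica pinning, the backend service should run with `--endpoint-mode dnsrr` so that `tasks.backend` resolves to individual container IPs rather than a single VIP (all names here are placeholders):

```nginx
# nginx.conf fragment — sticky-by-client-IP proxying to Swarm replicas
upstream backend_pool {
    ip_hash;                      # same client IP -> same upstream server
    server tasks.backend:8080;    # Swarm DNS returns one A record per task
}
server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
    }
}
```

Note that stock Nginx resolves `tasks.backend` once at config load, so replica changes require a reload.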
Multi-Host Communication
Enable communication between containers on different hosts.
Network spanning multiple hosts:
# Create overlay spanning all swarm nodes
docker network create \
--driver overlay \
--subnet 10.0.0.0/24 \
cluster-wide
# Deploy replicas across hosts
docker service create \
--name distributed-app \
--network cluster-wide \
--replicas 6 \
app:latest
# Docker automatically places replicas across hosts
# Verify placement
docker service ps distributed-app
# Container from host A communicates with host B
# transparently via overlay network
Cross-host network traffic flow:
# Traffic between hosts uses VXLAN encapsulation
# Host A (10.1.0.1):
# Container A (10.0.0.2) -> Network packet
# Overlay encapsulation:
# VXLAN header + IP header with Host A/B IPs + packet
# Host B (10.1.0.2):
# Decapsulates VXLAN
# Delivers to Container B (10.0.0.3)
# Transparent to applications
# TCP/IP works as if on same network
Attachable Networks
Create networks that external containers can join.
Create attachable overlay:
# Create attachable network
docker network create \
--driver overlay \
--attachable \
shared-network
# Attachable allows:
# - Services to connect (Swarm mode)
# - Standalone containers to connect (non-Swarm)
# - Manual container network connection
# Connect standalone container
docker run -d \
--name standalone-app \
--network shared-network \
app:latest
# Connect service to same network
docker service create \
--name service-app \
--network shared-network \
service:latest
# Both can reach each other by name
docker exec standalone-app getent hosts service-app
# (service VIPs don't reliably answer ICMP, so prefer DNS or
# application-level checks over ping)
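The attachable flag is also available in stack files, which is handy when debug containers need to join a service network:

```yaml
networks:
  shared-network:
    driver: overlay
    attachable: true
```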
Network Troubleshooting
Diagnose and resolve overlay network issues.
Connectivity troubleshooting:
# Test DNS resolution
docker exec <container-id> nslookup <service-name>
# Test ping between containers
docker exec <container1-id> ping <container2-ip>
# Test service endpoint
docker exec <container-id> curl http://<service-name>:<port>
# Check network connectivity from the host
# (Docker keeps container namespaces under /var/run/docker/netns,
# which `ip netns` doesn't see by default — use nsenter instead)
sudo nsenter --net=/var/run/docker/netns/<ns-id> ip route show
sudo nsenter --net=/var/run/docker/netns/<ns-id> ip link show
Network inspection:
# Get detailed network info
docker network inspect mynetwork
# Shows:
# - Containers connected
# - Subnet and gateway
# - Driver options
# - Network scope
# Check service network
docker service inspect servicename | grep -A 20 Networks
# View VXLAN details (if encrypted)
docker network inspect encrypted-network | grep -E "com.docker|Encrypted"
Common issues and fixes:
# Issue: Containers can't reach each other
# Solution: Verify network connectivity
docker network ls
docker service ps service-name
# Issue: DNS resolution failing
# Solution: Check DNS configuration
docker exec container-id cat /etc/resolv.conf
# Should show: nameserver 127.0.0.11
# Issue: Encryption overhead high
# Solution: encryption is opt-in; for non-sensitive traffic, create
# the network without the flag (it cannot be toggled after creation)
docker network create --driver overlay fast-network
# Issue: VXLAN port blocked
# Solution: Ensure 4789/udp (and 7946/tcp+udp for gossip) open between hosts
# (telnet only tests TCP; use netcat in UDP mode)
nc -zvu <remote-host> 4789
sudo ufw allow 4789/udp
Performance Optimization
Optimize overlay network performance.
VXLAN MTU tuning:
# Default MTU: 1500 bytes
# VXLAN overhead: 50 bytes per packet
# Option 1: Set the overlay MTU to 1450 so packets fit a 1500-byte physical link
# Option 2: Enable jumbo frames on the physical network and keep 1500 inside
docker network create \
--driver overlay \
--opt com.docker.network.driver.mtu=1450 \
optimized-network
# Test MTU
docker exec container-id ping -M do -s 1400 <target>
# If fails, reduce MTU
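The `-s` value passed to ping is the ICMP payload, not the full packet size, so probing a specific MTU takes a little arithmetic (20-byte IPv4 header plus 8-byte ICMP header):

```shell
# ICMP payload size that exactly fills a given MTU
# MTU = 20 (IPv4 header) + 8 (ICMP header) + payload
mtu_probe_size() { echo $(( $1 - 28 )); }
echo "payload for MTU 1450: $(mtu_probe_size 1450)"
# Probe without fragmentation:
#   ping -M do -s "$(mtu_probe_size 1450)" <target>
```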
Reduce encapsulation overhead:
# Use local bridge networks when possible
# (less overhead, same host only)
docker network create \
--driver bridge \
local-net
# Use host network for performance-critical apps
docker run -d \
--network host \
--name perf-critical \
app:latest
# Trade-off: Port bindings, isolation
Monitoring overlay performance:
# Monitor network metrics
docker stats --format="table {{.Container}}\t{{.NetIO}}"
# Check VXLAN throughput
docker exec container-id iperf3 -c <target>
# Monitor packet loss
docker exec container-id ping -c 100 <target> | grep loss
# Acceptable: < 0.1% packet loss
# Good: < 0.01% packet loss
Conclusion
Docker overlay networks provide the foundation for secure, scalable multi-host container deployments. By understanding VXLAN encapsulation, service discovery mechanisms, and encryption options, you build robust distributed systems that scale transparently across infrastructure. Start with basic overlay networks for non-sensitive workloads, add encryption for data protection, and implement comprehensive monitoring for operational visibility. As your containerized infrastructure grows across multiple data centers or cloud regions, overlay networks become increasingly critical to application performance and reliability. Combine overlay networks with Swarm management, load balancing, and health checks for production-grade distributed container deployments.


