Microservices Communication Patterns
Microservices architectures require careful design of how services communicate—synchronous patterns like REST and gRPC work well for request-response, while asynchronous messaging via queues and events handles decoupled workflows. This guide covers implementing synchronous REST and gRPC communication, asynchronous messaging with RabbitMQ, the saga pattern for distributed transactions, event sourcing, and API gateway configuration on Linux.
Prerequisites
- Linux server (Ubuntu 20.04+ or CentOS/Rocky 8+)
- Docker and Docker Compose installed
- Basic knowledge of REST APIs and message queues
- A microservices application to instrument
Synchronous Communication: REST
REST over HTTP is the most common synchronous pattern. The key concerns for inter-service calls are timeouts, bounded retries, and request tracing:
# Example: service A calling service B via REST
# Using curl for testing; production uses HTTP client libraries
# Service-to-service call with timeout and retry
curl --retry 3 \
--retry-delay 1 \
--retry-max-time 30 \
--max-time 10 \
-H "Content-Type: application/json" \
-H "X-Request-ID: $(uuidgen)" \
-H "X-Service: order-service" \
http://user-service:8080/api/users/123
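The curl flags above map directly onto retry-with-backoff logic you would implement in application code. A minimal Python sketch of the pattern (the `call_with_retry` helper and its parameters are illustrative, not a specific HTTP client library's API):

```python
import time

def call_with_retry(fn, retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry fn() with exponential backoff; re-raise after the last attempt.

    Illustrative helper, not a real client-library API.
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Simulate an upstream that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return 200

status = call_with_retry(flaky, sleep=lambda s: None)  # no real sleeping in the demo
```

Production code should also cap total elapsed time (curl's --retry-max-time does this) and retry only idempotent requests.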
Nginx upstream configuration with circuit-breaker-like behavior:
sudo tee /etc/nginx/conf.d/microservices.conf << 'EOF'
# Upstream for user service (multiple instances)
upstream user_service {
least_conn;
server user-service-1:8080 max_fails=3 fail_timeout=30s;
server user-service-2:8080 max_fails=3 fail_timeout=30s;
# Active health checks are an NGINX Plus feature; in OSS, max_fails/fail_timeout above give passive checks
keepalive 32;
}
# Upstream for order service
upstream order_service {
server order-service-1:8081 max_fails=3 fail_timeout=30s;
server order-service-2:8081 max_fails=3 fail_timeout=30s;
keepalive 32;
}
server {
listen 80;
server_name api.example.com;
# Route to user service
location /api/users/ {
proxy_pass http://user_service;
proxy_connect_timeout 5s;
proxy_read_timeout 30s;
proxy_next_upstream error timeout http_500 http_502 http_503;
proxy_next_upstream_tries 2;
}
# Route to order service
location /api/orders/ {
proxy_pass http://order_service;
proxy_connect_timeout 5s;
proxy_read_timeout 30s;
proxy_next_upstream error timeout http_500 http_502 http_503;
}
}
EOF
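The max_fails/fail_timeout pair implements a passive circuit breaker: after repeated failures an upstream is taken out of rotation for a cooldown period, then probed again. The same state machine as a minimal Python sketch (class and method names are illustrative):

```python
import time

class CircuitBreaker:
    """Minimal passive circuit breaker mirroring nginx's
    max_fails/fail_timeout: after max_fails consecutive failures
    the circuit opens for fail_timeout seconds."""

    def __init__(self, max_fails=3, fail_timeout=30.0, clock=time.monotonic):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.fail_timeout:
            self.opened_at = None   # half-open: let one probe request through
            self.failures = 0
            return True
        return False

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_fails:
            self.opened_at = self.clock()

    def record_success(self):
        self.failures = 0
        self.opened_at = None

# Simulate with a fake clock instead of real time
now = [0.0]
cb = CircuitBreaker(max_fails=3, fail_timeout=30.0, clock=lambda: now[0])
for _ in range(3):
    cb.record_failure()
open_state = cb.allow()    # circuit is open: request refused
now[0] += 31.0
half_open = cb.allow()     # cooldown elapsed: probe allowed
```

The probe after the cooldown corresponds to nginx retrying a server once fail_timeout expires.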
Synchronous Communication: gRPC
gRPC provides better performance and type safety for inter-service communication:
# Docker Compose for gRPC microservices
cat > /opt/services/docker-compose.yml << 'EOF'
version: '3.8'
services:
user-service:
image: myorg/user-service:latest
ports:
- "50051:50051"
environment:
- GRPC_PORT=50051
- DB_URL=postgres://postgres:pass@postgres:5432/users
networks:
- services
order-service:
image: myorg/order-service:latest
ports:
- "50052:50052"
environment:
- GRPC_PORT=50052
- USER_SERVICE=user-service:50051
- DB_URL=postgres://postgres:pass@postgres:5432/orders
networks:
- services
envoy:
image: envoyproxy/envoy:v1.29-latest
volumes:
- ./envoy.yaml:/etc/envoy/envoy.yaml:ro
ports:
- "9090:9090"
networks:
- services
networks:
services:
driver: bridge
EOF
Asynchronous Messaging with RabbitMQ
Install RabbitMQ for event-driven, decoupled communication:
# Install RabbitMQ
sudo apt install rabbitmq-server # Ubuntu
sudo dnf install rabbitmq-server # CentOS/Rocky (needs EPEL or the RabbitMQ repo)
sudo systemctl enable --now rabbitmq-server
# Enable management plugin (web UI at :15672)
sudo rabbitmq-plugins enable rabbitmq_management
# Create user and vhost
sudo rabbitmqctl add_user myapp secretpassword
sudo rabbitmqctl set_user_tags myapp management # required for the management API/UI
sudo rabbitmqctl add_vhost myapp-vhost
sudo rabbitmqctl set_permissions -p myapp-vhost myapp ".*" ".*" ".*"
# Configure exchanges and queues with rabbitmqadmin
# (ships with the management plugin; download from http://localhost:15672/cli/rabbitmqadmin)
rabbitmqadmin --vhost=myapp-vhost -u myapp -p secretpassword \
declare exchange name=orders.exchange type=topic durable=true
# Or via management API
curl -u myapp:secretpassword \
-H "Content-Type: application/json" \
-X PUT http://localhost:15672/api/exchanges/myapp-vhost/orders.exchange \
-d '{"type":"topic","durable":true}'
Publisher example (shell, via the management HTTP API; application code would use an AMQP client library):
# Test publishing a message via RabbitMQ management API
curl -u myapp:secretpassword \
-H "Content-Type: application/json" \
-X POST http://localhost:15672/api/exchanges/myapp-vhost/orders.exchange/publish \
-d '{
"properties": {"content_type": "application/json"},
"routing_key": "order.created",
"payload": "{\"orderId\": \"123\", \"userId\": \"456\", \"amount\": 99.99}",
"payload_encoding": "string"
}'
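The publish body above can be assembled programmatically before handing it to an HTTP client. A small Python sketch that builds the same JSON envelope (the helper name is illustrative):

```python
import json

def build_publish_body(routing_key, payload_obj):
    """Build the JSON body expected by RabbitMQ's management-API
    /api/exchanges/{vhost}/{exchange}/publish endpoint."""
    return json.dumps({
        "properties": {"content_type": "application/json"},
        "routing_key": routing_key,
        # The payload itself is a JSON string, double-encoded inside the envelope
        "payload": json.dumps(payload_obj),
        "payload_encoding": "string",
    })

body = build_publish_body("order.created",
                          {"orderId": "123", "userId": "456", "amount": 99.99})
decoded = json.loads(body)
```

The double encoding (JSON payload inside a JSON envelope) is the usual stumbling block with this endpoint; building it with a serializer avoids hand-escaped quotes.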
Docker Compose with RabbitMQ:
services:
rabbitmq:
image: rabbitmq:3.12-management-alpine
environment:
RABBITMQ_DEFAULT_USER: myapp
RABBITMQ_DEFAULT_PASS: secretpassword
RABBITMQ_DEFAULT_VHOST: myapp-vhost
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
healthcheck:
test: rabbitmq-diagnostics -q ping
interval: 30s
timeout: 10s
retries: 5
Event-Driven Architecture with Redis Streams
Redis Streams provide a persistent, ordered event log for event sourcing:
# Ensure Redis is installed
sudo apt install redis-server
# Create consumer group for order events
redis-cli XGROUP CREATE order-events notification-service $ MKSTREAM
# Produce an event (your application does this)
redis-cli XADD order-events '*' \
event_type "order.created" \
order_id "123" \
user_id "456" \
amount "99.99" \
timestamp "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
# Read events as a consumer group (use in your service code)
redis-cli XREADGROUP GROUP notification-service consumer-1 \
COUNT 10 BLOCK 5000 STREAMS order-events ">"
# Acknowledge processed messages
redis-cli XACK order-events notification-service "1712345678901-0"
# Check stream health
redis-cli XLEN order-events
redis-cli XINFO GROUPS order-events
redis-cli XPENDING order-events notification-service - + 10
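In service code, consuming these events is a read/process/ack loop. A Python sketch of one iteration, written against an injected client exposing redis-py-style `xreadgroup`/`xack` methods so it runs here with a stub instead of a live Redis:

```python
def consume_once(client, stream, group, consumer, count=10, block_ms=5000):
    """One iteration of a consumer-group read/process/ack loop."""
    entries = client.xreadgroup(group, consumer, {stream: ">"},
                                count=count, block=block_ms)
    processed = []
    for _stream_name, messages in entries:
        for msg_id, fields in messages:
            processed.append(fields)            # replace with real handling
            client.xack(stream, group, msg_id)  # ack only after success
    return processed

class StubRedis:
    """Stand-in for redis.Redis, used so the sketch runs without a server."""
    def __init__(self, pending):
        self.pending = pending
        self.acked = []
    def xreadgroup(self, group, consumer, streams, count, block):
        batch, self.pending = self.pending[:count], self.pending[count:]
        return [("order-events", batch)] if batch else []
    def xack(self, stream, group, msg_id):
        self.acked.append(msg_id)

stub = StubRedis([("1-0", {"event_type": "order.created", "order_id": "123"})])
events = consume_once(stub, "order-events", "notification-service", "consumer-1")
```

Acknowledging only after successful processing is what makes crashed consumers safe: unacked entries stay in the pending list (visible via XPENDING above) and can be claimed by another consumer.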
Saga Pattern for Distributed Transactions
The saga pattern handles distributed transactions across multiple services without two-phase commit:
Choreography Saga (event-driven):
# Example flow for order creation:
# 1. Order Service: create order (PENDING) → publish "OrderCreated"
# 2. Inventory Service: reserve items → publish "InventoryReserved" or "InventoryFailed"
# 3. Payment Service: charge customer → publish "PaymentProcessed" or "PaymentFailed"
# 4. Order Service: update to CONFIRMED or trigger compensation
# Track saga state in Redis
redis-cli HSET "saga:order:123" \
state "INVENTORY_RESERVED" \
order_id "123" \
inventory_reservation "res_456" \
started_at "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
# Check saga state
redis-cli HGETALL "saga:order:123"
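Choreography means each service knows only which events it reacts to; that routing can be expressed as data. A minimal Python sketch of the reaction table for the flow above (service and action names are illustrative):

```python
# Who reacts to which event; event names follow the numbered flow above
NEXT_STEP = {
    "OrderCreated":      ("inventory-service", "reserve_items"),
    "InventoryReserved": ("payment-service",   "charge_customer"),
    "PaymentProcessed":  ("order-service",     "confirm_order"),
    "InventoryFailed":   ("order-service",     "cancel_order"),    # compensation
    "PaymentFailed":     ("inventory-service", "release_items"),   # compensation
}

def react(event_type):
    """Return the (service, action) pair triggered by an event,
    or None for events no one subscribes to."""
    return NEXT_STEP.get(event_type)

step = react("InventoryReserved")
```

Note that no single service holds this whole table; it is the emergent behavior of each service's subscriptions, which is both the appeal and the debugging difficulty of choreography.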
Orchestration Saga (central coordinator):
# Saga orchestrator tracks state machine
# Example saga state machine in pseudo-code:
cat > /opt/services/saga-states.json << 'EOF'
{
"order_saga": {
"initial": "ORDER_CREATED",
"states": {
"ORDER_CREATED": {
"action": "reserve_inventory",
"on_success": "INVENTORY_RESERVED",
"on_failure": "SAGA_FAILED"
},
"INVENTORY_RESERVED": {
"action": "process_payment",
"on_success": "PAYMENT_PROCESSED",
"on_failure": "COMPENSATE_INVENTORY"
},
"COMPENSATE_INVENTORY": {
"action": "release_inventory",
"on_success": "SAGA_FAILED",
"on_failure": "SAGA_FAILED"
},
"PAYMENT_PROCESSED": {
"action": "confirm_order",
"on_success": "SAGA_COMPLETED",
"on_failure": "COMPENSATE_PAYMENT"
}
}
}
}
EOF
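A saga orchestrator is essentially an interpreter for this state machine. A minimal Python sketch that drives the transitions defined above, with `actions` as a stand-in for real service calls (True = success):

```python
# State machine mirroring saga-states.json above
SAGA = {
    "initial": "ORDER_CREATED",
    "states": {
        "ORDER_CREATED":        {"action": "reserve_inventory",
                                 "on_success": "INVENTORY_RESERVED",
                                 "on_failure": "SAGA_FAILED"},
        "INVENTORY_RESERVED":   {"action": "process_payment",
                                 "on_success": "PAYMENT_PROCESSED",
                                 "on_failure": "COMPENSATE_INVENTORY"},
        "COMPENSATE_INVENTORY": {"action": "release_inventory",
                                 "on_success": "SAGA_FAILED",
                                 "on_failure": "SAGA_FAILED"},
        "PAYMENT_PROCESSED":    {"action": "confirm_order",
                                 "on_success": "SAGA_COMPLETED",
                                 "on_failure": "COMPENSATE_PAYMENT"},
    },
}

def run_saga(machine, actions):
    """Walk the state machine until reaching a state with no definition
    (terminal). `actions` maps action name -> bool (success/failure)."""
    state = machine["initial"]
    while state in machine["states"]:
        step = machine["states"][state]
        ok = actions[step["action"]]
        state = step["on_success"] if ok else step["on_failure"]
    return state

happy = run_saga(SAGA, {"reserve_inventory": True, "process_payment": True,
                        "confirm_order": True})
payment_fails = run_saga(SAGA, {"reserve_inventory": True,
                                "process_payment": False,
                                "release_inventory": True})
```

A real orchestrator would persist the current state (for example in the Redis hash shown earlier) after every transition, so a crashed orchestrator can resume or compensate.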
Service Discovery with Consul
# Install Consul
wget https://releases.hashicorp.com/consul/1.17.0/consul_1.17.0_linux_amd64.zip
unzip consul_1.17.0_linux_amd64.zip
sudo mv consul /usr/local/bin/
consul --version
# Start Consul agent
sudo tee /etc/systemd/system/consul.service << 'EOF'
[Unit]
Description=Consul Service Discovery
After=network-online.target
[Service]
Type=simple
ExecStart=/usr/local/bin/consul agent \
-server \
-bootstrap-expect=1 \
-bind=0.0.0.0 \
-client=0.0.0.0 \
-data-dir=/var/lib/consul \
-ui
Restart=on-failure
User=consul
[Install]
WantedBy=multi-user.target
EOF
sudo useradd -r -s /bin/false consul
sudo mkdir -p /var/lib/consul
sudo chown consul:consul /var/lib/consul
sudo systemctl enable --now consul
Register a service with Consul:
# Register user-service
curl -X PUT http://localhost:8500/v1/agent/service/register \
-H "Content-Type: application/json" \
-d '{
"ID": "user-service-1",
"Name": "user-service",
"Address": "192.168.1.10",
"Port": 8080,
"Tags": ["v1", "production"],
"Check": {
"HTTP": "http://192.168.1.10:8080/health",
"Interval": "10s",
"Timeout": "5s"
}
}'
# Discover the service
curl -s "http://localhost:8500/v1/health/service/user-service?passing=true" | \
python3 -m json.tool
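Callers then parse the health API response and pick a passing instance. A small Python sketch of that parsing step, run against an abridged sample of Consul's response shape (the sample values match the registration above):

```python
import json

def pick_instances(health_json):
    """Extract (address, port) pairs from Consul's
    /v1/health/service/<name>?passing=true response."""
    instances = []
    for entry in json.loads(health_json):
        svc = entry["Service"]
        instances.append((svc["Address"], svc["Port"]))
    return instances

# Abridged sample of the response shape (real responses also carry
# "Node" and "Checks" objects per entry)
sample = json.dumps([
    {"Service": {"ID": "user-service-1", "Service": "user-service",
                 "Address": "192.168.1.10", "Port": 8080}}
])
targets = pick_instances(sample)
```

With more than one passing instance, client-side load balancing is typically a random or round-robin pick from this list.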
API Gateway with Nginx
sudo tee /etc/nginx/sites-available/api-gateway << 'EOF'
# Service upstreams
upstream user_svc { server 127.0.0.1:8001; keepalive 16; }
upstream order_svc { server 127.0.0.1:8002; keepalive 16; }
upstream product_svc { server 127.0.0.1:8003; keepalive 16; }
upstream notification_svc { server 127.0.0.1:8004; keepalive 16; }
# Rate limiting
limit_req_zone $binary_remote_addr zone=gateway:10m rate=200r/m;
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
# Common proxy settings
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Request-ID $request_id;
# Apply rate limit globally
limit_req zone=gateway burst=50 nodelay;
# Service routing
location /api/v1/users { proxy_pass http://user_svc; }
location /api/v1/orders { proxy_pass http://order_svc; }
location /api/v1/products { proxy_pass http://product_svc; }
# Internal services not exposed externally
location /api/internal {
allow 10.0.0.0/8;
deny all;
proxy_pass http://notification_svc;
}
# Health aggregation endpoint
location /health {
access_log off;
return 200 '{"gateway": "healthy"}';
add_header Content-Type application/json;
}
}
EOF
sudo ln -s /etc/nginx/sites-available/api-gateway /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
Troubleshooting
Services can't reach each other by name:
# Check DNS resolution
dig @127.0.0.1 -p 8600 user-service.service.consul # Consul DNS listens on 8600 by default
# Or check /etc/hosts or Docker network DNS
docker exec order-service getent hosts user-service # ping may be absent in minimal images
# Check if Consul health check is passing
curl "http://localhost:8500/v1/health/service/user-service?passing=true"
RabbitMQ messages piling up (consumer not processing):
# Check queue depth
sudo rabbitmqctl list_queues name messages consumers
# Check consumer is connected
sudo rabbitmqctl list_consumers
# Check for unacknowledged messages
sudo rabbitmqctl list_queues name messages_unacknowledged
Saga stuck in intermediate state:
# Check saga state
redis-cli HGETALL "saga:order:123"
# Manual intervention: set state and replay
redis-cli HSET "saga:order:123" state "COMPENSATE_INVENTORY"
# Trigger compensation workflow via admin endpoint
Nginx returning 502 for a service:
# Check upstream connectivity
curl http://127.0.0.1:8001/health
# Check Nginx error log
sudo tail -50 /var/log/nginx/error.log | grep "user_svc"
# Check if service is running
ss -tlnp | grep 8001
Conclusion
The right microservices communication pattern depends on your consistency and latency requirements: use REST or gRPC for synchronous request-response where you need immediate results, and RabbitMQ or Redis Streams for asynchronous event-driven workflows where services can be decoupled. The saga pattern solves distributed transaction consistency without distributed locks, and Nginx as an API gateway provides a single entry point with routing, rate limiting, and SSL termination. Start with direct HTTP communication and introduce message queues only when decoupling is genuinely needed.