Kubernetes Service Mesh with Linkerd
Linkerd is a lightweight, ultrafast service mesh built specifically for Kubernetes. It provides traffic management, security, and observability with minimal resource overhead. This guide covers Linkerd installation via CLI, automatic sidecar injection, traffic splitting, retry and timeout policies, mutual TLS, and the built-in dashboard for monitoring your VPS and baremetal Kubernetes infrastructure.
Table of Contents
- Linkerd Overview
- Installation
- Sidecar Injection
- Traffic Management
- Security with mTLS
- Linkerd Dashboard
- Debugging and Observability
- Practical Examples
- Conclusion
Linkerd Overview
Why Linkerd?
Linkerd is designed for production Kubernetes with:
- Lightweight: Minimal resource usage compared to Istio
- Fast: Data-plane proxy written in Rust with very low latency overhead
- Kubernetes-native: Leverages Kubernetes conventions
- Simple: Easy to understand and operate
- Observable: Built-in metrics and visualization
Architecture
Data Plane: Ultralight Rust micro-proxies (linkerd2-proxy) injected alongside each workload
Control Plane: destination, identity, and proxy-injector services
Observability: The viz extension adds live tap, metrics, and dashboards
Linkerd vs Istio
| Feature | Linkerd | Istio |
|---|---|---|
| Resource Usage | Low | High |
| Proxy Size | 10MB | 100+MB |
| Learning Curve | Easy | Complex |
| Features | Core | Comprehensive |
| Best For | Simple meshes | Complex deployments |
Installation
Prerequisites
- Kubernetes v1.20+
- kubectl configured
- Helm 3+
Installing linkerd CLI
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
# Add to PATH
export PATH=$PATH:$HOME/.linkerd2/bin
# Verify installation
linkerd version
Pre-Installation Checks
linkerd check --pre
Installing Linkerd Control Plane
# Install Linkerd
linkerd install | kubectl apply -f -
# Wait for control plane
kubectl rollout status -n linkerd deployment/linkerd-destination --timeout=5m
# Verify installation
linkerd check
Helm Installation
# Add Helm repository
helm repo add linkerd https://helm.linkerd.io/stable
helm repo update
# Install the Linkerd CRDs
helm install linkerd-crds linkerd/linkerd-crds \
  -n linkerd \
  --create-namespace
# Install the control plane. Unlike the CLI, the Helm chart does not
# generate mTLS credentials: you must supply a trust anchor and issuer
# certificate you created beforehand (e.g. with openssl or step)
helm install linkerd-control-plane linkerd/linkerd-control-plane \
  -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key
# Verify
linkerd check
Install Linkerd Viz (Dashboard)
linkerd viz install | kubectl apply -f -
kubectl rollout status -n linkerd-viz deployment/web --timeout=5m
Sidecar Injection
Automatic Injection
Annotate the namespace for automatic injection:
kubectl annotate namespace production linkerd.io/inject=enabled
Verify annotation:
kubectl get namespace production -o yaml | grep inject
Manual Injection
For non-annotated namespaces:
linkerd inject deployment.yaml | kubectl apply -f -
Verify Injection
# Check if proxy is running
kubectl get pods -n production -o jsonpath='{.items[0].spec.containers[*].name}'
# Should show: app linkerd-proxy
Disabling Injection per Pod
apiVersion: v1
kind: Pod
metadata:
  name: no-mesh
  annotations:
    linkerd.io/inject: disabled
spec:
  containers:
  - name: app
    image: myapp:1.0
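Injection can also be opted into per workload instead of per namespace. A sketch, assuming a hypothetical Deployment named myapp; note the annotation goes on the pod template's metadata, not the Deployment's:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Opt this workload's pods into the mesh even if the
        # namespace itself is not annotated
        linkerd.io/inject: enabled
    spec:
      containers:
      - name: app
        image: myapp:1.0
```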
Traffic Management
Service Profile
Define traffic policies with ServiceProfile:
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # The name must be the service's FQDN
  name: api.production.svc.cluster.local
  namespace: production
spec:
  routes:
  - name: GET /api/users
    condition:
      method: GET
      pathRegex: /api/users
    timeout: 5000ms
    isRetryable: true
  - name: POST /api/users
    condition:
      method: POST
      pathRegex: /api/users
    timeout: 10000ms
    isRetryable: false
  # Retry load is capped mesh-wide per service, not per route
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
Create ServiceProfile:
kubectl apply -f service-profile.yaml
Traffic Split
Distribute traffic between service versions:
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
name: api-split
namespace: production
spec:
service: api
backends:
- service: api-v1
weight: 80
- service: api-v2
weight: 20
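SMI weights are relative, not strict percentages: Linkerd normalizes each backend's weight against the sum. A quick local sanity check, using a hypothetical helper function (not part of the Linkerd CLI):

```shell
#!/bin/sh
# split_pct: print the integer percentage of traffic each backend
# receives, given a list of relative TrafficSplit weights.
split_pct() {
  total=0
  for w in "$@"; do total=$((total + w)); done
  for w in "$@"; do
    echo $((100 * w / total))
  done
}

split_pct 80 20   # prints 80 then 20
split_pct 950 50  # prints 95 then 5
```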
Retries and Timeouts
Configure timeouts per route in the ServiceProfile; a spec-level retryBudget caps how much extra load retries may add:
spec:
  routes:
  - name: api-call
    condition:
      method: GET
      pathRegex: /api/.*
    timeout: 5000ms
    isRetryable: true
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
Marking Routes Retryable
Linkerd retries only routes explicitly marked isRetryable in a ServiceProfile; nothing is retried by default:
- GET requests: Idempotent, generally safe to mark retryable
- POST requests: Not idempotent; leave unretryable unless the endpoint tolerates duplicates
Security with mTLS
Automatic mTLS
Linkerd automatically enables mTLS between services:
# Verify mTLS between meshed workloads
linkerd viz edges deployment -n production
# The SECURED column shows which connections use mTLS
Checking mTLS Status
# Tap into live traffic; meshed requests show tls=true
linkerd viz tap deployment/api -n production
Certificate Management
Linkerd manages certificates automatically. View certificate details:
# Get identity issuer certificate (note: stored in the linkerd namespace)
kubectl get secret -n linkerd linkerd-identity-issuer -o yaml
# Check certificate expiration (the key is crt.pem for the default
# linkerd.io/tls scheme, or tls.crt for kubernetes.io/tls secrets)
kubectl get secret -n linkerd linkerd-identity-issuer -o json | \
  jq -r '.data."crt.pem"' | base64 -d | openssl x509 -text -noout | grep -A 2 "Validity"
Policy Server (Authorization)
Since Linkerd 2.11 the policy controller ships with the control plane, so no separate install is needed. Restrict which clients may reach a workload with Server and ServerAuthorization resources:
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: api-8080
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  port: 8080
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  name: api-auth
  namespace: production
spec:
  server:
    name: api-8080
  client:
    meshTLS:
      serviceAccounts:
      - name: frontend
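Linkerd 2.12+ also supports a finer-grained AuthorizationPolicy paired with MeshTLSAuthentication. A sketch restricting the api Service to the frontend service account's mesh identity (resource names here are illustrative):

```yaml
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: frontend-only
  namespace: production
spec:
  # Mesh identities derive from service accounts:
  # <sa>.<ns>.serviceaccount.identity.linkerd.cluster.local
  identities:
  - "frontend.production.serviceaccount.identity.linkerd.cluster.local"
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: api-auth
  namespace: production
spec:
  targetRef:
    group: core
    kind: Service
    name: api
  requiredAuthenticationRefs:
  - group: policy.linkerd.io
    kind: MeshTLSAuthentication
    name: frontend-only
```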
Linkerd Dashboard
Accessing Dashboard
Port-forward to dashboard:
linkerd viz dashboard &
Automatically opens at http://localhost:50750
Or manually:
kubectl port-forward -n linkerd-viz svc/web 50750:8084
Dashboard Features
- Overview: Cluster health and metrics
- Namespaces: Namespace details and traffic
- Deployments: Deployment metrics and status
- Pods: Pod-level metrics and logs
- Traffic: Live traffic visualization
- Tap: Real-time traffic inspection
Golden Metrics
Linkerd automatically tracks four golden signals:
- Latency: Response time
- Success Rate: Percentage of successful requests
- Traffic: Requests per second
- Errors: Error rate
View via dashboard or CLI:
linkerd viz stat deployment -n production
linkerd viz stat pod -n production
Debugging and Observability
Tap Feature
Inspect live traffic:
# Tap all traffic to a pod
linkerd viz tap pod/api-0 -n production
# Tap with filters: only traffic from api to frontend
linkerd viz tap deployment/api -n production --to deployment/frontend
# Tap specific paths
linkerd viz tap deploy/api -n production --path /api/users
Metrics
View metrics for deployments:
# Get metrics (success rate, RPS, latency percentiles)
linkerd viz stat deployment -n production
# Show which edges are secured by mTLS
linkerd viz edges deployment -n production
# Detailed metrics as JSON
linkerd viz stat pod -n production -o json
Logs
View Linkerd control plane logs:
# Destination controller
kubectl logs -n linkerd deployment/linkerd-destination
# Identity service
kubectl logs -n linkerd deployment/linkerd-identity
# Proxy logs
kubectl logs -n production <pod-name> -c linkerd-proxy
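When proxy logs are too quiet, verbosity can be raised per workload with an annotation on the pod template (the default level is warn,linkerd=info). A pod-template fragment:

```yaml
# Fragment of a Deployment's pod template: raise the sidecar's
# log level for debugging (revert afterwards; debug is verbose)
template:
  metadata:
    annotations:
      config.linkerd.io/proxy-log-level: warn,linkerd=debug
```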
Debugging Connectivity
# Check service discovery (destination endpoints)
linkerd diagnostics endpoints api.production.svc.cluster.local:8080
# Validate per-route traffic
linkerd viz routes deploy/web -n production
# Check data-plane proxies, including TLS certificates
linkerd check --proxy
Practical Examples
Example: Service Profile with Multiple Routes
---
# The backing service
apiVersion: v1
kind: Service
metadata:
name: api
namespace: production
spec:
type: ClusterIP
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: api
---
# ServiceProfile with routes and policies
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: api.production.svc.cluster.local
  namespace: production
spec:
  routes:
  - name: GET /health
    condition:
      method: GET
      pathRegex: /health
    timeout: 1000ms
    isRetryable: true
  - name: GET /api/data
    condition:
      method: GET
      pathRegex: /api/data
    timeout: 5000ms
    isRetryable: true
  - name: POST /api/data
    condition:
      method: POST
      pathRegex: /api/data
    timeout: 10000ms
    isRetryable: false
  - name: DELETE /api/data
    condition:
      method: DELETE
      pathRegex: /api/data.*
    timeout: 15000ms
    isRetryable: false
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
Example: Canary Deployment with Traffic Split
---
# Current version
apiVersion: v1
kind: Service
metadata:
name: web-stable
namespace: production
spec:
selector:
app: web
version: stable
ports:
- port: 80
targetPort: 8080
---
# Canary version
apiVersion: v1
kind: Service
metadata:
name: web-canary
namespace: production
spec:
selector:
app: web
version: canary
ports:
- port: 80
targetPort: 8080
---
# Traffic split: 95% stable, 5% canary.
# The apex service "web" (the name clients address) must also exist;
# weights are relative, so 950/50 yields a 95%/5% split.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-canary
  namespace: production
spec:
  service: web
  backends:
  - service: web-stable
    weight: 950
  - service: web-canary
    weight: 50
---
# ServiceProfile for web
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: web.production.svc.cluster.local
  namespace: production
spec:
  routes:
  - name: GET /
    condition:
      method: GET
      pathRegex: /
    timeout: 5000ms
    isRetryable: true
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
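The canary above can be ramped incrementally by regenerating the TrafficSplit at each step. A minimal sketch; emit_split is a hypothetical helper, and in practice you would pipe each manifest to kubectl apply and check linkerd viz stat before raising the weight:

```shell
#!/bin/sh
# emit_split: print a TrafficSplit manifest giving the canary
# backend the requested percentage of traffic (0-100).
emit_split() {
  canary=$1
  stable=$((100 - canary))
  cat <<EOF
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-canary
  namespace: production
spec:
  service: web
  backends:
  - service: web-stable
    weight: ${stable}
  - service: web-canary
    weight: ${canary}
EOF
}

# Ramp: 5% -> 25% -> 50% -> 100%, verifying success rates between steps
for pct in 5 25 50 100; do
  emit_split "$pct"   # pipe to: kubectl apply -f -
done
```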
Example: Production Configuration
#!/bin/bash
# Install and configure Linkerd
# 1. Check prerequisites
linkerd check --pre
# 2. Install control plane
linkerd install | kubectl apply -f -
# 3. Wait for control plane
kubectl rollout status -n linkerd deployment/linkerd-destination --timeout=5m
# 4. Install viz
linkerd viz install | kubectl apply -f -
kubectl rollout status -n linkerd-viz deployment/web --timeout=5m
# 5. Enable namespace for automatic injection
kubectl annotate namespace production linkerd.io/inject=enabled --overwrite
# 6. Verify everything
linkerd check
# 7. Get dashboard access
linkerd viz dashboard
Conclusion
Linkerd provides a lightweight, production-ready service mesh for Kubernetes deployments on VPS and baremetal infrastructure. By leveraging automatic mTLS, implementing ServiceProfiles for traffic management, and using the built-in dashboard for observability, you create a resilient microservices platform with minimal operational complexity. Start with basic installation and automatic injection, add ServiceProfiles for traffic policies, and gradually implement advanced features like traffic splitting and authorization policies. Linkerd's simplicity and low resource overhead make it ideal for organizations looking to adopt a service mesh without excessive complexity.


