Telepresence for Remote Kubernetes Development

Telepresence lets you develop Kubernetes services locally while connected to a remote cluster, so you can debug with real cluster traffic and dependencies. This guide covers installing Telepresence, intercepting traffic from remote services, mounting remote volumes locally, and working with multi-cluster environments.

Prerequisites

  • kubectl configured with a Kubernetes cluster
  • Cluster-admin access for initial Traffic Manager installation
  • Linux, macOS, or WSL2
  • The service you want to intercept already deployed in the cluster
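
The tooling side of these prerequisites can be sanity-checked with a short snippet (a sketch; it only verifies the binaries are on your PATH, not cluster access):

```shell
# Preflight check: verify the required CLI tools are installed
for bin in kubectl telepresence; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: MISSING"
  fi
done
```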

Install Telepresence

# Linux (amd64)
sudo curl -fL https://app.getambassador.io/download/tel2oss/releases/download/v2.17.0/telepresence-linux-amd64 \
  -o /usr/local/bin/telepresence
sudo chmod a+x /usr/local/bin/telepresence

# Verify installation
telepresence version

# macOS
brew install datawire/blackbird/telepresence

Connect to a Cluster

On first connect, Telepresence installs the Traffic Manager into your cluster:

# Connect to the cluster (installs Traffic Manager on first run)
telepresence connect

# Verify connection
telepresence status

# You can now reach cluster services by DNS name
curl http://my-service.my-namespace.svc.cluster.local

# Disconnect
telepresence quit

Install Traffic Manager to a specific namespace:

# The Traffic Manager is installed in the ambassador namespace by default
# To use a custom namespace:
telepresence helm install --namespace telepresence

# Or with Helm directly:
helm repo add datawire https://app.getambassador.io
helm install traffic-manager datawire/telepresence \
  --namespace telepresence \
  --create-namespace

Traffic Interception

Intercepting a service routes cluster traffic to your local process:

# List interceptable services
telepresence list

# Full intercept: ALL traffic to the service goes to your local machine
telepresence intercept my-service --port 8080:http

# Run your local service on port 8080
# All traffic destined for my-service in the cluster now hits localhost:8080

# Check active intercepts
telepresence list --intercepts

# Leave the intercept
telepresence leave my-service

Intercept a specific port with environment:

# Intercept and export the remote pod's environment variables
telepresence intercept my-service \
  --port 8080:http \
  --env-file /tmp/my-service.env

# The env file contains all environment variables from the remote pod
cat /tmp/my-service.env
# DATABASE_URL=postgres://...
# REDIS_URL=redis://...
# API_KEY=...

# Export the variables and run your local service with remote config
# (set -a marks sourced variables for export to child processes)
set -a; source /tmp/my-service.env; set +a
./my-service --port 8080
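
One caveat with env files: a plain `source` sets variables in the current shell but does not export them, so a child process won't see them; wrapping the `source` in `set -a` / `set +a` makes every sourced assignment exported. A self-contained demo, using a made-up stand-in file (`.` is the portable form of `source`):

```shell
# Stand-in for the file written by --env-file (hypothetical value)
cat > /tmp/demo.env <<'EOF'
DATABASE_URL=postgres://db.example:5432/app
EOF

unset DATABASE_URL

# Plain sourcing: the variable is set, but a child process cannot see it
. /tmp/demo.env
sh -c 'echo "plain source: DATABASE_URL=${DATABASE_URL:-<unset>}"'
# → plain source: DATABASE_URL=<unset>

# With set -a, sourced assignments are exported to children
set -a; . /tmp/demo.env; set +a
sh -c 'echo "with set -a:  DATABASE_URL=${DATABASE_URL:-<unset>}"'
# → with set -a:  DATABASE_URL=postgres://db.example:5432/app
```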

Intercept with Docker:

# Run local service in Docker with remote environment
telepresence intercept my-service \
  --port 8080:http \
  --docker-run \
  -- --rm -it my-org/my-service:dev

# Telepresence handles the networking between Docker and the cluster

Personal Intercepts and Preview URLs

Personal intercepts let multiple developers intercept the same service simultaneously using header-based routing:

# Requires Ambassador Cloud (free tier available)
telepresence login

# Create a personal intercept (only your traffic is intercepted)
telepresence intercept my-service \
  --port 8080:http \
  --http-header=x-developer=my-name \
  --preview-url=true

# Output includes a preview URL:
# Preview URL: https://xyz123.preview.edgestack.me

# Other developers' traffic continues going to the cluster service
# Only requests with x-developer: my-name header go to your local machine

Manual header-based intercept (without Ambassador Cloud):

# Use a personal header without preview URLs
telepresence intercept my-service \
  --port 8080:http \
  --http-header=x-dev-intercept=alice
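
While connected, you can verify the header-based routing from the command line (hypothetical URLs; adjust the service name, namespace, and path for your setup):

```
# Without the header, the request goes to the cluster's replica
curl http://my-service.default.svc.cluster.local/health

# With the header, the request routes to your local process
curl -H "x-dev-intercept: alice" http://my-service.default.svc.cluster.local/health
```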

Volume Mounts

Mount remote pod volumes to your local filesystem for access to configs and secrets:

# Mount remote volumes during intercept
# Requires sshfs (FUSE-based on Linux; macFUSE plus sshfs on macOS)
sudo apt install -y sshfs   # Ubuntu/Debian

telepresence intercept my-service \
  --port 8080:http \
  --mount /tmp/remote-volumes

# Volumes are accessible locally:
ls /tmp/remote-volumes/
# var/run/secrets/kubernetes.io/serviceaccount/
# etc/config/

# Access secrets that the pod uses
cat /tmp/remote-volumes/var/run/secrets/kubernetes.io/serviceaccount/token

Read-only config access:

# Mount remote filesystem read-only for config inspection
telepresence intercept my-service \
  --port 8080:http \
  --mount /tmp/pod-fs \
  --env-file .env

# Run service with remote config files
DATABASE_CONFIG=/tmp/pod-fs/etc/config/database.yaml ./my-service

Local Development Workflow

A typical development workflow combining all Telepresence features:

# 1. Connect to the cluster
telepresence connect

# 2. Intercept the service you're working on
telepresence intercept payment-service \
  --port 3000:http \
  --env-file .env.remote \
  --mount /tmp/payment-volumes

# 3. Start your local service with remote environment
#    (set -a exports the sourced variables to child processes)
set -a; source .env.remote; set +a
npm run dev   # or go run . or python app.py

# 4. Your local service now receives real cluster traffic
# Other services call payment-service -> request routes to your localhost:3000

# 5. Edit code locally - changes are instant (no rebuild/redeploy cycle)
# 6. Debug with your local debugger at localhost:9229 etc.

# 7. When done, leave the intercept
telepresence leave payment-service

Helper script:

#!/bin/bash
# scripts/dev-intercept.sh
set -euo pipefail

SERVICE=${1:-my-service}
PORT=${2:-8080}

echo "Intercepting $SERVICE on port $PORT..."

# Leave the intercept when the script exits (including Ctrl+C)
trap 'telepresence leave "$SERVICE"' EXIT

telepresence intercept "$SERVICE" \
  --port "$PORT":http \
  --env-file .env.cluster \
  --mount "/tmp/$SERVICE-volumes"

echo "Intercept active. Press Ctrl+C to stop."
sleep infinity

Multi-Cluster Development

# Telepresence supports multiple kubecontexts
# Switch between clusters
kubectl config get-contexts

# Connect to a specific context
telepresence connect --context production-cluster

# List intercepts across all namespaces in the current cluster
telepresence list --all-namespaces

# Working with multiple namespaces
telepresence intercept my-service \
  --namespace staging \
  --port 8080:http

# Resolve services in other namespaces while connected
nslookup database.prod-infra.svc.cluster.local
nc -zv redis.prod-infra.svc.cluster.local 6379

Troubleshooting

"Traffic Manager not found" error:

# Install Traffic Manager explicitly
telepresence helm install

# Verify Traffic Manager pods
kubectl get pods -n ambassador

# Check Traffic Manager logs
kubectl logs -n ambassador -l app=traffic-manager --tail=50

Intercept hangs or fails:

# Check the service's port definitions; the port name must match
# what you pass to --port (e.g. 8080:http expects a port named "http")
kubectl get svc my-service -o yaml | grep -A 5 "ports:"

# Telepresence works best with named ports
# Update the service if needed (note: this patch replaces the entire ports list):
kubectl patch svc my-service --patch '{"spec":{"ports":[{"name":"http","port":80,"targetPort":8080}]}}'
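
For reference, a Service with a correctly named port might look like this (a sketch; the service name, label selector, and port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service        # assumed pod label
  ports:
    - name: http           # the named port referenced by --port 8080:http
      port: 80
      targetPort: 8080
```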

DNS not resolving cluster services:

# Verify Telepresence is connected
telepresence status

# Check if you can reach the cluster DNS
nslookup my-service.default.svc.cluster.local

# If DNS fails, try restarting the daemon
telepresence quit && telepresence connect

Volume mount fails on Linux:

# Install sshfs (Telepresence mounts remote volumes over sshfs) and FUSE support
sudo apt install -y sshfs fuse3

# Ensure FUSE is loaded
sudo modprobe fuse

# Allow non-root FUSE mounts
echo "user_allow_other" | sudo tee -a /etc/fuse.conf

Permission denied for intercept:

# Check whether your current user can create the resources Telepresence needs
kubectl auth can-i create deployments --namespace default

# Grant the Traffic Manager service account broader permissions
# (cluster-admin is a blunt instrument; scope this down for production clusters)
kubectl create clusterrolebinding telepresence-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=ambassador:traffic-manager

Conclusion

Telepresence eliminates the slow rebuild-push-deploy cycle by letting you run services locally while they participate in the real cluster environment. Traffic interception, remote volume mounts, and environment variable injection give your local process the same context as the deployed pod, making debugging complex distributed systems dramatically faster. For teams building microservices, Telepresence is one of the most effective tools for reducing the gap between local development and production behavior.