Teleport Secure Infrastructure Access

Teleport is an open-source zero-trust infrastructure access platform that provides secure SSH, Kubernetes, database, and application access through certificate-based authentication. By replacing static credentials with short-lived certificates, Teleport enables RBAC, session recording, and full audit trails across your entire infrastructure.

Prerequisites

  • Ubuntu 22.04/20.04 or CentOS/Rocky Linux 8+ (64-bit)
  • A domain name with DNS control (e.g., teleport.example.com)
  • Valid TLS certificate (Let's Encrypt works)
  • Minimum 2 CPU cores, 4 GB RAM for the Auth/Proxy server
  • Open ports: 443 (web UI/proxy), 3022 (SSH service on each node), 3023 (SSH proxy), 3024 (reverse tunnel), 3025 (auth), 3026 (Kubernetes proxy), 3036 (MySQL proxy)
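Before installing, it can help to confirm DNS and reachability from a workstation. A minimal check, assuming teleport.example.com is the domain you chose above:

```shell
# Confirm the DNS record resolves to the intended host
dig +short teleport.example.com

# Confirm the web/proxy port is reachable (once the server is up)
nc -zv teleport.example.com 443
```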

Installing Teleport

Add the Teleport repository and install:

# Add Teleport APT repository (Ubuntu/Debian)
# Note: apt-key is deprecated; use a signed-by keyring instead
sudo curl -fsSL https://deb.releases.teleport.dev/teleport-pubkey.asc \
  -o /usr/share/keyrings/teleport-archive-keyring.asc
echo "deb [signed-by=/usr/share/keyrings/teleport-archive-keyring.asc] https://deb.releases.teleport.dev/ stable main" \
  | sudo tee /etc/apt/sources.list.d/teleport.list
sudo apt-get update
sudo apt-get install -y teleport

# For CentOS/Rocky Linux (yum-config-manager is provided by yum-utils)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.teleport.dev/teleport.repo
sudo yum install -y teleport

# Verify installation
teleport version
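If you do not yet have the TLS certificate from the prerequisites, one way to obtain it is certbot in standalone mode. A sketch, assuming port 80 on the server is temporarily free for the HTTP-01 challenge:

```shell
# Issue a Let's Encrypt certificate for the cluster domain
sudo apt-get install -y certbot
sudo certbot certonly --standalone -d teleport.example.com

# Certificate and key land under /etc/letsencrypt/live/teleport.example.com/
```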

Configuring the Auth and Proxy Service

Teleport uses a single binary that runs different services. Create the main configuration:

# Create Teleport data directory
sudo mkdir -p /var/lib/teleport

# Generate default configuration
sudo teleport configure \
  --cluster-name=teleport.example.com \
  --public-addr=teleport.example.com:443 \
  --cert-file=/etc/letsencrypt/live/teleport.example.com/fullchain.pem \
  --key-file=/etc/letsencrypt/live/teleport.example.com/privkey.pem \
  | sudo tee /etc/teleport.yaml > /dev/null

Edit /etc/teleport.yaml for production use:

teleport:
  nodename: teleport.example.com
  data_dir: /var/lib/teleport
  log:
    output: stderr
    severity: INFO

auth_service:
  enabled: yes
  cluster_name: teleport.example.com
  listen_addr: 0.0.0.0:3025
  tokens:
    - "node:your-node-join-token"
    - "kube:your-kube-join-token"
    - "db:your-db-join-token"
  session_recording: node  # or 'proxy' to record sessions at the proxy instead

proxy_service:
  enabled: yes
  listen_addr: 0.0.0.0:3023
  web_listen_addr: 0.0.0.0:443
  public_addr: teleport.example.com:443
  https_keypairs:
    - key_file: /etc/letsencrypt/live/teleport.example.com/privkey.pem
      cert_file: /etc/letsencrypt/live/teleport.example.com/fullchain.pem
  kube_listen_addr: 0.0.0.0:3026

ssh_service:
  enabled: yes
  listen_addr: 0.0.0.0:3022

Enable and start Teleport:

sudo systemctl enable teleport
sudo systemctl start teleport

# Check status
sudo systemctl status teleport
sudo journalctl -u teleport -f
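Once the service is up, the cluster identity and certificate authorities can be verified from the auth server:

```shell
# Show cluster name, Teleport version, and CA key fingerprints
sudo tctl status
```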

Create the first admin user:

# Create admin user and get invite link
sudo tctl users add admin --roles=editor,access --logins=root,ubuntu

# The command outputs a URL to set up MFA and password

Adding SSH Nodes

Install Teleport on each server you want to manage and join it to the cluster:

# Install Teleport on the node (same steps as above)
# Then configure as a node-only service

sudo tee /etc/teleport.yaml > /dev/null <<EOF
teleport:
  nodename: web-server-01
  data_dir: /var/lib/teleport
  auth_token: your-node-join-token
  auth_servers:
    - teleport.example.com:3025

auth_service:
  enabled: no

proxy_service:
  enabled: no

ssh_service:
  enabled: yes
  listen_addr: 0.0.0.0:3022
  labels:
    env: production
    role: webserver
EOF

sudo systemctl enable teleport
sudo systemctl start teleport
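After the node service starts, confirm it registered with the cluster:

```shell
# List nodes registered with the cluster (run on the auth server)
sudo tctl nodes ls
```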

Connect to nodes via tsh:

# Install tsh locally (it ships with the teleport package, and standalone
# tsh client builds are available from the Teleport downloads page)
# Login to Teleport cluster
tsh login --proxy=teleport.example.com --user=admin

# List available nodes
tsh ls

# SSH to a node
tsh ssh ubuntu@web-server-01

# Run a command on every node matching a label (user@label=value syntax)
tsh ssh ubuntu@env=production uptime

Kubernetes Access

Deploy the Kubernetes service to expose cluster access through Teleport:

# Deploy Teleport Kubernetes service using Helm
helm repo add teleport https://charts.releases.teleport.dev
helm repo update

# Install as a Kubernetes service joined to your cluster
helm install teleport-kube-agent teleport/teleport-kube-agent \
  --set proxyAddr=teleport.example.com:443 \
  --set authToken=your-kube-join-token \
  --set kubeClusterName=production-cluster \
  --namespace teleport \
  --create-namespace
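To confirm the agent joined, check the pod and its logs. This assumes the default labels set by the teleport-kube-agent chart with the release name used above:

```shell
# Verify the agent pod is running
kubectl get pods -n teleport

# Inspect agent logs for join or connectivity errors
kubectl logs -n teleport -l app=teleport-kube-agent
```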

Access Kubernetes through Teleport:

# List registered Kubernetes clusters
tsh kube ls

# Login to a cluster (sets kubeconfig context)
tsh kube login production-cluster

# Use kubectl normally
kubectl get pods -A

# Direct exec via tsh
tsh kube exec -it pod-name -- /bin/bash

Database Proxying

Register databases for secure, audited access:

# Configure database service in teleport.yaml
sudo tee -a /etc/teleport.yaml > /dev/null <<EOF

db_service:
  enabled: yes
  databases:
    - name: postgres-prod
      description: "Production PostgreSQL"
      protocol: postgres
      uri: postgres-host.internal:5432
      static_labels:
        env: production
    - name: mysql-prod
      description: "Production MySQL"
      protocol: mysql
      uri: mysql-host.internal:3306
EOF

sudo systemctl restart teleport

Connect to databases through Teleport:

# List available databases
tsh db ls

# Login to database (gets temporary certs)
tsh db login postgres-prod

# Connect using standard client
tsh db connect postgres-prod --db-user=appuser --db-name=mydb

# Or get connection string for psql
eval "$(tsh db env postgres-prod)"
psql
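For GUI clients or tools that cannot consume tsh certificates directly, tsh can open a local authenticated tunnel on a fixed port. A sketch, with the port number chosen arbitrarily:

```shell
# Open a local tunnel to the database; auth is handled by the tunnel
tsh proxy db postgres-prod --tunnel --port=15432

# In another terminal, point any Postgres client at localhost
psql "host=localhost port=15432 dbname=mydb user=appuser"
```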

RBAC Roles and Permissions

Create granular roles to control access:

# Create a developer role - read-only Kubernetes, SSH to staging only
cat > developer-role.yaml <<EOF
kind: role
version: v5
metadata:
  name: developer
spec:
  allow:
    logins: ['ubuntu', 'ec2-user']
    node_labels:
      env: ['staging']
    kubernetes_groups: ['view']
    kubernetes_labels:
      env: ['staging']
    db_labels:
      env: ['staging']
    db_users: ['readonly']
    db_names: ['*']
  deny:
    node_labels:
      env: ['production']
EOF

sudo tctl create -f developer-role.yaml

# Assign role to user
sudo tctl users update developer-alice --set-roles=developer
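The stored role definition and a user's assigned roles can be inspected with tctl:

```shell
# Show the role definition as stored
sudo tctl get roles/developer

# Show the user resource, including assigned roles
sudo tctl get users/developer-alice
```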

# Create production role with time-limited access
cat > prod-role.yaml <<EOF
kind: role
version: v5
metadata:
  name: prod-access
spec:
  options:
    max_session_ttl: 4h
    require_session_mfa: yes
  allow:
    logins: ['ubuntu']
    node_labels:
      env: ['production']
    kubernetes_groups: ['system:masters']
EOF

sudo tctl create -f prod-role.yaml

Session Recording and Audit

Configure session recording and review audit logs:

# Audit events are stored as JSON files under the auth server's data directory
sudo ls /var/lib/teleport/log/

# Review events in the Web UI under Activity -> Audit Log, or inspect
# the JSON files directly with a tool such as jq

# List recorded sessions
tsh recordings ls

# Play back a recorded session
tsh play <session-id>

# Configure enhanced session recording (requires BPF kernel support)
# In teleport.yaml under ssh_service:
# enhanced_recording:
#   enabled: true
#   command_buffer_size: 8
#   disk_buffer_size: 300
#   network_buffer_size: 8
#   cgroup_path: /cgroup2

SSO Integration

Integrate with GitHub OAuth for SSO:

# Create GitHub SSO connector
cat > github-connector.yaml <<EOF
kind: github
version: v3
metadata:
  name: github
spec:
  client_id: your-github-oauth-app-client-id
  client_secret: your-github-oauth-app-client-secret
  display: GitHub
  redirect_url: https://teleport.example.com/v1/webapi/github/callback
  # teams_to_logins is deprecated; map GitHub teams to Teleport roles instead,
  # and grant logins/kubernetes_groups through those roles
  teams_to_roles:
    - organization: your-org
      team: devops
      roles:
        - editor
        - access
    - organization: your-org
      team: developers
      roles:
        - access
EOF

sudo tctl create -f github-connector.yaml
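Users can then authenticate through GitHub from the CLI:

```shell
# Login via the GitHub connector (opens a browser for the OAuth flow)
tsh login --proxy=teleport.example.com --auth=github
```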

# For OIDC (e.g., Okta, Google); note that OIDC connectors require Teleport Enterprise
cat > oidc-connector.yaml <<EOF
kind: oidc
version: v2
metadata:
  name: okta
spec:
  issuer_url: https://your-org.okta.com
  client_id: your-client-id
  client_secret: your-client-secret
  redirect_url: https://teleport.example.com/v1/webapi/oidc/callback
  scope: ['openid', 'profile', 'email', 'groups']
  claims_to_roles:
    - claim: groups
      value: devops
      roles: ['editor']
EOF

sudo tctl create -f oidc-connector.yaml

Troubleshooting

Node not showing up in tsh ls:

# Check node service status on the node
sudo systemctl status teleport
sudo journalctl -u teleport --since "1 hour ago"

# Verify the auth token is correct and not expired
sudo tctl tokens ls

# Check network connectivity from node to auth server
nc -zv teleport.example.com 3025

Certificate errors on login:

# Re-issue certificates
tsh logout
tsh login --proxy=teleport.example.com --user=admin

# Check certificate validity
tsh status

Session recording not working:

# Verify storage backend has write permissions
ls -la /var/lib/teleport/log/

# Check disk space
df -h /var/lib/teleport/

Database connection refused:

# Test direct connectivity to database from Teleport node
nc -zv postgres-host.internal 5432

# Check database service logs
sudo journalctl -u teleport | grep db_service

Conclusion

Teleport provides a comprehensive zero-trust access platform that eliminates static credentials and enforces certificate-based authentication across SSH, Kubernetes, and databases. With RBAC, session recording, and SSO integration, it gives you full visibility and control over who accesses your infrastructure and what they do. Start with a single auth/proxy node and expand to cover your entire fleet as your needs grow.