K3s Lightweight Kubernetes Installation
K3s is a lightweight, fully certified Kubernetes distribution designed for resource-constrained environments, IoT devices, edge computing, and development. This guide covers single-node and multi-node K3s installation, agent configuration, built-in features like Traefik and local-path storage, and deployment considerations for your VPS and baremetal infrastructure.
Table of Contents
- K3s Overview
- Single-Node Installation
- Multi-Node Installation
- Built-in Components
- Configuration and Customization
- Troubleshooting
- Practical Examples
- Conclusion
K3s Overview
What is K3s?
K3s is a lightweight Kubernetes distribution that:
- Runs in as little as 512MB of RAM
- Needs only ~200MB of disk space for a base install
- Is fully CNCF-certified for Kubernetes conformance
- Installs as a single binary
- Comes with batteries included (Traefik ingress, local-path storage, CoreDNS)
K3s vs Full Kubernetes
| Feature | K3s | Full Kubernetes |
|---|---|---|
| Size | ~50MB | ~5GB |
| Memory | 256MB+ | 2GB+ |
| Setup Time | < 1 minute | 30+ minutes |
| Simplicity | Very high | Complex |
| Scalability | Good | Excellent |
| Enterprise Features | Core features | Full suite |
Architecture
Server (Control Plane): kube-apiserver, datastore (SQLite by default; embedded etcd or an external database for HA), scheduler, controller-manager
Agent (Worker): kubelet, kube-proxy, containerd runtime
Built-in: Traefik ingress, local-path storage provisioner, CoreDNS
Single-Node Installation
Prerequisites
- Linux kernel v3.10+
- 512MB+ RAM
- 200MB+ disk space
- Internet connectivity
Quick Installation
curl -sfL https://get.k3s.io | sh -
This installs and starts the K3s server with default settings.
Wait for the node to become Ready (K3s bundles kubectl; until you configure access below, run it through the binary as root):
sudo k3s kubectl get nodes
sudo k3s kubectl get pods -A
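If you want to script the wait rather than re-running the command by hand, a small polling loop works. This is a sketch: `wait_for_ready` is a hypothetical helper, and it assumes `kubectl` is on the PATH and configured.

```shell
# Poll `kubectl get nodes` until a node reports Ready, with a bounded
# number of attempts so the script cannot hang forever.
wait_for_ready() {
  local tries=${1:-12}   # default: 12 attempts x 5s = 60s
  local i
  for i in $(seq 1 "$tries"); do
    if kubectl get nodes 2>/dev/null | grep -q ' Ready'; then
      echo "Node is Ready"
      return 0
    fi
    sleep 5
  done
  echo "Timed out waiting for node" >&2
  return 1
}
```

`wait_for_ready 24` waits up to two minutes; on a fresh install the node is usually Ready well within that.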
Verify Installation
# Check k3s service
sudo systemctl status k3s
# Test cluster
kubectl cluster-info
kubectl get nodes
kubectl get pods -A
Configure kubectl Access
K3s automatically writes a kubeconfig (readable only by root) to /etc/rancher/k3s/k3s.yaml:
sudo cat /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Or as regular user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config
Multi-Node Installation
Server Installation
On first server node:
# Install K3s server
curl -sfL https://get.k3s.io | sh -
# Get the server token (the file is root-owned)
sudo cat /var/lib/rancher/k3s/server/node-token
Agent Installation
On worker nodes:
# Export server token and IP
export K3S_URL=https://server-ip:6443
export K3S_TOKEN=server-token-from-above
# Install K3s agent
curl -sfL https://get.k3s.io | sh -
Or with explicit parameters:
curl -sfL https://get.k3s.io | K3S_URL=https://server-ip:6443 K3S_TOKEN=token sh -
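When joining many workers, it can help to template the join command once and ship it over SSH. A sketch: `agent_install_cmd` is a hypothetical helper, not part of K3s.

```shell
# Render the one-line agent install command for a given server IP and token.
# Pure string templating -- nothing here talks to the cluster.
agent_install_cmd() {
  local server_ip=$1 token=$2
  printf 'curl -sfL https://get.k3s.io | K3S_URL=https://%s:6443 K3S_TOKEN=%s sh -\n' \
    "$server_ip" "$token"
}
```

Usage: `agent_install_cmd 192.168.1.100 "$TOKEN" | ssh root@192.168.1.101 bash -s` runs the generated install command on the worker.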
Verify Multi-Node Cluster
# On server, view all nodes
kubectl get nodes
# Expected output: server and agents showing Ready
High Availability K3s
For production HA, replace the default embedded SQLite datastore with either K3s' embedded etcd (`--cluster-init`) or an external database:
# On server with external database
curl -sfL https://get.k3s.io | \
K3S_DATASTORE_ENDPOINT=postgres://user:pass@db-ip:5432/k3s \
K3S_DATASTORE_CAFILE=/path/to/ca.crt \
K3S_DATASTORE_CERTFILE=/path/to/cert.crt \
K3S_DATASTORE_KEYFILE=/path/to/key.key \
sh -
Supported databases:
- PostgreSQL
- MySQL
- etcd (external)
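Alternatively, K3s supports HA without any external database via embedded etcd. A sketch (`server-1-ip` and the token value are placeholders for your environment):

```shell
# First server: initialize an embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Each additional server: join using the first server's token
# (from /var/lib/rancher/k3s/server/node-token)
curl -sfL https://get.k3s.io | K3S_TOKEN=server-token sh -s - server \
  --server https://server-1-ip:6443
```

Use an odd number of server nodes (3, 5, ...) so etcd can maintain quorum.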
Built-in Components
Traefik Ingress Controller
K3s includes Traefik by default. Deploy ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: default
spec:
  ingressClassName: traefik
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
Access the Traefik dashboard (port-forward the deployment; the dashboard entrypoint listens on 9000 when enabled in the chart values):
kubectl port-forward -n kube-system deploy/traefik 9000:9000
# Visit http://localhost:9000/dashboard/
Disable Traefik:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
Local-Path Storage
K3s includes local-path-provisioner for persistent storage:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]  # keep the pod running
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-data
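Note that the local-path StorageClass uses WaitForFirstConsumer binding: the PVC stays Pending until a pod actually mounts it. To see where the data lands on disk (the path assumes the default `--data-dir`):

```shell
kubectl get pvc my-data                 # Pending until the pod is scheduled
kubectl get pv                          # Bound once the pod starts
sudo ls /var/lib/rancher/k3s/storage/   # one directory per provisioned PV
```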
CoreDNS
DNS is included and configured by default.
Check DNS:
kubectl get pods -n kube-system | grep coredns
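Beyond checking the pods, a quick functional test is to resolve an in-cluster name from a throwaway pod (the busybox image tag here is an assumption; any recent tag works):

```shell
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup kubernetes.default.svc.cluster.local
```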
Configuration and Customization
Server Configuration
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --disable servicelb \
  --disable metrics-server \
  --docker \
  --kubelet-arg "max-pods=200" \
  --kube-apiserver-arg "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log" \
  --data-dir /var/lib/rancher/k3s
Configuration File
Create K3s configuration:
# /etc/rancher/k3s/config.yaml
# server and token are only needed when joining an existing cluster
server: https://server-ip:6443
token: server-token
resolv-conf: /etc/resolv.conf
disable:
  - traefik
disable-helm-controller: false
node-label:
  - "environment=production"
K3s reads /etc/rancher/k3s/config.yaml automatically at startup, so once the file is in place, install normally:
curl -sfL https://get.k3s.io | sh -
Custom Manifests
Place manifests in /var/lib/rancher/k3s/server/manifests/:
# Create custom deployment (the manifests directory is root-owned)
sudo tee /var/lib/rancher/k3s/server/manifests/custom-app.yaml > /dev/null << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-app
  template:
    metadata:
      labels:
        app: custom-app
    spec:
      containers:
      - name: app
        image: myapp:1.0
EOF
# K3s applies these manifests on startup and re-applies them whenever the files change
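The manifests directory also accepts K3s' HelmChart custom resource, which deploys Helm charts declaratively with no helm binary installed. A sketch (the Grafana repo and chart names are illustrative, and the password value is a placeholder):

```yaml
# /var/lib/rancher/k3s/server/manifests/grafana.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system
spec:
  repo: https://grafana.github.io/helm-charts
  chart: grafana
  targetNamespace: monitoring
  valuesContent: |-
    adminPassword: change-me
```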
Troubleshooting
Service Won't Start
# Check service status
sudo systemctl status k3s
# View logs
sudo journalctl -u k3s -n 100
# Check if port 6443 is already in use
sudo netstat -tlnp | grep 6443
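K3s also ships a built-in preflight checker that validates kernel and cgroup configuration, which is useful when the service fails on unusual distros:

```shell
sudo k3s check-config
```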
Node Join Issues
# Verify the token is correct (root-owned file)
sudo cat /var/lib/rancher/k3s/server/node-token
# Check agent logs
sudo journalctl -u k3s-agent -n 100
# Verify connectivity to server (-k: the cert is self-signed)
ping -c 3 server-ip
curl -vk https://server-ip:6443/ping
Storage Issues
# Check local-path provisioner (runs in kube-system in K3s)
kubectl get pods -n kube-system | grep local-path
# View PVC status
kubectl get pvc
kubectl describe pvc my-data
# Check node storage
df -h /var/lib/rancher/k3s
Practical Examples
Example: Single-Node K3s Setup
#!/bin/bash
echo "=== Installing K3s Server ==="
curl -sfL https://get.k3s.io | sh -
echo "=== Waiting for K3s to be ready ==="
sleep 10  # crude wait; polling `k3s kubectl get nodes` until Ready is more robust
echo "=== Setting up kubeconfig ==="
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config
echo "=== Verifying cluster ==="
kubectl cluster-info
kubectl get nodes
kubectl get pods -A
echo "=== K3s installed successfully ==="
echo "Deploy apps with: kubectl apply -f manifest.yaml"
Example: Multi-Node K3s Cluster
#!/bin/bash
# Setup server node
echo "=== Installing K3s Server ==="
SERVER_IP="192.168.1.100"
curl -sfL https://get.k3s.io | sh -
TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
echo "Server token: $TOKEN"
echo "Server IP: $SERVER_IP"
# Setup agent nodes
AGENTS=("192.168.1.101" "192.168.1.102" "192.168.1.103")
for agent_ip in "${AGENTS[@]}"; do
  echo "=== Installing agent on $agent_ip ==="
  ssh "root@$agent_ip" << EOF
curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER_IP:6443 K3S_TOKEN=$TOKEN sh -
EOF
done
# Verify cluster
kubectl get nodes
Example: K3s with Custom Configuration
#!/bin/bash
# Install K3s with custom settings
curl -sfL https://get.k3s.io | sh -s - \
  --cluster-init \
  --disable traefik \
  --disable servicelb \
  --kubelet-arg "max-pods=250" \
  --kubelet-arg "eviction-hard=memory.available<100Mi" \
  --kube-apiserver-arg "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log" \
  --node-label "node-role=master" \
  --node-taint "master=true:NoSchedule"
# Deploy custom ingress (note: Traefik was disabled above, so this
# assumes another ingress controller and cert-manager are installed)
kubectl apply -f - << 'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80
EOF
Conclusion
K3s is an excellent choice for Kubernetes deployments on resource-constrained VPS and baremetal servers. Its lightweight nature, quick installation, and built-in components make it ideal for development, edge computing, and small production deployments. Start with a single-node K3s cluster for development, expand to multi-node for production workloads, and leverage the built-in Traefik and local-path storage for immediate functionality. K3s provides the full Kubernetes experience in a minimal package, making it perfect for teams looking to adopt Kubernetes without the operational overhead of full distributions.


