Kubernetes Installation with kubeadm: Complete Production Guide

Kubeadm is the official tool for bootstrapping production-grade Kubernetes clusters. This comprehensive guide walks through installing Kubernetes using kubeadm on Linux systems, covering single-node and multi-node cluster setups with best practices for production deployments.

Table of Contents

  • Introduction
  • Prerequisites
  • Pre-Installation Setup
  • Installing Container Runtime
  • Installing kubeadm, kubelet, and kubectl
  • Initializing the Control Plane
  • Installing Pod Network Add-on
  • Joining Worker Nodes
  • Verification and Testing
  • Production Configuration
  • Troubleshooting
  • Conclusion

Introduction

Kubeadm automates the Kubernetes cluster setup process, handling certificate generation, control plane component deployment, and cluster bootstrapping. It's the recommended method for creating production Kubernetes clusters on-premises or in cloud environments.

What is Kubeadm?

Kubeadm performs essential cluster bootstrapping tasks (each maps to a kubeadm init phase, shown after this list):

  • Initializes the control plane
  • Manages certificates and kubeconfig files
  • Deploys core Kubernetes components
  • Simplifies node joining process
  • Supports cluster upgrades
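
Each task above can also be run in isolation through kubeadm's phase subcommands, for example:

# Show the full list of phases that kubeadm init executes, in order
kubeadm init --help

# Run a single phase on its own, e.g. only certificate generation
sudo kubeadm init phase certs all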

Cluster Architecture

Control Plane Node(s)
├── API Server
├── Scheduler
├── Controller Manager
└── etcd

Worker Nodes
├── kubelet
├── kube-proxy
└── Container Runtime
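
On a running control plane node, the components in the top half of this diagram run as static pods managed directly by the kubelet; their manifests live under /etc/kubernetes/manifests:

# List the static pod manifests for the control plane components
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml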

Prerequisites

System Requirements

Control Plane Node:

  • 2+ CPUs
  • 4GB+ RAM
  • 20GB+ disk space
  • Ubuntu 20.04/22.04, Debian 11, CentOS 8, Rocky Linux 8

Worker Nodes:

  • 1+ CPU
  • 2GB+ RAM
  • 20GB+ disk space
  • Same OS as control plane

Network Requirements

  • Full network connectivity between all nodes
  • Unique hostname, MAC address, and product_uuid for each node
  • Required ports open (see firewall section)
  • Internet access for pulling images

Verify Requirements

# Check CPU cores
nproc

# Check memory
free -h

# Check disk space
df -h

# Check hostname
hostname

# Check MAC address
ip link

# Check product_uuid
sudo cat /sys/class/dmi/id/product_uuid
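
Cross-node connectivity on the required ports can be spot-checked with nc (netcat). Note these checks only succeed once something is listening on the port, so they are most useful when debugging joins later; the IPs are placeholders:

# From a worker: is the API server port reachable on the control plane?
nc -vz <CONTROL_PLANE_IP> 6443

# From the control plane: is each worker's kubelet port reachable?
nc -vz <WORKER_IP> 10250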

Pre-Installation Setup

Disable Swap

Kubernetes requires swap to be disabled:

# Disable swap immediately
sudo swapoff -a

# Disable swap permanently
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Verify swap is off
free -h
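
Some images activate swap through a systemd swap unit or zram rather than /etc/fstab; in that case the sed above will not persist across reboots, and masking swap activation is one option:

# Show any systemd-managed swap units
systemctl list-units --type swap

# Prevent swap from being activated at boot
sudo systemctl mask swap.target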

Configure Kernel Modules

# Load required modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Verify modules are loaded
lsmod | grep br_netfilter
lsmod | grep overlay

Configure Sysctl Parameters

# Set up required sysctl params
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params
sudo sysctl --system

# Verify settings
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Configure Firewall

Control Plane Node:

# UFW (Ubuntu/Debian)
sudo ufw allow 6443/tcp    # Kubernetes API server
sudo ufw allow 2379:2380/tcp  # etcd server client API
sudo ufw allow 10250/tcp   # Kubelet API
sudo ufw allow 10259/tcp   # kube-scheduler
sudo ufw allow 10257/tcp   # kube-controller-manager

# firewalld (CentOS/Rocky)
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --reload

Worker Nodes:

# UFW
sudo ufw allow 10250/tcp   # Kubelet API
sudo ufw allow 30000:32767/tcp  # NodePort Services

# firewalld
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload
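
Either way, confirm the rules took effect:

# UFW
sudo ufw status numbered

# firewalld
sudo firewall-cmd --list-ports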

Installing Container Runtime

Kubernetes requires a CRI-compatible container runtime on every node, control plane and workers alike. We'll install containerd (the recommended runtime).

Install containerd

Ubuntu/Debian:

# Install dependencies
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release

# Add Docker GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add Docker repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install containerd
sudo apt-get update
sudo apt-get install -y containerd.io

CentOS/Rocky Linux:

# Install required packages
sudo dnf install -y dnf-utils

# Add Docker repository
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install containerd
sudo dnf install -y containerd.io

Configure containerd

# Create default configuration
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

# Configure systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
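
# (Optional) Align the pause image with what kubeadm v1.28 expects; the
# default config generated above may pin an older registry.k8s.io/pause tag
sudo sed -i 's|sandbox_image = .*|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml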

# Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

# Verify containerd is running
sudo systemctl status containerd

Installing kubeadm, kubelet, and kubectl

Add Kubernetes Repository

Ubuntu/Debian:

# Add Kubernetes GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add Kubernetes repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update package list
sudo apt-get update

CentOS/Rocky Linux:

# Add Kubernetes repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Install Kubernetes Components

Ubuntu/Debian:

# Install kubelet, kubeadm, and kubectl
sudo apt-get install -y kubelet kubeadm kubectl

# Hold packages at current version
sudo apt-mark hold kubelet kubeadm kubectl

# Enable kubelet service
sudo systemctl enable kubelet

CentOS/Rocky Linux:

# Install components
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# Enable kubelet
sudo systemctl enable kubelet

Verify Installation

# Check versions
kubeadm version
kubelet --version
kubectl version --client

# The kubelet restarts in a crash loop until the cluster is initialized; this is expected
sudo systemctl status kubelet

Initializing the Control Plane

Initialize Cluster

Basic initialization:

# Initialize control plane
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=<CONTROL_PLANE_IP>

# Example with Flannel CNI (requires 10.244.0.0/16)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Advanced initialization:

# With custom configuration
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.100 \
  --apiserver-cert-extra-sans=k8s.example.com \
  --control-plane-endpoint=k8s.example.com \
  --kubernetes-version=v1.28.0 \
  --node-name=$(hostname -s)
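
The same options can also be captured in a versioned configuration file and passed with --config, which is easier to keep in version control. A minimal sketch; the address, hostname, and CIDR are placeholders matching the example above:

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.100
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: k8s.example.com:6443
networking:
  podSubnet: 10.244.0.0/16

# Initialize from the file
sudo kubeadm init --config kubeadm-config.yaml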

Configure kubectl

After initialization, configure kubectl for the current user:

# Set up kubeconfig for regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Or for root user
export KUBECONFIG=/etc/kubernetes/admin.conf

Verify Control Plane

# Check nodes
kubectl get nodes

# Check system pods
kubectl get pods -n kube-system

# Check component status
kubectl get --raw='/readyz?verbose'

Installing Pod Network Add-on

Kubernetes requires a pod network add-on (a CNI plugin) so pods can communicate across nodes; nodes stay NotReady until one is installed. Install exactly one:

Flannel (Simple, Recommended for Learning)

# Install Flannel
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Verify Flannel pods
kubectl get pods -n kube-flannel

# Wait for all pods to be Running
kubectl get pods -A

Calico (Production, Advanced Networking)

# Install Calico operator
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

# Download custom resources
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O

# Edit the CIDR in custom-resources.yaml if needed (default 192.168.0.0/16), then apply
kubectl create -f custom-resources.yaml

# Verify Calico
kubectl get pods -n calico-system

Cilium (eBPF-based, High Performance)

# Install Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz{,.sha256sum}
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}

# Install Cilium
cilium install --version 1.14.0

# Verify
cilium status --wait

Wait for Network Ready

# Watch pods until all are Running
watch kubectl get pods -A

# Check node status (should be Ready)
kubectl get nodes

Joining Worker Nodes

Get Join Command

On control plane node:

# Generate join command
kubeadm token create --print-join-command

# Output example:
# kubeadm join 192.168.1.100:6443 --token abc123.xyz789 --discovery-token-ca-cert-hash sha256:hash...
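
Bootstrap tokens expire after 24 hours by default; the CA cert hash can also be recomputed at any time on the control plane:

# List existing tokens and their TTLs
kubeadm token list

# Recompute the discovery hash from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'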

Join Worker Nodes

On each worker node:

# Run the join command from above
sudo kubeadm join 192.168.1.100:6443 \
  --token abc123.xyz789 \
  --discovery-token-ca-cert-hash sha256:hash...

Verify Worker Nodes

On control plane:

# Check all nodes
kubectl get nodes

# Should see all nodes in Ready state
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   10m   v1.28.0
k8s-worker1   Ready    <none>          5m    v1.28.0
k8s-worker2   Ready    <none>          5m    v1.28.0

Verification and Testing

Deploy Test Application

# Create nginx deployment
kubectl create deployment nginx --image=nginx

# Expose as service
kubectl expose deployment nginx --port=80 --type=NodePort

# Check deployment
kubectl get deployments
kubectl get pods
kubectl get services

# Test connectivity
curl http://<NODE_IP>:<NODE_PORT>
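
The assigned NodePort can be read from the Service object rather than eyeballed; the jsonpath query below assumes the Service created by kubectl expose is named nginx:

# Look up the randomly assigned NodePort for the nginx service
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'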

Complete Test Suite

# test-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer   # Note: stays <pending> on bare metal unless a provider such as MetalLB is installed

# Apply test
kubectl apply -f test-deployment.yaml

# Verify
kubectl get all

# Cleanup
kubectl delete -f test-deployment.yaml

Production Configuration

High Availability Setup

For production, use multiple control plane nodes:

# Initialize first control plane
sudo kubeadm init \
  --control-plane-endpoint="load-balancer-dns:6443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16

# Join additional control planes
sudo kubeadm join load-balancer-dns:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane \
  --certificate-key <cert-key>
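
The <cert-key> is printed by kubeadm init when --upload-certs is used, and it expires after two hours; regenerate it on an existing control plane node if needed:

# Re-upload control plane certificates and print a fresh certificate key
sudo kubeadm init phase upload-certs --upload-certs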

External etcd

# Create etcd cluster configuration
# etcd-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://etcd1:2379
    - https://etcd2:2379
    - https://etcd3:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
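
Pass the file at init time; note the certificate paths above assume the etcd CA and client certificates were already copied onto the control plane node:

# Initialize against the external etcd cluster
sudo kubeadm init --config etcd-config.yaml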

Resource Management

# Configure kubelet with custom resources
# /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
evictionHard:
  memory.available: "200Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
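
Edits to this file only take effect after the kubelet is restarted on that node:

# Apply the updated kubelet configuration
sudo systemctl restart kubelet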

Security Hardening

# Enable audit logging
# Add these flags under the kube-apiserver command list in
# /etc/kubernetes/manifests/kube-apiserver.yaml
- --audit-log-path=/var/log/kubernetes/audit.log
- --audit-log-maxage=30
- --audit-log-maxbackup=10
- --audit-log-maxsize=100
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml

# Enable Pod Security admission
- --enable-admission-plugins=NodeRestriction,PodSecurity
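
The policy file referenced by --audit-policy-file must exist before the API server will start (and the static pod needs hostPath mounts for both the policy file and the log directory). A minimal sketch that logs request metadata for everything; tighten the rules for production:

# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata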

Troubleshooting

Common Issues

Kubelet Not Starting

# Check kubelet status
sudo systemctl status kubelet

# View logs
sudo journalctl -u kubelet -f

# Common issues:
# - Swap not disabled
# - Container runtime not running
# - Port conflicts

Pods Not Starting

# Check pod status
kubectl get pods -A

# Describe problematic pod
kubectl describe pod <pod-name> -n <namespace>

# Check logs
kubectl logs <pod-name> -n <namespace>

Node Not Ready

# Check node details
kubectl describe node <node-name>

# Common causes:
# - CNI not installed/working
# - Kubelet issues
# - Resource constraints

Network Issues

# Check CNI pods
kubectl get pods -n kube-system | grep -E 'flannel|calico|cilium'

# Test pod connectivity
kubectl run test --image=busybox --rm -it -- sh
# Inside pod:
# ping google.com
# nslookup kubernetes.default

Reset Cluster

# Reset this node (warning: wipes this node's cluster state)
sudo kubeadm reset

# Clean up
sudo rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config

# Restart containerd
sudo systemctl restart containerd
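
kubeadm reset does not flush iptables or IPVS state; if the node will be reused, clear those manually:

# Flush rules left behind by kube-proxy and the CNI plugin
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

# Only if kube-proxy ran in IPVS mode
sudo ipvsadm -C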

Conclusion

You've successfully installed a Kubernetes cluster using kubeadm. This setup provides a solid foundation for deploying containerized applications at scale; apply the production configuration above (HA control plane, resource limits, security hardening) before trusting it with real workloads.

Key Takeaways

  • Kubeadm simplifies Kubernetes cluster bootstrapping
  • Container Runtime (containerd) is required before installation
  • Pod Network add-on enables pod communication
  • Worker Nodes join cluster using generated tokens
  • Production requires HA control plane and external etcd

Quick Command Reference

# Installation
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# Verification
kubectl get nodes
kubectl get pods -A
kubectl cluster-info

# Reset
sudo kubeadm reset

Next Steps

  1. Deploy Applications: Create deployments and services
  2. Storage: Configure persistent volumes
  3. Ingress: Set up ingress controller
  4. Monitoring: Install Prometheus and Grafana
  5. Logging: Deploy ELK or Loki stack
  6. Security: Implement RBAC and network policies
  7. Backup: Configure etcd backups

Your Kubernetes cluster is ready for deploying production workloads!