Kubernetes Storage Classes and CSI Drivers

Storage Classes define how persistent storage is provisioned in Kubernetes, while Container Storage Interface (CSI) drivers enable dynamic provisioning against a variety of storage backends. This guide covers StorageClass parameters, dynamic provisioning, NFS and Ceph CSI drivers, reclaim policies, and best practices for managing persistent volumes in your VPS and bare-metal Kubernetes infrastructure.

Storage Class Fundamentals

What is a StorageClass?

A StorageClass is a Kubernetes object that describes the type and properties of storage that can be provisioned. It acts as a template for creating PersistentVolumes (PVs).

Benefits

  • Dynamic Provisioning: Automatically create PVs when PVCs are created
  • Multiple Storage Types: Support different storage backends in one cluster
  • Automation: Reduce manual volume creation
  • Flexibility: Different QoS levels for different applications

Default Storage Classes

Check existing storage classes:

kubectl get storageclass
kubectl describe storageclass standard
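A cluster can mark one class as the default, used by any PVC that omits storageClassName. A minimal sketch — the class name and provisioner here are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # Well-known annotation: PVCs without storageClassName bind to this class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
```

Only one class should carry this annotation; if several do, PVC creation behavior becomes ambiguous.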

StorageClass Definition

Basic StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  encrypted: "true"
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

StorageClass Parameters

Common parameters vary by provisioner:

AWS EBS:

provisioner: ebs.csi.aws.com
parameters:
  type: gp3        # Volume type
  iops: "3000"     # I/O operations per second
  throughput: "125" # MB/s
  encrypted: "true" # Enable encryption
  kmsKeyId: "arn:aws:kms:..." # Custom KMS key

vSphere:

provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "gold"
  datastoreurl: "ds:///vmfs/volumes/.../"

Local Storage:

provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Volume Binding Modes

Immediate: Bind and provision as soon as the PVC is created

volumeBindingMode: Immediate

WaitForFirstConsumer: Delay binding and provisioning until a pod that uses the PVC is scheduled

volumeBindingMode: WaitForFirstConsumer

Topology-Aware Provisioning

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-east-1a
    - us-east-1b

Dynamic Provisioning

Creating PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 50Gi

Create the PVC:

kubectl apply -f pvc.yaml
kubectl get pvc -n production
kubectl describe pvc app-data -n production

Using PVC in Pod

apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: production
spec:
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: data
      mountPath: /var/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data

Expanding Volumes

PVCs can be expanded if their StorageClass sets allowVolumeExpansion: true:

# Edit PVC to increase size
kubectl patch pvc app-data -n production -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

# Verify expansion
kubectl get pvc -n production

CSI Drivers

CSI Overview

CSI (Container Storage Interface) is a standard for storage plugins in Kubernetes.

Common CSI Drivers

  • AWS EBS: ebs.csi.aws.com
  • NFS: nfs.csi.k8s.io
  • Ceph RBD: rbd.csi.ceph.com
  • CephFS: cephfs.csi.ceph.com
  • vSphere: csi.vsphere.vmware.com

Installing a CSI Driver

Generic installation pattern:

# Add Helm repository
helm repo add driver-name https://charts.example.com
helm repo update

# Install CSI driver
helm install csi-driver driver-name/csi-driver \
  -n kube-system \
  -f values.yaml

Verify the installation:

kubectl get daemonsets -n kube-system | grep csi
kubectl get deployments -n kube-system | grep csi
kubectl get csidrivers

NFS CSI Driver

Installing the NFS CSI Driver

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update

helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
  -n kube-system

NFS StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com
  share: /exports/kubernetes
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1

NFS PVC Example

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
  namespace: production
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 100Gi

NFS Mounting

# Get NFS mount info from provisioned PVs
kubectl get pv -o yaml | grep -A5 nfs

# Test the export manually from a cluster node (requires the NFS client utilities)
mount -t nfs nfs-server.example.com:/exports/kubernetes /mnt
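Dynamic provisioning aside, an existing NFS export can also be attached statically by creating the PV by hand — no provisioner is involved. A sketch using the in-tree NFS volume source (server and path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-static
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:                       # in-tree NFS volume source; mounted directly by kubelet
    server: nfs-server.example.com
    path: /exports/kubernetes
  storageClassName: ""       # empty class keeps dynamic provisioners from matching this PV
```

A PVC with the same empty storageClassName and a compatible size/access mode will bind to this PV.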

Ceph CSI Driver

Checking the Ceph Cluster

First, ensure the Ceph cluster is running:

# Check Ceph cluster status
ceph status
ceph osd tree
ceph osd pool ls

Installing Ceph CSI

helm repo add ceph-csi https://ceph.github.io/csi-charts
helm repo update

helm install ceph-csi-rbd ceph-csi/ceph-csi-rbd \
  -n ceph-csi-rbd \
  --create-namespace \
  --set configMapName=ceph-config

Ceph StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-fast
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: a7f64e5b-3b65-46eb-bdc2-6ce22c642d0e
  pool: kubernetes-fast
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
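The Helm install above references configMapName=ceph-config; ceph-csi reads the cluster ID and monitor addresses from that ConfigMap. A minimal sketch — the monitor IPs are placeholders (take yours from ceph mon dump), and the clusterID must match the one in the StorageClass:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-config
  namespace: ceph-csi-rbd
data:
  # ceph-csi expects a JSON list of clusters under the config.json key
  config.json: |-
    [
      {
        "clusterID": "a7f64e5b-3b65-46eb-bdc2-6ce22c642d0e",
        "monitors": [
          "10.0.0.1:6789",
          "10.0.0.2:6789",
          "10.0.0.3:6789"
        ]
      }
    ]
```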

Creating Ceph Secret

# Get the Ceph admin key (raw -- do not base64-encode it;
# kubectl encodes secret values itself)
ceph auth get-key client.admin

# Create secret
kubectl create secret generic csi-rbd-secret \
  -n ceph-csi-rbd \
  --from-literal=userID=admin \
  --from-literal=userKey=<key>

Reclaim Policies

Delete Policy

Volume deleted when PVC is deleted:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ephemeral
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete

Retain Policy

Volume retained after PVC deletion:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: archive
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain

Manually delete the retained PV object (note that this removes only the Kubernetes object; the underlying volume on the storage backend must be cleaned up separately):

kubectl delete pv <pv-name>

Recycle Policy (Deprecated)

Clean the volume before reuse:

reclaimPolicy: Recycle  # Deprecated - use Delete with manual cleanup

Practical Examples

Example: Multi-Tier Storage Setup

---
# High-performance SSD for databases
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "5000"
  throughput: "250"
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
# Standard storage for general use
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
allowVolumeExpansion: true
volumeBindingMode: Immediate
reclaimPolicy: Delete
---
# Archive storage with long retention
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: archive
provisioner: ebs.csi.aws.com
parameters:
  type: sc1
allowVolumeExpansion: false
volumeBindingMode: Immediate
reclaimPolicy: Retain

Example: Database with a Persistent Volume

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: databases
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        ports:
        - containerPort: 5432
          name: postgres
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: password
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
          subPath: postgres
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 100Gi
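The StatefulSet's serviceName field refers to a Service that must exist for stable pod DNS; a matching headless Service to pair with the manifest above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: databases
spec:
  clusterIP: None   # headless: gives the pod a stable DNS name (postgres-0.postgres)
  selector:
    app: postgres
  ports:
  - port: 5432
    name: postgres
```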

Example: Shared Storage with NFS

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-documents
  namespace: collaboration
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 500Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: editor-1
  namespace: collaboration
spec:
  containers:
  - name: editor
    image: myeditor:latest
    volumeMounts:
    - name: documents
      mountPath: /documents
  volumes:
  - name: documents
    persistentVolumeClaim:
      claimName: shared-documents
---
apiVersion: v1
kind: Pod
metadata:
  name: editor-2
  namespace: collaboration
spec:
  containers:
  - name: editor
    image: myeditor:latest
    volumeMounts:
    - name: documents
      mountPath: /documents
  volumes:
  - name: documents
    persistentVolumeClaim:
      claimName: shared-documents

Conclusion

Storage Classes and CSI drivers are fundamental to managing persistent data in Kubernetes. By implementing multiple storage classes for different performance requirements, properly configuring reclaim policies, and choosing appropriate CSI drivers for your infrastructure, you create a flexible and reliable storage layer. Start with a single default storage class, expand to multiple tiers as your needs grow, and regularly monitor storage usage and performance. Whether you use cloud-native storage like AWS EBS, open-source solutions like Ceph, or NFS for shared storage, proper configuration ensures your applications have reliable, performant access to persistent data on your VPS and bare-metal infrastructure.