OpenEBS Storage for Kubernetes
OpenEBS is a leading open-source container-attached storage (CAS) solution for Kubernetes that runs storage controllers as pods, giving each workload its own dedicated storage stack. With multiple storage engines including LocalPV, Jiva, cStor, and the high-performance Mayastor engine, OpenEBS adapts to diverse workload requirements from simple local storage to distributed, replicated volumes.
Prerequisites
- Kubernetes 1.23+
- kubectl configured with cluster access
- Helm 3.x
- For Mayastor: Linux kernel 5.13+, NVMe drives, and nodes with Huge Pages enabled
- For cStor: Raw block devices (unformatted) available on nodes
Check node capabilities:
# Verify iSCSI support (needed for cStor/Jiva)
sudo apt-get install -y open-iscsi # Ubuntu/Debian
sudo yum install -y iscsi-initiator-utils # CentOS/Rocky
sudo systemctl enable --now iscsid
# Check for available raw block devices (for cStor)
lsblk -d -o NAME,SIZE,TYPE | grep disk
# For Mayastor - enable Huge Pages
echo 'vm.nr_hugepages = 1024' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
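It's worth confirming the reservation actually took before installing anything; Mayastor's data plane will not start without enough free huge pages:
# Confirm at least 1024 x 2MiB huge pages are reserved
grep HugePages /proc/meminfo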
Installing OpenEBS
# Add OpenEBS Helm repository
helm repo add openebs https://openebs.github.io/charts
helm repo update
# Install OpenEBS (local engines on; Mayastor stays off unless your kernel supports it)
helm install openebs openebs/openebs \
--namespace openebs \
--create-namespace \
--set engines.local.lvm.enabled=true \
--set engines.replicated.mayastor.enabled=false # Enable if kernel >= 5.13
# For a minimal LocalPV-only install
helm install openebs openebs/openebs \
--namespace openebs \
--create-namespace \
--set engines.replicated.mayastor.enabled=false \
--set engines.replicated.jiva.enabled=false
# Verify all pods are running
kubectl -n openebs get pods
# List available StorageClasses
kubectl get storageclass | grep openebs
LocalPV Storage Engine
LocalPV is the simplest engine: it carves volumes out of a node's local storage for the best performance, with the trade-off that each volume is pinned to its node and is not replicated:
# LocalPV Hostpath - uses a directory on the host
cat > localpv-hostpath-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer # Important: binds on the node where the pod is scheduled
EOF
kubectl apply -f localpv-hostpath-sc.yaml
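A quick way to exercise the class is a small claim against it; the PVC name here is just an example. Because of WaitForFirstConsumer it sits in Pending until a pod that mounts it is scheduled:
cat > hostpath-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-logs # example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath
  resources:
    requests:
      storage: 5Gi
EOF
kubectl apply -f hostpath-pvc.yaml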
# LocalPV LVM - uses LVM volumes for better management
# First, create a volume group on every node that will host LVM volumes
sudo pvcreate /dev/sdb # replace /dev/sdb with your free disk
sudo vgcreate openebs-lvmvg /dev/sdb
# Install LVM LocalPV
helm install lvm-localpv openebs/lvm-localpv \
--namespace openebs \
--create-namespace
cat > localpv-lvm-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm
provisioner: local.csi.openebs.io
allowVolumeExpansion: true
parameters:
  storage: "lvm"
  volgroup: "openebs-lvmvg"
  fstype: ext4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF
kubectl apply -f localpv-lvm-sc.yaml
# Create a PVC
cat > lvm-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-lvm
  resources:
    requests:
      storage: 10Gi
EOF
kubectl apply -f lvm-pvc.yaml
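Since this class also uses WaitForFirstConsumer, the claim binds only once a consuming pod lands on a node with the openebs-lvmvg volume group. A minimal sketch of such a pod (image and mount path are illustrative):
cat > app-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
EOF
kubectl apply -f app-pod.yaml
kubectl get pvc app-data # should move from Pending to Bound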
Jiva Replicated Storage
Jiva provides lightweight replicated storage for workloads that need synchronous copies of their data across nodes:
# Jiva is installed as part of OpenEBS base
# Create a Jiva StorageClass with 3 replicas
cat > jiva-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-csi-3r
provisioner: jiva.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: jiva
  policy: openebs-jiva-default-policy
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
kubectl apply -f jiva-sc.yaml
# Create a JivaVolumePolicy for replica configuration
cat > jiva-policy.yaml <<EOF
apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: openebs-jiva-default-policy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 3
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "200m"
  replica:
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "200m"
EOF
kubectl apply -f jiva-policy.yaml
# Check Jiva volume status
kubectl get jivavolumes -n openebs
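To see the engine in action, a claim against the Jiva class spins up a target pod plus one replica pod per replicationFactor in the openebs namespace; the claim name below is illustrative:
cat > jiva-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jiva-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-jiva-csi-3r
  resources:
    requests:
      storage: 10Gi
EOF
kubectl apply -f jiva-pvc.yaml
# Target and replica pods show up alongside the OpenEBS control plane
kubectl -n openebs get pods | grep jiva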
cStor Storage Pools
cStor uses dedicated block devices for production-grade distributed storage:
# Install cStor
helm install cstor openebs/cstor \
--namespace openebs \
--create-namespace
# List available block devices
kubectl get blockdevices -n openebs
# Create a CStorPoolCluster using available block devices
cat > cspc.yaml <<EOF
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-pool
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "worker-01"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-xxxx-worker-01" # Get from kubectl get bd -n openebs
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "worker-02"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-xxxx-worker-02"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "worker-03"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-xxxx-worker-03"
      poolConfig:
        dataRaidGroupType: "stripe"
EOF
kubectl apply -f cspc.yaml
# Wait for pool to be healthy
kubectl get cspc -n openebs -w
# Create cStor StorageClass
cat > cstor-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-csi-disk
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-disk-pool
  replicaCount: "3"
EOF
kubectl apply -f cstor-sc.yaml
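A claim against this class creates a CStorVolume whose three replicas are placed on different pool instances; the claim name is illustrative:
cat > cstor-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cstor-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cstor-csi-disk
  resources:
    requests:
      storage: 20Gi
EOF
kubectl apply -f cstor-pvc.yaml
# Inspect the volume and where its replicas landed
kubectl get cstorvolumes -n openebs
kubectl get cstorvolumereplicas -n openebs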
Mayastor High-Performance Engine
Mayastor serves volumes over NVMe-oF (TCP) for ultra-low-latency storage:
# Enable Huge Pages on all storage nodes (must be done before Mayastor install)
echo 'vm.nr_hugepages = 1024' | sudo tee /etc/sysctl.d/mayastor.conf
sudo sysctl --system
# Install Mayastor via Helm
helm install mayastor openebs/mayastor \
--namespace mayastor \
--create-namespace \
--set etcd.replicaCount=3 \
--set loki-stack.enabled=true
# Verify Mayastor pods
kubectl -n mayastor get pods
# Label nodes as Mayastor storage nodes (nodes with NVMe/fast storage)
kubectl label node worker-01 openebs.io/engine=mayastor
kubectl label node worker-02 openebs.io/engine=mayastor
kubectl label node worker-03 openebs.io/engine=mayastor
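Confirm the labels before creating pools:
kubectl get nodes -l openebs.io/engine=mayastor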
# Create a DiskPool pointing to NVMe devices
cat > mayastor-diskpool.yaml <<EOF
apiVersion: openebs.io/v1alpha1
kind: DiskPool
metadata:
  name: pool-worker-01
  namespace: mayastor
spec:
  node: worker-01
  disks:
    - /dev/nvme0n1
EOF
kubectl apply -f mayastor-diskpool.yaml
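Repeat the manifest for worker-02 and worker-03 (one DiskPool per node), then confirm the pools come online:
kubectl -n mayastor get diskpools
# Each pool should eventually report an Online state once Mayastor claims its device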
# Create Mayastor StorageClass
cat > mayastor-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-3
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "3"
provisioner: io.openebs.csi-mayastor
EOF
kubectl apply -f mayastor-sc.yaml
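Nodes that merely consume Mayastor volumes act as NVMe-oF initiators, so they need the nvme-tcp kernel module too. A test claim against the class (the name is illustrative):
# On every node that will mount Mayastor volumes
sudo modprobe nvme-tcp
cat > mayastor-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mayastor-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: mayastor-3
  resources:
    requests:
      storage: 20Gi
EOF
kubectl apply -f mayastor-pvc.yaml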
Volume Snapshots and Backups
# Enable snapshots for cStor volumes
cat > cstor-snapshot-class.yaml <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cstor-snapshotclass
driver: cstor.csi.openebs.io
deletionPolicy: Delete
EOF
kubectl apply -f cstor-snapshot-class.yaml
# Take a snapshot
cat > volume-snapshot.yaml <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
spec:
  volumeSnapshotClassName: csi-cstor-snapshotclass
  source:
    persistentVolumeClaimName: app-data # must be a PVC provisioned by the cStor CSI driver
EOF
kubectl apply -f volume-snapshot.yaml
# Check snapshot status
kubectl get volumesnapshot app-data-snapshot -o yaml
# Restore from snapshot
cat > restored-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restored
spec:
  storageClassName: cstor-csi-disk
  dataSource:
    name: app-data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
kubectl apply -f restored-pvc.yaml
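The clone must request at least the snapshot's size; once provisioned it behaves like any other cStor volume:
kubectl get pvc app-data-restored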
Performance Tuning
# For Jiva - tune IO thread count in JivaVolumePolicy
# For cStor - configure cache size in CSPC
cat > cspc-tuned.yaml <<EOF
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-pool-tuned
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "worker-01"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-xxxx"
      poolConfig:
        dataRaidGroupType: "stripe"
        resources:
          requests:
            memory: 2Gi
            cpu: "1"
          limits:
            memory: 4Gi
            cpu: "2"
        writeCacheGroupType: "stripe" # RAID type for an optional write-cache group (pairs with writeCacheRaidGroups)
EOF
kubectl apply -f cspc-tuned.yaml
# For LocalPV LVM - use XFS for better database performance
cat > lvm-xfs-sc.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm-xfs
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "openebs-lvmvg"
  fstype: xfs
  fsopts: "-f -d agcount=4" # mkfs options: multiple allocation groups
EOF
kubectl apply -f lvm-xfs-sc.yaml
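A crude, dependency-free way to sanity-check any of these classes is dd from inside a pod that already mounts the volume (here, the busybox pod from the LocalPV example; fio gives far more meaningful numbers, this is only a smoke test):
# Sequential write: conv=fsync forces a flush so the page cache doesn't flatter the result
kubectl exec app -- sh -c 'dd if=/dev/zero of=/data/testfile bs=1M count=1024 conv=fsync'
kubectl exec app -- rm /data/testfile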
Troubleshooting
PVC stuck in Pending:
# Check events on PVC
kubectl describe pvc <pvc-name>
# Verify StorageClass provisioner is running
kubectl -n openebs get pods | grep provisioner
# For WaitForFirstConsumer - PVC won't bind until pod is scheduled
kubectl get pod -l app=your-app -o wide
Volume not attaching:
# Check iSCSI sessions (Jiva/cStor)
sudo iscsiadm -m session
# Check CSI node driver
kubectl -n openebs get pods -l role=node-agent
# Check CSI driver logs
kubectl -n openebs logs ds/openebs-cstor-csi-node -c cstor-csi-plugin
Mayastor pool not ready:
# Verify Huge Pages
kubectl -n mayastor exec ds/mayastor -- cat /proc/meminfo | grep Huge
# Check the NVMe module
kubectl -n mayastor exec ds/mayastor -- nvme list
# View Mayastor logs
kubectl -n mayastor logs ds/mayastor -f
Conclusion
OpenEBS provides a flexible, cloud-native storage platform for Kubernetes with engines suited to every scenario: LocalPV for simplicity and performance, Jiva for lightweight replication, cStor for feature-rich distributed storage, and Mayastor for NVMe-grade performance. Choose the engine that matches your workload requirements and hardware, and leverage volume snapshots and backup capabilities to protect your data.


