How to Add and Mount Additional Disks on Linux: Complete Step-by-Step Guide

Adding additional storage to Linux servers is one of the most common system administration tasks. Whether you're expanding capacity for growing databases, adding backup storage, or provisioning space for new applications, understanding how to properly add and mount disks is essential for every Linux administrator.

This comprehensive guide walks you through the entire process of adding physical or virtual disks to Linux systems, from initial detection through permanent mounting configuration, ensuring reliable and optimized storage expansion.

Introduction to Linux Disk Management

Linux treats all storage devices as files under the /dev directory. When you add a new disk to your system—whether physical hardware, a virtual disk in a VM, or cloud block storage—Linux detects it as a block device that must be partitioned (in most cases), formatted, and mounted before use.
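
For example, SATA and SAS disks typically appear as /dev/sda, /dev/sdb, and so on, NVMe drives as /dev/nvme0n1, and KVM virtio disks as /dev/vdb. You can inspect the device nodes directly:

# Block devices show up as block-special files ("b" in the first column of ls -l)
ls -l /dev/sd*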

Common Scenarios for Adding Disks

  • Capacity expansion: Adding storage as data grows
  • Performance improvement: Separating data across multiple disks
  • Backup storage: Dedicated disks for backup operations
  • Database storage: Isolated storage for database files
  • Application data: Dedicated storage for specific applications
  • Log storage: Separate disk for system and application logs

Storage Device Types

Different environments use different disk types:

  • Physical servers: SATA, SAS, NVMe drives
  • Virtual machines: VMware VMDK, VirtualBox VDI, KVM qcow2
  • Cloud servers: AWS EBS, Azure Managed Disks, Google Persistent Disks
  • Network storage: iSCSI, NFS, Ceph

Prerequisites

Before adding disks to your Linux system, ensure you have:

  • Root or sudo access to the system
  • Physical or virtual disk attached to the server
  • Basic understanding of Linux filesystem concepts
  • Knowledge of available mount points
  • Complete backup of existing data before any disk operations
  • Understanding of your chosen filesystem type

Critical Safety Warning

WARNING: Incorrectly managing disks can result in data loss. Always:

  1. Verify device names before formatting (double-check /dev/sdb vs /dev/sda)
  2. Ensure you're working on the correct disk
  3. Create backups before any disk operations
  4. Test procedures in non-production environments first
  5. Document all disk configurations
  6. Use UUIDs for permanent mounts to avoid device name changes

Step 1: Physically Attach or Provision the Disk

For Physical Servers

  1. Power down the server (or use hot-swap if supported)
  2. Install the physical disk into an available bay
  3. Connect power and data cables
  4. Power on the server

For Virtual Machines

VMware/ESXi

  1. Power off VM (or use hot-add if enabled)
  2. Edit VM settings
  3. Add new hard disk
  4. Configure disk size and type
  5. Power on VM

VirtualBox

  1. Power off VM
  2. Storage settings → Add new disk
  3. Create or attach existing disk
  4. Power on VM
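
The same steps can be scripted with VBoxManage; a minimal sketch, assuming the VM is named vm_name and uses a storage controller called "SATA" (check the actual controller name with VBoxManage showvminfo vm_name):

# Create a 100 GB VDI disk image (size is in MB)
VBoxManage createmedium disk --filename ~/vm_disk2.vdi --size 102400 --format VDI

# Attach it to the VM's SATA controller on a free port
VBoxManage storageattach "vm_name" --storagectl "SATA" --port 1 --device 0 --type hdd --medium ~/vm_disk2.vdi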

KVM/QEMU

# Create new disk image
qemu-img create -f qcow2 /var/lib/libvirt/images/vm_disk2.qcow2 100G

# Attach to running VM
virsh attach-disk vm_name /var/lib/libvirt/images/vm_disk2.qcow2 vdb --cache none
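# Note: without --persistent the attachment is live-only and is dropped when the
# VM shuts down; add --persistent to also update the domain's XML configuration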

For Cloud Servers

AWS EC2

# Create volume (via AWS console or CLI)
aws ec2 create-volume --size 100 --availability-zone us-east-1a --volume-type gp3

# Attach to instance
aws ec2 attach-volume --volume-id vol-xxxxx --instance-id i-xxxxx --device /dev/sdf
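# Note: on Nitro-based instances the volume appears as an NVMe device
# (for example /dev/nvme1n1) rather than the requested /dev/sdf name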

Azure

# Create and attach managed disk
az vm disk attach --resource-group myResourceGroup --vm-name myVM \
  --name myDataDisk --size-gb 100 --sku Premium_LRS --new

Google Cloud

# Create and attach persistent disk
gcloud compute disks create data-disk-1 --size=100GB --zone=us-central1-a
gcloud compute instances attach-disk instance-1 --disk=data-disk-1 --zone=us-central1-a
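# Note: on GCE guest images the disk also gets a stable symlink such as
# /dev/disk/by-id/google-data-disk-1, which is handy for identifying it later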

Step 2: Detect the New Disk

After physically attaching or provisioning the disk, verify Linux detects it.

List All Block Devices

lsblk

Output example:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   100G  0 disk
├─sda1   8:1    0     1G  0 part /boot
└─sda2   8:2    0    99G  0 part /
sdb      8:16   0   500G  0 disk

The new disk appears as /dev/sdb (500GB, no partitions).

List Disks with fdisk

sudo fdisk -l

Check Disk Details

sudo lsblk -f

Shows filesystems and UUIDs:

NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
├─sda1 ext4         a1b2c3d4-e5f6-7890-abcd-ef1234567890 /boot
└─sda2 ext4         b2c3d4e5-f6a7-8901-bcde-fa2345678901 /
sdb

Rescan SCSI Bus (if disk not detected)

If the new disk doesn't appear, rescan:

# Rescan all SCSI hosts
echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan

# Or for specific device
echo 1 | sudo tee /sys/class/block/sdb/device/rescan

Identify Disk by Serial Number

sudo lsblk -o NAME,SIZE,SERIAL

Or:

sudo hdparm -I /dev/sdb | grep Serial

Step 3: Partition the New Disk

Most scenarios require creating a partition before formatting.

Using fdisk (MBR or GPT)

sudo fdisk /dev/sdb

Interactive commands:

Command (m for help): g
Created a new GPT disklabel (GUID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX).

Command (m for help): n
Partition number (1-128, default 1): [Press Enter]
First sector: [Press Enter]
Last sector: [Press Enter]

Created a new partition 1 of type 'Linux filesystem' and of size 500 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

This creates /dev/sdb1 using the entire disk.

Using parted

# Create GPT partition table
sudo parted /dev/sdb mklabel gpt

# Create partition using entire disk
sudo parted /dev/sdb mkpart primary 0% 100%

# Verify
sudo parted /dev/sdb print

Using gdisk (GPT only)

sudo gdisk /dev/sdb

Interactive commands:

Command (? for help): n
Partition number (1-128, default 1): [Press Enter]
First sector: [Press Enter]
Last sector: [Press Enter]
Hex code or GUID: [Press Enter]

Command (? for help): w
Do you want to proceed? (Y/N): Y

Verify Partition Creation

lsblk

Output:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   100G  0 disk
├─sda1   8:1    0     1G  0 part /boot
└─sda2   8:2    0    99G  0 part /
sdb      8:16   0   500G  0 disk
└─sdb1   8:17   0   500G  0 part

Notice /dev/sdb1 now exists.

Step 4: Create Filesystem on the Partition

Format the partition with your chosen filesystem.

Create ext4 Filesystem (Most Common)

sudo mkfs.ext4 /dev/sdb1

With label:

sudo mkfs.ext4 -L data_disk /dev/sdb1

With optimizations:

sudo mkfs.ext4 -L data_disk -m 1 -E lazy_itable_init=0 /dev/sdb1

Options:

  • -L data_disk: Set label
  • -m 1: Reserve 1% for root (default 5%)
  • -E lazy_itable_init=0: Initialize inode tables at format time instead of lazily in the background after the first mount

Create XFS Filesystem

sudo mkfs.xfs -L backup_disk /dev/sdb1

Create Btrfs Filesystem

sudo mkfs.btrfs -L container_storage /dev/sdb1

Create ext3 Filesystem

sudo mkfs.ext3 /dev/sdb1

Verify Filesystem Creation

sudo lsblk -f /dev/sdb

Output:

NAME   FSTYPE LABEL      UUID                                 MOUNTPOINT
sdb
└─sdb1 ext4   data_disk  c3d4e5f6-a7b8-9012-cdef-ab3456789012

Get UUID for Later Use

sudo blkid /dev/sdb1

Output:

/dev/sdb1: UUID="c3d4e5f6-a7b8-9012-cdef-ab3456789012" TYPE="ext4" LABEL="data_disk"

Save this UUID for permanent mounting.

Step 5: Create Mount Point

Create a directory where the disk will be mounted.

Common Mount Point Locations

  • /mnt/: Temporary mounts
  • /media/: Removable media
  • /data/: Application data
  • /backup/: Backup storage
  • /var/lib/: Service data (docker, mysql, etc.)

Create Mount Point

sudo mkdir -p /mnt/data

Or for specific purpose:

sudo mkdir -p /var/lib/mysql_data
sudo mkdir -p /backup
sudo mkdir -p /data/applications

Step 6: Mount the Disk Temporarily

Test mounting before making it permanent.

Mount the Partition

sudo mount /dev/sdb1 /mnt/data

Verify Mount

df -h /mnt/data

Output:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       492G   73M  467G   1% /mnt/data

Check Mount Details

mount | grep sdb1

Output:

/dev/sdb1 on /mnt/data type ext4 (rw,relatime)

Test Read/Write Access

# Create test file
sudo touch /mnt/data/test_file
echo "Test content" | sudo tee /mnt/data/test_file

# Verify
cat /mnt/data/test_file

# Remove test file
sudo rm /mnt/data/test_file

Unmount (if needed)

sudo umount /mnt/data

Step 7: Configure Automatic Mounting at Boot

Make the mount permanent by adding to /etc/fstab.

Understanding /etc/fstab

The /etc/fstab file controls automatic filesystem mounting at boot.

Format:

<device>  <mount_point>  <type>  <options>  <dump>  <pass>

Backup Current fstab

CRITICAL: Always backup before editing:

sudo cp /etc/fstab /etc/fstab.backup.$(date +%Y%m%d)

Add Entry Using UUID (Recommended)

Get UUID:

sudo blkid /dev/sdb1

Edit fstab:

sudo nano /etc/fstab

Add line:

UUID=c3d4e5f6-a7b8-9012-cdef-ab3456789012  /mnt/data  ext4  defaults  0  2

Add Entry Using Device Path (Not Recommended)

/dev/sdb1  /mnt/data  ext4  defaults  0  2

Note: Device names can change; UUIDs are more reliable.

Add Entry with Custom Options

UUID=c3d4e5f6-a7b8-9012-cdef-ab3456789012  /mnt/data  ext4  defaults,noatime,errors=remount-ro  0  2

Common mount options:

  • defaults: Default options (rw, suid, dev, exec, auto, nouser, async)
  • noatime: Don't update access times (improves performance)
  • nodiratime: Don't update directory access times
  • errors=remount-ro: Remount read-only on errors
  • nofail: Continue boot if disk unavailable
  • discard: Enable TRIM for SSDs

Understanding Dump and Pass Values

Last two fields in fstab:

Dump field:

  • 0: Don't dump (backup)
  • 1: Dump filesystem

Pass field:

  • 0: Don't check filesystem
  • 1: Check first (root filesystem only)
  • 2: Check after root filesystem

Recommended:

  • Root partition: 0 1
  • Data partitions: 0 2
  • Swap: 0 0
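
Putting it together, the example system in this guide would end up with entries like these (the UUIDs are the placeholder values from the earlier lsblk and blkid output):

UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890  /boot      ext4  defaults          0  2
UUID=b2c3d4e5-f6a7-8901-bcde-fa2345678901  /          ext4  defaults          0  1
UUID=c3d4e5f6-a7b8-9012-cdef-ab3456789012  /mnt/data  ext4  defaults,noatime  0  2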

Test fstab Configuration

IMPORTANT: Test before rebooting:

# Unmount if currently mounted
sudo umount /mnt/data

# Test mount all from fstab
sudo mount -a

# Verify
df -h /mnt/data

If errors occur, fstab has issues. Fix before rebooting.

Verify After Reboot

# Reboot system
sudo reboot

# After reboot, verify automatic mount
df -h /mnt/data

Step 8: Set Permissions and Ownership

Configure appropriate permissions for the mounted disk.

Change Ownership

# Change owner to specific user
sudo chown username:username /mnt/data

# Change to web server user
sudo chown www-data:www-data /mnt/data

# Change to database user
sudo chown mysql:mysql /var/lib/mysql_data

Set Permissions

# Full access for owner, read for others
sudo chmod 755 /mnt/data

# Full access for owner only
sudo chmod 700 /mnt/data

# Full access for owner and group
sudo chmod 770 /mnt/data

Set ACLs for Advanced Permissions

# Install ACL utilities if needed
sudo apt install acl

# Set ACL
sudo setfacl -m u:username:rwx /mnt/data
sudo setfacl -m g:groupname:rx /mnt/data

# Verify ACLs
getfacl /mnt/data

Common Scenarios and Examples

Scenario 1: Adding Backup Storage

# After attaching disk
sudo fdisk /dev/sdb  # Create partition
sudo mkfs.ext4 -L backup /dev/sdb1
sudo mkdir /backup
sudo mount /dev/sdb1 /backup

# Get UUID
UUID=$(sudo blkid -s UUID -o value /dev/sdb1)

# Add to fstab
echo "UUID=$UUID /backup ext4 defaults,noatime 0 2" | sudo tee -a /etc/fstab

# Test
sudo mount -a
df -h /backup

Scenario 2: Adding Database Storage

# Create and mount
sudo mkfs.xfs -L mysql_data /dev/sdb1
sudo mkdir /var/lib/mysql_data

# Get UUID
UUID=$(sudo blkid -s UUID -o value /dev/sdb1)

# Add to fstab
echo "UUID=$UUID /var/lib/mysql_data xfs defaults,noatime 0 2" | sudo tee -a /etc/fstab

# Mount and set permissions
sudo mount -a
sudo chown mysql:mysql /var/lib/mysql_data
sudo chmod 750 /var/lib/mysql_data

Scenario 3: Adding Docker Storage

# Create filesystem
sudo mkfs.ext4 -L docker_storage /dev/sdb1

# Stop Docker
sudo systemctl stop docker

# Backup existing data
sudo rsync -avxHAX /var/lib/docker/ /var/lib/docker.backup/

# Mount new disk
sudo mkdir -p /var/lib/docker
UUID=$(sudo blkid -s UUID -o value /dev/sdb1)
echo "UUID=$UUID /var/lib/docker ext4 defaults,noatime 0 2" | sudo tee -a /etc/fstab
sudo mount -a

# Restore data
sudo rsync -avxHAX /var/lib/docker.backup/ /var/lib/docker/

# Start Docker
sudo systemctl start docker

Scenario 4: Adding Application Data Storage

# Create and mount
sudo mkfs.ext4 -L app_data /dev/sdb1
sudo mkdir /data/applications

# Configure
UUID=$(sudo blkid -s UUID -o value /dev/sdb1)
echo "UUID=$UUID /data/applications ext4 defaults,noatime 0 2" | sudo tee -a /etc/fstab
sudo mount -a

# Set permissions for application
sudo chown appuser:appgroup /data/applications
sudo chmod 775 /data/applications

Using Entire Disk Without Partitioning

In some cases, you can format the entire disk without creating partitions.

Format Entire Disk

sudo mkfs.ext4 /dev/sdb

Mount Entire Disk

sudo mkdir /mnt/data
sudo mount /dev/sdb /mnt/data

Add to fstab

UUID=$(sudo blkid -s UUID -o value /dev/sdb)
echo "UUID=$UUID /mnt/data ext4 defaults,noatime 0 2" | sudo tee -a /etc/fstab

Note: Using the entire disk (no partition table) is simpler but less flexible: you cannot later carve out additional partitions without reformatting.
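
If you later decide to switch a whole-disk filesystem back to a partitioned layout, clear the old filesystem signature first so partitioning tools and mkfs do not complain about existing data; a minimal sketch (this destroys everything on the disk):

sudo wipefs -a /dev/sdb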

Verification and Testing

Comprehensive System Check

# List all block devices
lsblk -f

# Show mounted filesystems
df -h

# Show mount details
mount | grep sd

# Verify fstab syntax
sudo findmnt --verify

# Check filesystem
sudo fsck -n /dev/sdb1

Performance Testing

# Write test
sudo dd if=/dev/zero of=/mnt/data/testfile bs=1G count=1 oflag=direct

# Read test
sudo dd if=/mnt/data/testfile of=/dev/null bs=1M

# Clean up
sudo rm /mnt/data/testfile

Monitor Disk Health

# Install smartmontools
sudo apt install smartmontools

# Check disk health
sudo smartctl -a /dev/sdb

# Run short test
sudo smartctl -t short /dev/sdb

# Check test results
sudo smartctl -l selftest /dev/sdb

Troubleshooting Common Issues

Issue: New Disk Not Detected

Cause: System hasn't scanned for new devices.

Solution:

# Rescan SCSI bus
echo "- - -" | sudo tee /sys/class/scsi_host/host*/scan

# Check dmesg for errors
sudo dmesg | grep -i disk

# Verify BIOS/virtualization settings

Issue: "Device or Resource Busy" When Unmounting

Cause: Processes are using the filesystem.

Solution:

# Find processes using the mount
sudo lsof /mnt/data

# Or
sudo fuser -m /mnt/data

# Kill processes if necessary
sudo fuser -km /mnt/data

# Unmount
sudo umount /mnt/data

Issue: System Won't Boot After fstab Changes

Cause: Error in /etc/fstab.

Solution:

  1. Boot into recovery mode or live USB
  2. Mount root filesystem: mount /dev/sda2 /mnt
  3. Edit fstab: nano /mnt/etc/fstab
  4. Fix or remove problematic entry
  5. Add nofail option to prevent boot failure
  6. Reboot
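
If the system drops to an emergency shell instead of requiring a live USB, the root filesystem is often mounted read-only and can be fixed in place; a minimal sketch:

# In the emergency shell: remount root read-write, then correct the bad entry
mount -o remount,rw /
nano /etc/fstab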

Issue: Mount Point Already Exists with Data

Cause: Mounting over a non-empty directory hides its existing contents until the filesystem is unmounted.

Solution:

# Use different mount point
sudo mkdir /mnt/data2
sudo mount /dev/sdb1 /mnt/data2

# Or move existing data
sudo mv /mnt/data /mnt/data.old
sudo mkdir /mnt/data
sudo mount /dev/sdb1 /mnt/data

Issue: Permission Denied After Mounting

Cause: Incorrect permissions or ownership.

Solution:

# Check current permissions
ls -ld /mnt/data

# Fix ownership
sudo chown -R username:username /mnt/data

# Fix permissions
sudo chmod 755 /mnt/data

Issue: Filesystem Errors on Mount

Cause: Corruption or incomplete formatting.

Solution:

# Unmount
sudo umount /mnt/data

# Check and repair filesystem (ext4)
sudo e2fsck -f -y /dev/sdb1

# Or for XFS
sudo xfs_repair /dev/sdb1

# Remount
sudo mount /dev/sdb1 /mnt/data

Best Practices

1. Always Use UUIDs in fstab

Device names can change; UUIDs remain constant:

# Good
UUID=c3d4e5f6-a7b8-9012-cdef-ab3456789012  /mnt/data  ext4  defaults  0  2

# Avoid
/dev/sdb1  /mnt/data  ext4  defaults  0  2

2. Backup fstab Before Editing

sudo cp /etc/fstab /etc/fstab.backup.$(date +%Y%m%d)

3. Test fstab Before Rebooting

sudo mount -a

4. Use Descriptive Labels

sudo mkfs.ext4 -L mysql_data /dev/sdb1
sudo mkfs.ext4 -L backup_storage /dev/sdc1

5. Add nofail Option for Non-Critical Disks

Prevent boot failure if disk unavailable:

UUID=xxx  /mnt/data  ext4  defaults,nofail  0  2

6. Use noatime for Performance

Disable access time updates:

UUID=xxx  /mnt/data  ext4  defaults,noatime  0  2

7. Document Disk Configuration

Maintain documentation:

# Save configuration
lsblk -f > /root/disk_config_$(date +%Y%m%d).txt
cat /etc/fstab >> /root/disk_config_$(date +%Y%m%d).txt

8. Monitor Disk Space

Set up monitoring:

# Create monitoring script
cat << 'EOF' | sudo tee /usr/local/bin/check_disk_space.sh
#!/bin/bash
df -h | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output; do
  usage=$(echo $output | awk '{print $1}' | sed 's/%//g')
  partition=$(echo $output | awk '{print $2}')
  if [ $usage -ge 90 ]; then
    echo "WARNING: $partition at ${usage}%"
  fi
done
EOF

sudo chmod +x /usr/local/bin/check_disk_space.sh

# Add to cron (append to the existing crontab instead of overwriting it)
(crontab -l 2>/dev/null; echo "0 * * * * /usr/local/bin/check_disk_space.sh") | crontab -

9. Use Appropriate Filesystem

Choose based on use case:

  • ext4: General purpose, most compatible
  • XFS: Large files, high performance
  • Btrfs: Advanced features, snapshots

10. Plan for Growth

Leave space for expansion:

  • Don't use 100% of disk capacity
  • Consider LVM for flexibility
  • Plan partition sizes carefully

Advanced Topics

Using LVM for Flexibility

# Create physical volume
sudo pvcreate /dev/sdb1

# Create volume group
sudo vgcreate vg_data /dev/sdb1

# Create logical volume
sudo lvcreate -L 100G -n lv_data vg_data

# Create filesystem
sudo mkfs.ext4 /dev/vg_data/lv_data

# Mount
sudo mkdir /mnt/data
sudo mount /dev/vg_data/lv_data /mnt/data

# Add to fstab
echo "/dev/vg_data/lv_data /mnt/data ext4 defaults 0 2" | sudo tee -a /etc/fstab

Encrypting Additional Disks

# Install cryptsetup
sudo apt install cryptsetup

# Encrypt partition
sudo cryptsetup luksFormat /dev/sdb1

# Open encrypted partition
sudo cryptsetup open /dev/sdb1 encrypted_data

# Create filesystem
sudo mkfs.ext4 /dev/mapper/encrypted_data

# Mount
sudo mkdir /mnt/encrypted
sudo mount /dev/mapper/encrypted_data /mnt/encrypted
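
As shown, the encrypted mapping must be opened manually after every reboot. To have the system prompt for the passphrase and mount the disk at boot, add matching /etc/crypttab and /etc/fstab entries; a minimal sketch, with UUID=xxxx standing in for the LUKS partition's UUID from sudo blkid /dev/sdb1:

# /etc/crypttab
encrypted_data  UUID=xxxx  none  luks

# /etc/fstab
/dev/mapper/encrypted_data  /mnt/encrypted  ext4  defaults,nofail  0  2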

Creating Multiple Partitions on One Disk

# Create partitions
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 0% 50%
sudo parted /dev/sdb mkpart primary 50% 100%

# Format partitions
sudo mkfs.ext4 /dev/sdb1
sudo mkfs.xfs /dev/sdb2

# Mount partitions
sudo mkdir /mnt/data1 /mnt/data2
sudo mount /dev/sdb1 /mnt/data1
sudo mount /dev/sdb2 /mnt/data2

# Add to fstab
UUID1=$(sudo blkid -s UUID -o value /dev/sdb1)
UUID2=$(sudo blkid -s UUID -o value /dev/sdb2)
echo "UUID=$UUID1 /mnt/data1 ext4 defaults,noatime 0 2" | sudo tee -a /etc/fstab
echo "UUID=$UUID2 /mnt/data2 xfs defaults,noatime 0 2" | sudo tee -a /etc/fstab

Disk Addition Checklist

Use this checklist for each disk addition:

  • Attach physical or virtual disk
  • Verify disk detection with lsblk
  • Create partition with fdisk, parted, or gdisk
  • Create filesystem with mkfs.ext4 or appropriate command
  • Create mount point directory
  • Test mount with mount command
  • Verify read/write access
  • Get UUID with blkid
  • Backup /etc/fstab
  • Add entry to /etc/fstab
  • Test with mount -a
  • Set appropriate permissions
  • Verify after reboot
  • Document configuration
  • Configure monitoring

Conclusion

Adding and mounting additional disks is a fundamental Linux administration skill that enables flexible storage management. By following the proper procedures—from disk detection through permanent mounting configuration—you ensure reliable, performant, and maintainable storage expansion.

Key takeaways:

  1. Always verify device names before formatting
  2. Use UUIDs in /etc/fstab for reliability
  3. Test mount configurations before rebooting
  4. Backup /etc/fstab before making changes
  5. Set appropriate permissions for security
  6. Document all disk additions for future reference
  7. Monitor disk space proactively
  8. Choose appropriate filesystems for your workload
  9. Use nofail option for non-critical disks
  10. Test disaster recovery procedures

Remember that proper disk management is not just about adding storage—it's about implementing reliable, maintainable storage infrastructure that supports your applications and data for the long term. By following the best practices and procedures outlined in this guide, you can confidently manage storage expansion on any Linux system.

Whether you're managing physical servers, virtual machines, or cloud instances, the principles remain the same: careful planning, proper configuration, thorough testing, and comprehensive documentation ensure successful storage management operations.