iSCSI Target and Initiator Configuration
iSCSI (Internet SCSI) enables block-level storage access over IP networks, allowing servers to connect to remote storage devices as though they were local. This protocol is fundamental to storage networking in cloud environments, virtualization platforms, and enterprise data centers. This guide covers configuring iSCSI targets using targetcli, setting up initiators for connection, and implementing advanced features like multipath I/O for high availability.
Table of Contents
- iSCSI Architecture Overview
- iSCSI Target Configuration
- LUN and ACL Management
- iSCSI Initiator Setup
- Device Discovery and Login
- Multipath I/O Configuration
- Performance and Optimization
- Monitoring and Troubleshooting
- Conclusion
iSCSI Architecture Overview
iSCSI consists of two primary components:
- Target: The server side that exports storage devices to the network
- Initiator: The client that connects to the target and consumes the storage
Communication occurs over TCP/IP, typically on port 3260. iSCSI Qualified Names (IQNs) uniquely identify targets and initiators using the format iqn.yyyy-mm.<reversed-domain>:<identifier>, for example iqn.2024-01.com.example.storage:target1.
Key concepts:
- LUN (Logical Unit Number): Addressable storage unit within target
- Portal: Network address (IP:port) for target connection
- ACL (Access Control List): Restricts initiator access to specific targets
- TPG (Target Portal Group): Collection of network portals for target
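The IQN format described above can be sanity-checked with a regular expression before it is handed to targetcli or iscsiadm. A minimal sketch; the pattern is a simplification of the full RFC 3720 grammar:

```shell
# Rough IQN check: iqn.YYYY-MM.<reversed-domain>[:<identifier>]
# (simplified; RFC 3720 permits a wider character set)
is_valid_iqn() {
    echo "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[A-Za-z0-9._-]+)?$'
}

is_valid_iqn "iqn.2024-01.com.example.storage:target1" && echo "valid"
is_valid_iqn "iqn.24-1.example" || echo "invalid"
# → valid
# → invalid
```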
iSCSI Target Configuration
Installing Target Software
# Install targetcli on Ubuntu/Debian
sudo apt-get install -y targetcli-fb
# Or on CentOS/RHEL
sudo yum install -y targetcli
# Verify installation
sudo targetcli version
# Start target service
sudo systemctl enable target
sudo systemctl start target
# Verify service status
sudo systemctl status target
Target Configuration Structure
Access the targetcli interactive shell:
# Enter targetcli shell
sudo targetcli
# List current configuration
> ls
# Show a status summary of the current node
> status
# Get help
> help
# Exit targetcli
> exit
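Besides the interactive shell, targetcli also accepts one full command per invocation, which makes target setup scriptable. The sketch below only assembles and prints such a batch (actually running the commands requires root and the target service):

```shell
# Non-interactive targetcli usage: one command per invocation.
# Here the commands are collected in a variable and printed rather
# than executed, since they need root and an installed target stack.
cmds='targetcli backstores/fileio create disk1 /var/lib/iscsi-disk1.img 10G
targetcli iscsi/ create iqn.2024-01.com.example.storage:target1
targetcli saveconfig'

echo "$cmds"
```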
Creating Backstore Storage
Backstores define storage resources backing targets:
# Enter targetcli
sudo targetcli
# Create file-based backstore
> backstores/fileio create disk1 /var/lib/iscsi-disk1.img 10G
# Create block device backstore (preferred for performance)
> backstores/block create disk2 /dev/sdb
# Create RAMDISK backstore (for testing only)
> backstores/ramdisk create ramdisk1 1G
# List backstores
> backstores/
> ls
# Get backstore details
> backstores/block/disk2
> status
Creating iSCSI Target
Define the actual iSCSI target:
# Create target
> iscsi/ create iqn.2024-01.com.example.storage:target1
# View target
> iscsi/iqn.2024-01.com.example.storage:target1
# Change into the target portal group (tpg1 is created automatically with the target)
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1
# Add a network portal to the TPG (recent targetcli releases create a
# default 0.0.0.0:3260 portal automatically)
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/portals/ create 0.0.0.0 3260
# Verify configuration
> ls
Enabling Target
# Change to TPG directory
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1
# Demo mode: disable CHAP authentication, auto-generate node ACLs,
# and allow writes (convenient for testing; use explicit ACLs and CHAP in production)
> set attribute authentication=0
> set attribute generate_node_acls=1
> set attribute demo_mode_write_protect=0
# Enable the TPG (newly created TPGs are typically enabled by default)
> enable
# Verify enabled state
> status
LUN and ACL Management
Creating Logical Units
# Create LUN from backstore
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/luns/ create /backstores/block/disk2
# Create additional LUN from different backstore
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/luns/ create /backstores/fileio/disk1
# List LUNs
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/luns/
> ls
# Get LUN details
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/luns/lun0
> status
Access Control Lists (ACLs)
Restrict target access to specific initiators:
# Disable automatic node ACL generation
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1 set attribute generate_node_acls=0
# Create ACL for specific initiator
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/acls/ create iqn.2024-01.com.example.client:initiator1
# Map LUNs to ACL
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/acls/iqn.2024-01.com.example.client:initiator1/mapped_luns/ create 0 /backstores/block/disk2
# Create multiple mapped LUNs
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/acls/iqn.2024-01.com.example.client:initiator1/mapped_luns/ create 1 /backstores/fileio/disk1
# List ACLs
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1/acls/
> ls
Saving Configuration
# Save targetcli configuration (in targetcli shell)
> saveconfig
# Verify saved configuration
> status
# Exit targetcli
> exit
# Verify configuration persistence
sudo ls -la /etc/target/saveconfig.json
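The persisted configuration is plain JSON, so it can be sanity-checked before a restore. A sketch using a sample file in place of /etc/target/saveconfig.json (which requires root to read); the sample mimics the real file's top-level "storage_objects" and "targets" keys but is otherwise hypothetical:

```shell
# Validate a saveconfig-style JSON file. The sample below stands in
# for /etc/target/saveconfig.json; its contents are illustrative only.
cat > /tmp/saveconfig-sample.json <<'EOF'
{
  "storage_objects": [
    {"name": "disk2", "plugin": "block", "dev": "/dev/sdb"}
  ],
  "targets": [
    {"wwn": "iqn.2024-01.com.example.storage:target1", "fabric": "iscsi"}
  ]
}
EOF

if python3 -m json.tool /tmp/saveconfig-sample.json > /dev/null; then
    echo "saveconfig JSON is well-formed"
fi
# → saveconfig JSON is well-formed
```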
iSCSI Initiator Setup
Installing Initiator Software
# Install on Ubuntu/Debian (open-iscsi-utils was merged into open-iscsi)
sudo apt-get install -y open-iscsi
# Or on CentOS/RHEL
sudo yum install -y iscsi-initiator-utils
# Verify installation
iscsiadm --version
# Start initiator service
sudo systemctl enable iscsid
sudo systemctl start iscsid
Configuring Initiator IQN
Set a unique initiator identifier:
# Edit initiator configuration
sudo nano /etc/iscsi/initiatorname.iscsi
# Set your initiator name
InitiatorName=iqn.2024-01.com.example.client:initiator1
# Restart iscsid service
sudo systemctl restart iscsid
# Verify initiator name
cat /etc/iscsi/initiatorname.iscsi
Discovery and Connection
# Discover targets on target server
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.100
# Login to discovered target
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 -p 192.168.1.100 -l
# List logged-in sessions
sudo iscsiadm -m session -P 0
# Get detailed session information
sudo iscsiadm -m session -P 3
# Check connected iSCSI devices (the TRAN column shows the transport)
lsblk -S | grep -i iscsi
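For scripting, the portal and target IQN of each session can be pulled out of the `iscsiadm -m session` listing with awk. A sketch against sample output (the addresses are hypothetical); each line has the shape `tcp: [sid] portal,tpgt target-iqn (type)`:

```shell
# Sample `iscsiadm -m session` output (hypothetical addresses)
sessions='tcp: [1] 192.168.1.100:3260,1 iqn.2024-01.com.example.storage:target1 (non-flash)
tcp: [2] 192.168.1.101:3260,1 iqn.2024-01.com.example.storage:target1 (non-flash)'

# Field 3 is the portal (with TPG tag), field 4 the target IQN
echo "$sessions" | awk '{print $3, $4}'
# → 192.168.1.100:3260,1 iqn.2024-01.com.example.storage:target1
# → 192.168.1.101:3260,1 iqn.2024-01.com.example.storage:target1
```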
Persistent Connections
Configure automatic login on boot:
# Set node to automatically connect
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 -p 192.168.1.100 --op=update --name=node.startup --value=automatic
# Verify automatic startup
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 -p 192.168.1.100
# Logout from target
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 -p 192.168.1.100 -u
Multipath I/O Configuration
Installing Multipath Tools
Multipath I/O enables multiple network paths to same storage for redundancy:
# Install multipath tools
sudo apt-get install -y multipath-tools
# Or on CentOS/RHEL
sudo yum install -y device-mapper-multipath
# Start multipath service
sudo systemctl enable multipathd
sudo systemctl start multipathd
# Verify service
sudo systemctl status multipathd
Configuring Multipath
# Edit multipath configuration
sudo nano /etc/multipath.conf
# Example configuration for iSCSI:
cat <<'EOF' | sudo tee -a /etc/multipath.conf
defaults {
user_friendly_names yes
path_grouping_policy multibus
failback immediate
polling_interval 10
}
multipaths {
multipath {
# Use the device WWID (shown by `multipath -ll` or
# `/lib/udev/scsi_id -g -u /dev/sdX`), not the target IQN;
# the value below is an example
wwid 36001405abcdef123456789abcdef1234
alias storage-array-1
}
}
EOF
# Reload multipath
sudo multipath -r
# List multipath devices
sudo multipath -ll
# Check whether a given path device is a valid multipath candidate
sudo multipath -c /dev/sdc
Creating Device Mappings
# After multipath configuration, mappings appear automatically;
# to whitelist a specific path device's WWID manually:
sudo multipath -a /dev/sdc
# Verify mappings
sudo multipath -ll
# Get detailed path information
sudo multipath -l
# List all paths to device
sudo dmsetup table
Filesystem Operations on Multipath Devices
# List multipath device names (user-friendly mpathN names or configured aliases)
sudo multipath -l
# Format multipath device (caution: destructive)
sudo mkfs.ext4 /dev/mapper/storage-array-1
# Create mount point
sudo mkdir -p /mnt/shared-storage
# Mount device
sudo mount /dev/mapper/storage-array-1 /mnt/shared-storage
# Add to fstab for persistence
echo '/dev/mapper/storage-array-1 /mnt/shared-storage ext4 defaults,_netdev 0 2' | sudo tee -a /etc/fstab
# Verify mount
df -h /mnt/shared-storage
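The `_netdev` option in the fstab entry above matters: it tells the init system to wait for networking before mounting, which prevents boot hangs on iSCSI-backed filesystems. A small sketch that checks an fstab-style line for it (the sample line is the one from this section):

```shell
# Check that an iSCSI-backed fstab entry carries the _netdev option,
# which delays mounting until the network is up
entry='/dev/mapper/storage-array-1 /mnt/shared-storage ext4 defaults,_netdev 0 2'

case "$(echo "$entry" | awk '{print $4}')" in
    *_netdev*) echo "_netdev present" ;;
    *)         echo "WARNING: _netdev missing; boot may hang" ;;
esac
# → _netdev present
```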
Performance and Optimization
iSCSI Session Parameters
# View default session parameters
sudo iscsiadm -m session -P 2 | grep -i "chap\|initial\|max"
# Modify session parameters
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 \
-p 192.168.1.100 \
--op=update --name=node.session.initial_login_retry_cnt --value=3
# Tune connection parameters for performance
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 \
-p 192.168.1.100 \
--op=update --name=node.conn[0].iscsi.MaxRecvDataSegmentLength --value=262144
# Enable header and data digests for integrity (CRC32C adds CPU overhead)
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 \
-p 192.168.1.100 \
--op=update --name=node.conn[0].iscsi.HeaderDigest --value=CRC32C
sudo iscsiadm -m node -T iqn.2024-01.com.example.storage:target1 \
-p 192.168.1.100 \
--op=update --name=node.conn[0].iscsi.DataDigest --value=CRC32C
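When applying many tunables, the repeated `iscsiadm --op=update` invocations are easy to wrap in a helper. This sketch only prints the command it would run (a dry run), since executing it needs root and a live node record; the target and parameter shown are the examples used in this guide:

```shell
# Dry-run helper: builds and prints an iscsiadm node-parameter update
# instead of executing it (drop the leading `echo` to run for real)
set_node_param() {
    echo sudo iscsiadm -m node -T "$1" -p "$2" \
        --op=update --name="$3" --value="$4"
}

set_node_param iqn.2024-01.com.example.storage:target1 192.168.1.100 \
    node.conn[0].iscsi.HeaderDigest CRC32C
```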
Target Performance Tuning
# In targetcli, optimize TPG parameters
sudo targetcli
# Set the command window depth (per-session queue depth)
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1 set attribute default_cmdsn_depth=64
# Enable immediate data (reduces latency for small writes)
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1 set parameter ImmediateData=Yes
# Raise the maximum data segment the target accepts per PDU
> iscsi/iqn.2024-01.com.example.storage:target1/tpg1 set parameter MaxRecvDataSegmentLength=262144
Monitoring and Troubleshooting
Session Monitoring
# View active sessions and connections
sudo iscsiadm -m session -P 3
# Check iSCSI session statistics
sudo iscsiadm -m session -s
# Monitor multipath status continuously
watch -n 1 'sudo multipath -ll'
# Check for path failures
sudo multipath -ll | grep "failed"
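For alerting, the device name of a failing path can be extracted from `multipath -ll` output rather than just grepped. A sketch against sample (hypothetical) output, where the third whitespace-separated field of a path line is the sdX device:

```shell
# Sample `multipath -ll` output (hypothetical); one path is failed
mpll='storage-array-1 (36001405abcdef123456789abcdef1234) dm-0 LIO-ORG,disk2
size=10G features=1 queue_if_no_path hwhandler=1 alua
`-+- policy=service-time prio=50 status=active
  |- 3:0:0:0 sdb 8:16 active ready running
  `- 4:0:0:0 sdc 8:32 failed faulty running'

# Print the device name of every failed path
echo "$mpll" | awk '/failed/ {print $3}'
# → sdc
```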
Diagnosing Connection Issues
# Enable debug logging
sudo iscsiadm -d 8 -m discovery -t sendtargets -p 192.168.1.100
# Check iscsid logs (Debian/Ubuntu; on RHEL use: journalctl -u iscsid -f)
sudo tail -f /var/log/syslog | grep -i iscsi
# Verify network connectivity to target
ping 192.168.1.100
nc -zv 192.168.1.100 3260
# Check iscsi service status
sudo systemctl status iscsid
sudo systemctl status multipathd
# Rescan for new LUNs
sudo iscsiadm -m node -R
Performance Troubleshooting
# Monitor iSCSI traffic
sudo tcpdump -i eth0 port 3260
# Check device I/O performance
sudo fio --name=seqread --ioengine=libaio --iodepth=32 --rw=read \
--bs=128k --size=1G --filename=/dev/mapper/storage-array-1
# View block device statistics (iSCSI disks appear as sdX/dm-N, not "iscsi")
grep -E 'sd|dm-' /proc/diskstats
# Monitor multipath failover
sudo dmsetup status | grep mpath
Conclusion
iSCSI provides cost-effective block storage networking suitable for diverse infrastructure scenarios. By properly configuring targets with appropriate LUNs and ACLs, and deploying multipath I/O for redundancy, you establish a reliable storage infrastructure. Understanding performance tuning parameters and implementing proper monitoring ensures optimal data delivery. Whether deploying for virtual machine storage, shared databases, or backup infrastructure, mastering iSCSI configuration delivers the flexibility and performance required for modern data center environments.