Network Performance Testing with iperf3

iperf3 is the standard tool for measuring maximum TCP and UDP bandwidth, making it essential for validating network infrastructure, diagnosing performance issues, and optimizing network configurations. With support for multiple concurrent streams, bidirectional testing, and detailed metrics, iperf3 enables comprehensive network performance evaluation. This guide covers installation, operation, and interpretation of network performance results.

Table of Contents

  1. iperf3 Installation and Setup
  2. Basic Network Testing
  3. TCP Performance Measurement
  4. UDP Testing and Loss Analysis
  5. Advanced Configuration
  6. Bidirectional and Concurrent Testing
  7. JSON Output and Analysis
  8. Network Troubleshooting
  9. Conclusion

iperf3 Installation and Setup

Installing iperf3

# Ubuntu/Debian installation
sudo apt-get update
sudo apt-get install -y iperf3

# CentOS/RHEL installation
sudo yum install -y iperf3

# Verify installation
iperf3 --version

# Show available options
iperf3 --help | head -50

Network Connectivity Prerequisites

# Test connectivity between client and server
ping -c 4 server-ip
ping -c 4 192.168.1.20

# Check routing
route -n
ip route show

# Verify firewall allows iperf3 port (default 5201)
# On server:
sudo ufw allow 5201/tcp
sudo ufw allow 5201/udp

# Or firewalld:
sudo firewall-cmd --permanent --add-port=5201/tcp
sudo firewall-cmd --permanent --add-port=5201/udp
sudo firewall-cmd --reload

# Test port accessibility
nc -zv 192.168.1.20 5201

Basic Network Testing

Starting iperf3 Server

# Run in server mode (listen for client connections)
iperf3 -s

# Server runs continuously awaiting clients
# Output shows: Listening on port 5201

# Start server in background
iperf3 -s &

# Start with custom port
iperf3 -s -p 5210

# Server with verbose output
iperf3 -s -v

# Check running iperf3 processes
ps aux | grep iperf3

Running Basic Client Test

# Run 10-second test from client to server
iperf3 -c 192.168.1.20

# Parameters explained:
# -c: Client mode (connect to server IP)
# Default: 10-second test duration

# Typical output:
# [ ID] Interval           Transfer     Bitrate
# [  5]   0.00-1.00   sec   125 MBytes  1.05 Gbits/sec
# [  5]   1.00-2.00   sec   132 MBytes  1.11 Gbits/sec
# ...
# - - - - - - - - - - - - - - - - - - - - - - - - -
# [  5]   0.00-10.00  sec  1.23 GBytes  1.05 Gbits/sec  sender
# [  5]   0.00-10.00  sec  1.23 GBytes  1.05 Gbits/sec  receiver
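The Transfer and Bitrate columns are consistent with each other: bytes moved in the interval, times 8, gives bits per second. A quick sanity check against the first interval of the sample output above:

```shell
#!/bin/sh
# 125 MBytes (2^20-byte megabytes, as iperf3 reports) in 1 second,
# converted to decimal Gbits/sec (10^9, as iperf3 reports bitrate)
awk 'BEGIN { printf "%.2f Gbits/sec\n", 125 * 1024 * 1024 * 8 / 1e9 }'
```

Note the mixed conventions: iperf3 uses binary units (MiB-style) for Transfer but decimal units for Bitrate, which is why 125 MBytes/sec is 1.05, not 1.00, Gbits/sec.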

Custom Test Duration and Window Size

# 30-second test
iperf3 -c 192.168.1.20 -t 30

# 1-minute extended test
iperf3 -c 192.168.1.20 -t 60

# Request a specific TCP window/socket buffer size
# (left to OS autotuning by default)
iperf3 -c 192.168.1.20 -w 512K

# Very large window for high-bandwidth networks
iperf3 -c 192.168.1.20 -w 2M -t 30

# Show connection details
iperf3 -c 192.168.1.20 -v
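Choosing a window size is a bandwidth-delay product question: the window must cover bandwidth times round-trip time, or the sender stalls waiting for ACKs and the link can never stay full. A quick calculation (the link speed and RTT here are assumed example values):

```shell
#!/bin/sh
# Bandwidth-delay product: minimum TCP window needed to keep a link full.
# BDP (bytes) = bandwidth (bits/s) * RTT (s) / 8
bw_bits=1000000000   # assumed: 1 Gbit/s link
rtt_ms=20            # assumed: 20 ms round-trip time
bdp=$(awk -v bw="$bw_bits" -v rtt="$rtt_ms" \
  'BEGIN { printf "%d", bw * (rtt / 1000) / 8 }')
echo "BDP: $bdp bytes"
```

A 1 Gbit/s path at 20 ms RTT needs about 2.5 MB of window, so a `-w 3M` test would be a reasonable starting point, while the 64K example above could not exceed roughly 26 Mbits/sec on such a path.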

TCP Performance Measurement

Single Stream TCP Test

# Baseline single-stream throughput
iperf3 -c 192.168.1.20 -t 30

# Typical single stream results:
# Gigabit Ethernet: ~940 Mbps (94% of wire speed)
# 10 Gbps: ~9.4 Gbps
# Limited by: single CPU core, TCP protocol overhead

# Test with different TCP buffer sizes
iperf3 -c 192.168.1.20 -w 64K -t 30    # Smaller buffer
iperf3 -c 192.168.1.20 -w 1M -t 30     # Large buffer
iperf3 -c 192.168.1.20 -w 4M -t 30     # Very large buffer

Multi-Stream TCP Testing

# Test with multiple parallel streams (4 streams)
iperf3 -c 192.168.1.20 -P 4 -t 30

# Parameters:
# -P: Number of parallel streams

# Note: iperf3 versions before 3.16 run all streams in a single
# thread, so one CPU core can still cap aggregate throughput;
# newer versions use one thread per stream

# Typical results with 4 streams on a 10 GbE link:
# Combined: ~2.5-3.8 Gbits/sec if a single flow was CPU-bound
# Each stream carries roughly an equal share of the total

# Test with many streams (find saturation point)
for streams in 1 2 4 8 16 32; do
  echo "=== $streams streams ==="
  iperf3 -c 192.168.1.20 -P $streams -t 20
done

# Stream scaling analysis
iperf3 -c 192.168.1.20 -P $(nproc) -t 30 -v
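To tabulate the saturation loop's results, the aggregate line can be pulled out with awk. The `[SUM]` line below is an assumed sample in iperf3's usual text-output format; against a captured log, the same pattern applies:

```shell
#!/bin/sh
# Extract the aggregate rate from an iperf3 multi-stream summary line.
# The sample line mimics iperf3's usual text output (values invented).
line='[SUM]   0.00-20.00  sec  8.21 GBytes  3.53 Gbits/sec                  receiver'
echo "$line" | awk '/\[SUM\]/ { print $(NF-2), $(NF-1) }'
```

For scripted comparisons across stream counts, `-J` with jq (covered later) is more robust than parsing the human-readable text.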

TCP_NODELAY and CPU Affinity

# Disable Nagle's algorithm (sets the TCP_NODELAY socket option)
iperf3 -c 192.168.1.20 --no-delay -t 30

# TCP_NODELAY sends small writes immediately instead of coalescing them
# Useful for latency-sensitive, interactive applications

# Test affinity (bind to specific CPU cores)
iperf3 -c 192.168.1.20 -A 0 -t 30      # Bind to core 0

# Multiple streams with CPU affinity
taskset -c 0-3 iperf3 -c 192.168.1.20 -P 4 -t 30

UDP Testing and Loss Analysis

UDP Throughput Testing

# UDP test at a 1 Gbit/s target bitrate
# (without -b, iperf3 UDP defaults to only 1 Mbit/s)
iperf3 -c 192.168.1.20 -u -b 1G -t 30

# Parameters:
# -u: UDP mode
# -b: Target bitrate (1G = 1 Gbit/s)

# Typical UDP output shows:
# Bitrate, Jitter, Lost/Total datagrams, Loss %

# High-speed UDP test
iperf3 -c 192.168.1.20 -u -b 10G -t 30

# Moderate rate, reverse direction (-R: server sends,
# client receives and reports loss locally)
iperf3 -c 192.168.1.20 -u -b 100M -t 30 -R

Analyzing UDP Loss

# Test UDP loss at various bitrates
for bitrate in 100M 500M 1G 5G; do
  echo "=== Testing $bitrate UDP ==="
  iperf3 -c 192.168.1.20 -u -b $bitrate -t 10
done

# Detailed loss analysis
iperf3 -c 192.168.1.20 -u -b 1G -t 30 -J > udp-results.json

# Parse the summary (iperf3 -J puts totals under the "end" key)
jq '.end' udp-results.json

# Monitor packet loss in real-time
iperf3 -c 192.168.1.20 -u -b 5G -t 60 -R | grep -E "sec|lost"

Datagram Size Effects

# Near-MTU payload (a 1500-byte UDP payload plus 28 bytes of IP/UDP
# headers fragments on a standard 1500 MTU link; ~1460 stays whole)
iperf3 -c 192.168.1.20 -u -b 1G -t 30 -l 1460

# Large datagrams (requires jumbo frames, MTU >= 9000, end to end)
iperf3 -c 192.168.1.20 -u -b 1G -t 30 -l 8972

# Small datagrams (stress test)
iperf3 -c 192.168.1.20 -u -b 1G -t 30 -l 64

# Compare datagram size impact
for size in 64 256 1500 4096 9000; do
  echo "=== Datagram size: $size bytes ==="
  iperf3 -c 192.168.1.20 -u -b 2G -t 20 -l $size 2>&1 | grep -E "sec|lost"
done
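Small datagrams stress packets-per-second handling partly because headers dominate the wire. Each IPv4/UDP datagram carries 28 bytes of IP+UDP headers plus roughly 38 bytes of Ethernet framing (preamble, header, FCS, interframe gap), so wire efficiency falls sharply at small sizes. A quick calculation over the sizes used above:

```shell
#!/bin/sh
# Wire efficiency per UDP payload size: payload / (payload + overhead).
# Overhead assumed: 8 UDP + 20 IPv4 + 38 Ethernet framing = 66 bytes.
for size in 64 256 1460 8972; do
  awk -v s="$size" \
    'BEGIN { printf "%5d bytes: %.1f%% efficient\n", s, 100 * s / (s + 66) }'
done
```

This is why a 64-byte test that "only" achieves half the nominal bitrate in goodput may still be saturating the link on the wire.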

Advanced Configuration

Reverse Testing (Server to Client)

# Reverse direction: the server transmits and the client receives
# (tests the client's download / the server's upstream path)
iperf3 -c 192.168.1.20 -R -t 30

# Useful for asymmetric link testing
# Verifies server's upstream bandwidth

# Reverse with multiple streams
iperf3 -c 192.168.1.20 -R -P 8 -t 30

Bandwidth Limit Testing

# -b paces the sender at a fixed rate; it caps throughput rather than
# guaranteeing a minimum, so check the receiver's result to confirm
# the network actually sustains the target

# Pace TCP at 1 Gbit/s
iperf3 -c 192.168.1.20 -b 1G -t 60

# Lower bandwidth test (WAN links)
iperf3 -c 192.168.1.20 -b 100M -t 60

# Measure burst capacity
iperf3 -c 192.168.1.20 -b 0 -t 10  # No limit (max capacity)

Bidirectional and Concurrent Testing

Simultaneous Bidirectional Test

# Test both directions simultaneously
iperf3 -c 192.168.1.20 --bidir -t 30

# Shows aggregate and individual directional throughput
# Important for: VPN, replication, backup scenarios
# Often lower than unidirectional due to protocol overhead

# Bidirectional with multiple streams
iperf3 -c 192.168.1.20 --bidir -P 4 -t 30

# UDP bidirectional
iperf3 -c 192.168.1.20 --bidir -u -b 1G -t 30

Concurrent Client Testing

# An iperf3 server handles only ONE test at a time; a second client
# is rejected with "the server is busy running a test". To test
# concurrent clients, run one server instance per port.

# On the server:
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &
iperf3 -s -p 5203 &

# Each client (from the same or different hosts) targets its own port:
iperf3 -c 192.168.1.20 -p 5201 -t 60 &   # Client 1
iperf3 -c 192.168.1.20 -p 5202 -t 60 &   # Client 2
iperf3 -c 192.168.1.20 -p 5203 -t 60 &   # Client 3

# Observe aggregate load on the server's interface while tests run

# Simulate load with a script (a server must be listening on each port)
cat > multi_client_test.sh <<'EOF'
#!/bin/bash
for i in 1 2 3 4; do
  iperf3 -c 192.168.1.20 -p $((5200 + i)) -t 60 -P 2 &
done
wait
EOF

chmod +x multi_client_test.sh
./multi_client_test.sh

JSON Output and Analysis

Generating JSON Results

# Output results in JSON format
iperf3 -c 192.168.1.20 -t 30 -J > test_result.json

# Pretty print JSON
cat test_result.json | jq .

# Extract specific metrics (totals live under the "end" key)
jq '.end.sum_sent.bits_per_second' test_result.json

# Run multiple tests, writing one JSON file each
# (appending objects with commas would produce invalid JSON)
for i in {1..3}; do
  iperf3 -c 192.168.1.20 -t 30 -J > result_$i.json
done

Parsing and Analysis

# Extract throughput in Gbits/sec
jq '.end.sum_received.bits_per_second / 1000000000' test_result.json

# Get jitter in ms (UDP tests only)
jq '.end.sum.jitter_ms' test_result.json

# Extract loss percentage (UDP tests only)
jq '.end.sum.lost_percent' test_result.json

# Create summary script (jitter/loss only exist for UDP results,
# so fall back to "n/a" for TCP files)
cat > analyze_iperf.sh <<'EOF'
#!/bin/bash
for file in *.json; do
  echo "=== $file ==="
  echo "Throughput (Gbps): $(jq '.end.sum_received.bits_per_second / 1000000000' "$file")"
  echo "Jitter (ms): $(jq '.end.sum.jitter_ms // "n/a"' "$file")"
  echo "Loss: $(jq '.end.sum.lost_percent // "n/a"' "$file")%"
done
EOF

chmod +x analyze_iperf.sh
./analyze_iperf.sh
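The jq paths used above can be sanity-checked offline against a minimal JSON file of the same shape as iperf3 -J output. The field names below match iperf3's schema; the values are invented for illustration:

```shell
#!/bin/sh
# Minimal stand-in for iperf3 -J output (UDP test shape, values invented)
cat > sample.json <<'EOF'
{"end": {"sum_received": {"bits_per_second": 941000000.0},
         "sum": {"jitter_ms": 0.042, "lost_percent": 0.0}}}
EOF

jq '.end.sum_received.bits_per_second / 1000000000' sample.json   # Gbps
jq '.end.sum.jitter_ms' sample.json                               # ms
```

Validating extraction logic against a known fixture like this catches typos in jq paths before they silently return `null` on real results.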

Network Troubleshooting

Identifying Network Issues

# Test reveals network issues:
# 1. Low throughput: Check for contention, errors
# 2. High jitter: Indicates queuing or congestion
# 3. Packet loss: Buffer exhaustion or line errors

# Monitor system during iperf test
# Terminal 1 (server): iperf3 -s
# Terminal 2 (client): iperf3 -c 192.168.1.20 -t 60
# Terminal 3 (monitor):

# Check network errors
watch -n 1 'ethtool -S eth0 | grep -E "error|drop|collision"'

# Monitor CPU during test
watch -n 1 'ps aux | grep iperf3'
top -p "$(pgrep -d, iperf3)"   # -d, comma-joins PIDs if several match

# Check TX/RX packet, byte, and drop counters
watch -n 1 'ip -s link show eth0'

Diagnosing Throughput Issues

# Test with tracing to identify bottlenecks
# 1. Single stream baseline
iperf3 -c 192.168.1.20 -t 30

# 2. Multiple streams (CPU bound?)
iperf3 -c 192.168.1.20 -P 8 -t 30

# 3. Large window size (buffer limited?)
iperf3 -c 192.168.1.20 -w 4M -t 30

# 4. Specific port selection (a server must be listening on each port)
# Determine if the network treats ports differently (QoS, policing)
iperf3 -c 192.168.1.20 -p 5201 -t 30
iperf3 -c 192.168.1.20 -p 5202 -t 30

# 5. MTU tuning test
# Check if MTU affects performance
iperf3 -c 192.168.1.20 -t 30
# Then lower MTU and test again
# ip link set dev eth0 mtu 1200
iperf3 -c 192.168.1.20 -t 30

Conclusion

iperf3 remains the gold standard for network performance validation, essential for infrastructure planning and troubleshooting. By understanding TCP and UDP characteristics, multi-stream scaling, and bidirectional performance implications, network engineers can make informed decisions about capacity and optimization. Regular baseline testing establishes reference points for detecting performance degradation caused by hardware aging or configuration drift. Whether validating new infrastructure, troubleshooting performance problems, or tuning network configurations, iperf3 provides the metrics needed for data-driven network engineering decisions.