Advanced Networking with iproute2: Modern Linux Network Configuration Guide

Introduction

iproute2 represents the modern standard for Linux network configuration, replacing legacy tools like ifconfig, route, and arp with a unified, powerful, and feature-rich networking suite. Developed by Alexey Kuznetsov and maintained by the Linux kernel networking community, iproute2 provides comprehensive control over routing, traffic control, network namespaces, VRF (Virtual Routing and Forwarding), policy-based routing, and advanced networking features essential for enterprise infrastructure.

While legacy net-tools remain familiar to many administrators, they lack support for modern kernel networking capabilities introduced over the past two decades. iproute2 provides access to VxLAN tunnels, MPLS routing, segment routing, network namespaces, multi-path routing, policy-based routing tables, and (through tc) sophisticated traffic shaping: features that traditional tools cannot configure.

Organizations building complex network architectures—multi-tenant data centers, software-defined networks, container orchestration platforms, and high-performance trading networks—depend on iproute2 for implementing advanced topologies, optimizing traffic flow, and troubleshooting connectivity issues. Major cloud providers including AWS, Google Cloud, and Azure leverage iproute2 extensively in their underlying infrastructure for implementing VPC networking, overlay networks, and sophisticated routing policies.

System administrators mastering iproute2 gain capabilities far beyond basic network configuration: implementing quality of service policies, building overlay networks, creating complex routing scenarios, troubleshooting performance bottlenecks, and architecting software-defined networking solutions that would be impossible with legacy tooling.

This comprehensive guide explores enterprise-grade iproute2 implementations, covering fundamental commands, advanced routing scenarios, traffic control, network namespaces, tunnel technologies, performance optimization, troubleshooting methodologies, and practical use cases essential for modern Linux networking.

Theory and Core Concepts

iproute2 Architecture

iproute2 consists of several integrated utilities:

ip: Primary interface for network configuration managing addresses, routes, links, tunnels, and rules. Replaces ifconfig, route, and arp.

tc (Traffic Control): Implements quality of service, bandwidth management, and packet scheduling. Configures queuing disciplines, filters, and policers.

ss (Socket Statistics): Displays socket information replacing netstat. Faster and provides more detailed information about network connections.

bridge: Manages Linux bridge devices for Layer 2 forwarding. Essential for virtualization and container networking.

rtmon: Records netlink routing and link events to a file for later inspection with ip monitor.

ip netns (documented as ip-netns(8)): Subcommand of ip that manages network namespaces, enabling network isolation for containers and multi-tenancy.

Linux Routing Architecture

Understanding routing fundamentals is essential:

Routing Tables: Linux supports multiple routing tables (historically numbered 0-255; modern kernels accept 32-bit table IDs). Common tables:

  • Main (table 254): Default routing table used by most applications
  • Local (table 255): Local and broadcast addresses (automatically managed)
  • Default (table 253): Reserved for post-routing defaults

Routing Policy Database (RPDB): Rules determining which routing table to use based on packet characteristics (source address, TOS, fwmark, interface).

Routing Cache: The IPv4 route cache was removed in kernel 3.6; lookups now go directly to the routing tables (FIB), with per-destination exceptions such as PMTU tracked against next-hop entries.

Multi-Path Routing: Distributes traffic across multiple next hops with configurable weights; modern kernels select a path per flow using an L3 (optionally L3/L4) hash.

Traffic Control Framework

Linux traffic control implements sophisticated QoS:

Queuing Disciplines (qdiscs): Control packet queuing and scheduling:

  • pfifo_fast: Default FIFO with three priority bands
  • fq_codel: Fair Queuing with Controlled Delay (recommended default)
  • htb: Hierarchical Token Bucket for rate limiting
  • sfq: Stochastic Fairness Queuing

Classes: Hierarchical groupings within classful qdiscs enabling complex bandwidth allocation.

Filters: Classify packets into different classes based on criteria (source/dest IP, port, protocol, TOS/DSCP).

Policers/Shapers: Control bandwidth usage by dropping or delaying packets exceeding configured rates.
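
Policing is referenced here but not demonstrated later in this guide; as a minimal sketch, inbound traffic on an interface can be policed with the special ingress qdisc (interface name and rate are illustrative):

# Attach the ingress qdisc, then police all IPv4 traffic to 50Mbit
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 \
    police rate 50mbit burst 1m drop flowid :1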

Network Namespaces

Namespaces provide complete network stack isolation:

Each namespace has independent:

  • Network interfaces
  • Routing tables
  • Firewall rules (iptables/nftables)
  • Socket bindings
  • Network statistics

Critical for container networking, multi-tenancy, and network function virtualization.

Prerequisites

Hardware Requirements

Minimum System Specifications:

  • 2 CPU cores (4+ for traffic control testing)
  • 4GB RAM minimum
  • Multiple network interfaces (for advanced routing scenarios)
  • 10Gbps NICs recommended for high-performance configurations

Software Prerequisites

Installation:

RHEL/Rocky/CentOS:

# iproute2 installed by default, verify/update
dnf install -y iproute iproute-tc

# Verify version (4.0+ recommended)
ip -V

Ubuntu/Debian:

# iproute2 installed by default
apt update
apt install -y iproute2

ip -V

Additional Tools:

# Install helpful utilities (the bridge command ships with iproute2;
# legacy bridge-utils is no longer packaged for RHEL 8+)
dnf install -y tcpdump ethtool               # RHEL/Rocky
apt install -y tcpdump ethtool bridge-utils  # Ubuntu/Debian

Kernel Requirements

Verify advanced networking support:

# Check kernel modules
lsmod | grep -E "vxlan|gre|ipip|fou"

# Load required modules
modprobe vxlan
modprobe ip_gre
modprobe ip_tunnel

# Make persistent
cat > /etc/modules-load.d/networking.conf << EOF
vxlan
ip_gre
ip_tunnel
EOF

Advanced Configuration

Interface Management

Basic Interface Operations:

# Show all interfaces
ip link show

# Show specific interface
ip link show dev eth0

# Bring interface up/down
ip link set eth0 up
ip link set eth0 down

# Change MTU
ip link set eth0 mtu 9000

# Change MAC address
ip link set eth0 address 00:11:22:33:44:55

# Enable/disable promiscuous mode
ip link set eth0 promisc on

# Set interface alias
ip link set eth0 alias "Management Interface"

VLAN Configuration:

# Create VLAN interface
ip link add link eth0 name eth0.100 type vlan id 100

# Assign IP address
ip addr add 192.168.100.10/24 dev eth0.100

# Bring up
ip link set eth0.100 up

# Remove VLAN
ip link delete eth0.100

Bridge Configuration:

# Create bridge
ip link add br0 type bridge

# Add interfaces to bridge
ip link set eth1 master br0
ip link set eth2 master br0

# Configure bridge
ip link set br0 up
ip addr add 192.168.1.1/24 dev br0

# Show bridge details
bridge link show

# Remove interface from bridge
ip link set eth1 nomaster

# Bridge STP configuration
ip link set br0 type bridge stp_state 1
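
Bridges can also filter VLANs themselves; a minimal sketch, assuming eth1 is already enslaved to br0 as above:

# Enable VLAN filtering on the bridge and tag/untag a port
ip link set br0 type bridge vlan_filtering 1
bridge vlan add dev eth1 vid 100
bridge vlan add dev eth1 vid 200 pvid untagged

# Show VLAN membership
bridge vlan show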

Bonding/Teaming:

# Create bond interface
ip link add bond0 type bond mode 802.3ad

# Add slaves to bond
ip link set eth0 master bond0
ip link set eth1 master bond0

# Configure bond
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0

# View bond status
cat /proc/net/bonding/bond0

# Change bond mode (the bond must be down and have no slaves enslaved)
ip link set bond0 down
ip link set bond0 type bond mode active-backup

# Available modes: balance-rr, active-backup, balance-xor,
# broadcast, 802.3ad, balance-tlb, balance-alb
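
Bond parameters such as link monitoring and LACP behavior can also be set at creation time; a sketch (bond1 and the values shown are illustrative):

# LACP bond with 100ms MII monitoring and L3/L4 transmit hashing
ip link add bond1 type bond mode 802.3ad miimon 100 lacp_rate fast \
    xmit_hash_policy layer3+4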

Address Management

IPv4 Address Configuration:

# Add IP address
ip addr add 192.168.1.10/24 dev eth0

# Add multiple addresses
ip addr add 192.168.1.11/24 dev eth0
ip addr add 192.168.1.12/24 dev eth0

# Add address with label
ip addr add 192.168.1.20/24 dev eth0 label eth0:1

# Add broadcast address explicitly
ip addr add 192.168.1.30/24 broadcast 192.168.1.255 dev eth0

# Remove address
ip addr del 192.168.1.10/24 dev eth0

# Flush all addresses from interface
ip addr flush dev eth0

IPv6 Address Configuration:

# Add IPv6 address
ip -6 addr add 2001:db8::10/64 dev eth0

# Add link-local address
ip -6 addr add fe80::1/64 dev eth0

# Disable IPv6 address autoconfiguration (SLAAC)
echo 0 > /proc/sys/net/ipv6/conf/eth0/autoconf
echo 0 > /proc/sys/net/ipv6/conf/eth0/accept_ra

# Show IPv6 addresses only
ip -6 addr show
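
IPv6 routing uses the same commands with the -6 family selector; for example (2001:db8::/32 is a documentation prefix):

# Show IPv6 routes
ip -6 route show

# Add an IPv6 default route
ip -6 route add default via 2001:db8::1 dev eth0

# A link-local next hop requires the device
ip -6 route add 2001:db8:100::/64 via fe80::1 dev eth0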

Routing Configuration

Basic Routing:

# Show routing table
ip route show

# Add default gateway
ip route add default via 192.168.1.1

# Add specific route
ip route add 10.0.0.0/8 via 192.168.1.254

# Add route via interface
ip route add 172.16.0.0/12 dev eth1

# Delete route
ip route del 10.0.0.0/8

# Replace route
ip route replace 10.0.0.0/8 via 192.168.1.253

Multiple Routing Tables:

# Create custom routing table (edit /etc/iproute2/rt_tables)
echo "100 custom" >> /etc/iproute2/rt_tables

# Add routes to custom table
ip route add default via 192.168.2.1 table custom
ip route add 10.0.0.0/8 via 192.168.2.254 table custom

# Show custom table
ip route show table custom

# Show all tables
ip route show table all

Policy-Based Routing:

# Route based on source address
ip rule add from 192.168.1.0/24 table custom priority 100

# Route based on destination
ip rule add to 10.0.0.0/8 table custom priority 200

# Route based on interface
ip rule add iif eth1 table custom priority 300

# Route based on TOS
ip rule add tos 0x10 table custom priority 400

# Route based on fwmark (set by iptables)
ip rule add fwmark 1 table custom priority 500

# Show rules
ip rule show

# Delete rule
ip rule del from 192.168.1.0/24 table custom

Multi-Path Routing:

# Load balance across multiple gateways
ip route add default scope global \
    nexthop via 192.168.1.1 dev eth0 weight 1 \
    nexthop via 192.168.2.1 dev eth1 weight 1

# Weighted multi-path (favor one path)
ip route add 10.0.0.0/8 \
    nexthop via 192.168.1.254 weight 3 \
    nexthop via 192.168.2.254 weight 1

# Show multi-path routes
ip route show
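
Modern kernels pick a next hop per flow; a sketch for enabling L4-aware hashing (available since roughly kernel 4.12) and verifying the selection:

# Include TCP/UDP ports in the multipath hash
sysctl -w net.ipv4.fib_multipath_hash_policy=1

# Show which next hop a given destination would use
ip route get 10.1.2.3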

Tunnel Configuration

GRE Tunnel:

# Create GRE tunnel
ip tunnel add gre1 mode gre \
    local 203.0.113.10 \
    remote 203.0.113.20 \
    ttl 255

# Configure tunnel interface
ip addr add 10.10.10.1/30 dev gre1
ip link set gre1 up

# Add route through tunnel
ip route add 172.16.0.0/16 dev gre1

# Remove tunnel
ip link delete gre1

VxLAN Tunnel:

# Create VxLAN interface
ip link add vxlan100 type vxlan \
    id 100 \
    local 192.168.1.10 \
    remote 192.168.1.20 \
    dev eth0 \
    dstport 4789

# Configure VxLAN interface
ip addr add 10.100.0.1/24 dev vxlan100
ip link set vxlan100 up

# Multicast VxLAN
ip link add vxlan200 type vxlan \
    id 200 \
    group 239.1.1.1 \
    dev eth0 \
    dstport 4789
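
For unicast VxLAN with more than one remote VTEP, additional peers are typically added as all-zero FDB entries via the bridge tool; a sketch assuming a second VTEP at 192.168.1.30:

# Replicate unknown/broadcast traffic to an additional remote VTEP
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.168.1.30

# Show the VxLAN forwarding database
bridge fdb show dev vxlan100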

IPIP Tunnel:

# Create IPIP tunnel
ip tunnel add ipip1 mode ipip \
    local 203.0.113.10 \
    remote 203.0.113.20

ip addr add 10.20.20.1/30 dev ipip1
ip link set ipip1 up

# Add route
ip route add 192.168.0.0/16 dev ipip1

WireGuard Interface (modern VPN):

# Create WireGuard interface
ip link add dev wg0 type wireguard

# Configure interface
ip addr add 10.0.0.1/24 dev wg0
ip link set wg0 up

# Configure via wg tool
wg set wg0 \
    private-key /etc/wireguard/private.key \
    listen-port 51820 \
    peer <PUBLIC_KEY> \
    allowed-ips 10.0.0.2/32 \
    endpoint 203.0.113.20:51820

# Show WireGuard status
wg show wg0
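
For persistent WireGuard setups, wg-quick (part of wireguard-tools) builds the interface from an INI-style file; a minimal sketch of /etc/wireguard/wg0.conf with placeholder keys:

[Interface]
Address = 10.0.0.1/24
PrivateKey = <PRIVATE_KEY>
ListenPort = 51820

[Peer]
PublicKey = <PUBLIC_KEY>
AllowedIPs = 10.0.0.2/32
Endpoint = 203.0.113.20:51820

# Bring the tunnel up or down
wg-quick up wg0
wg-quick down wg0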

Traffic Control (tc)

Basic Rate Limiting:

# Limit interface to 100Mbit (TBF burst must be at least rate/HZ; 256KB is safe here)
tc qdisc add dev eth0 root tbf rate 100mbit burst 256kb latency 400ms

# Show qdisc configuration
tc qdisc show dev eth0

# Remove qdisc
tc qdisc del dev eth0 root

Hierarchical Token Bucket (HTB):

# Create root qdisc
tc qdisc add dev eth0 root handle 1: htb default 30

# Create root class (total bandwidth)
tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit

# Create child classes
# High priority traffic (50% guaranteed, can use 80%)
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 500mbit ceil 800mbit prio 1

# Medium priority (30% guaranteed, can use 60%)
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300mbit ceil 600mbit prio 2

# Low priority (20% guaranteed, can use 40%)
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 200mbit ceil 400mbit prio 3

# Add filters to classify traffic
# High priority: SSH, DNS
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
    match ip dport 22 0xffff flowid 1:10

tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
    match ip dport 53 0xffff flowid 1:10

# Medium priority: HTTP/HTTPS
tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 \
    match ip dport 80 0xffff flowid 1:20

tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 \
    match ip dport 443 0xffff flowid 1:20

# Show statistics
tc -s class show dev eth0
tc -s filter show dev eth0

Fair Queuing with Controlled Delay (fq_codel):

# Enable fq_codel (recommended default)
tc qdisc add dev eth0 root fq_codel

# Configure parameters explicitly (use replace so this works whether or not a root qdisc already exists)
tc qdisc replace dev eth0 root fq_codel \
    limit 10240 \
    flows 1024 \
    quantum 1514 \
    target 5ms \
    interval 100ms

# Show statistics
tc -s qdisc show dev eth0

Traffic Shaping Script Example:

#!/bin/bash
# traffic_shaping.sh - Implement QoS

IFACE="eth0"
TOTAL_BW="1000mbit"

# Clear existing rules
tc qdisc del dev $IFACE root 2>/dev/null

# Create HTB root
tc qdisc add dev $IFACE root handle 1: htb default 999

# Root class
tc class add dev $IFACE parent 1: classid 1:1 htb rate $TOTAL_BW

# Interactive traffic (40% guaranteed, can burst to 80%)
tc class add dev $IFACE parent 1:1 classid 1:10 htb \
    rate 400mbit ceil 800mbit prio 0

# Bulk traffic (40% guaranteed, can burst to 60%)
tc class add dev $IFACE parent 1:1 classid 1:20 htb \
    rate 400mbit ceil 600mbit prio 1

# Default/other (20% guaranteed)
tc class add dev $IFACE parent 1:1 classid 1:999 htb \
    rate 200mbit ceil 300mbit prio 2

# Add sfq to leaf classes for fairness
tc qdisc add dev $IFACE parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $IFACE parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $IFACE parent 1:999 handle 999: sfq perturb 10

# Classify interactive traffic
tc filter add dev $IFACE protocol ip parent 1:0 prio 1 u32 \
    match ip protocol 6 0xff \
    match ip dport 22 0xffff \
    flowid 1:10

tc filter add dev $IFACE protocol ip parent 1:0 prio 1 u32 \
    match ip protocol 17 0xff \
    match ip dport 53 0xffff \
    flowid 1:10

# Classify bulk traffic
tc filter add dev $IFACE protocol ip parent 1:0 prio 2 u32 \
    match ip protocol 6 0xff \
    match ip sport 80 0xffff \
    flowid 1:20

echo "Traffic shaping configured on $IFACE"
tc -s class show dev $IFACE

Network Namespaces

Create and Configure Namespace:

# Create namespace
ip netns add red

# List namespaces
ip netns list

# Execute command in namespace
ip netns exec red ip link show

# Create veth pair
ip link add veth-host type veth peer name veth-red

# Move one end to namespace
ip link set veth-red netns red

# Configure host side
ip addr add 192.168.1.1/24 dev veth-host
ip link set veth-host up

# Configure namespace side
ip netns exec red ip addr add 192.168.1.2/24 dev veth-red
ip netns exec red ip link set veth-red up
ip netns exec red ip link set lo up

# Add default route in namespace
ip netns exec red ip route add default via 192.168.1.1

# Test connectivity
ip netns exec red ping -c 3 192.168.1.1

# Delete namespace
ip netns del red

Connect Multiple Namespaces via Bridge:

# Create namespaces
ip netns add ns1
ip netns add ns2

# Create bridge
ip link add br0 type bridge
ip link set br0 up
ip addr add 192.168.100.1/24 dev br0

# Create veth pairs
ip link add veth1-host type veth peer name veth1-ns
ip link add veth2-host type veth peer name veth2-ns

# Connect host ends to bridge
ip link set veth1-host master br0
ip link set veth2-host master br0
ip link set veth1-host up
ip link set veth2-host up

# Move namespace ends
ip link set veth1-ns netns ns1
ip link set veth2-ns netns ns2

# Configure namespace 1
ip netns exec ns1 ip addr add 192.168.100.10/24 dev veth1-ns
ip netns exec ns1 ip link set veth1-ns up
ip netns exec ns1 ip link set lo up
ip netns exec ns1 ip route add default via 192.168.100.1

# Configure namespace 2
ip netns exec ns2 ip addr add 192.168.100.20/24 dev veth2-ns
ip netns exec ns2 ip link set veth2-ns up
ip netns exec ns2 ip link set lo up
ip netns exec ns2 ip route add default via 192.168.100.1

# Test inter-namespace connectivity
ip netns exec ns1 ping -c 3 192.168.100.20
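
To give the namespaces access to networks beyond the host, the host must forward and NAT their traffic; a sketch assuming eth0 is the host's uplink:

# Enable forwarding and masquerade namespace traffic leaving via the uplink
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE

# Test external reachability from a namespace
ip netns exec ns1 ping -c 3 8.8.8.8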

Performance Optimization

Interface Optimization

Hardware Offload Features:

# Show current offload settings
ethtool -k eth0

# Enable hardware offloads
ethtool -K eth0 tso on
ethtool -K eth0 gso on
ethtool -K eth0 gro on
ethtool -K eth0 sg on
ethtool -K eth0 rx on
ethtool -K eth0 tx on

# Disable offloads (for troubleshooting)
ethtool -K eth0 tso off gso off gro off

Ring Buffer Tuning:

# Show current ring buffer sizes
ethtool -g eth0

# Increase ring buffers
ethtool -G eth0 rx 4096 tx 4096

Interrupt Coalescing:

# Show current settings
ethtool -c eth0

# Reduce interrupt frequency (increase throughput)
ethtool -C eth0 rx-usecs 50 tx-usecs 50

# Minimal latency (more interrupts)
ethtool -C eth0 rx-usecs 0 tx-usecs 0
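
Queue (channel) counts also matter on multi-core systems; where the NIC and driver support it:

# Show and increase the number of combined RX/TX channels (RSS queues)
ethtool -l eth0
ethtool -L eth0 combined 8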

Routing Optimization

Route Caching Considerations:

# Modern kernels (3.6+) removed route cache
# Optimization focuses on reducing lookup complexity

# Use specific routes instead of default for frequently accessed destinations
ip route add 8.8.8.8/32 via 192.168.1.1
ip route add 1.1.1.1/32 via 192.168.1.1

# Optimize rule order (most specific first)
ip rule show  # Rules processed in priority order

Traffic Control Optimization

Qdisc Selection for Workload:

# Low latency: pfifo_fast or fq
tc qdisc replace dev eth0 root fq

# General purpose: fq_codel (default recommendation)
tc qdisc replace dev eth0 root fq_codel

# High throughput bulk: sfq or pfifo_fast
tc qdisc replace dev eth0 root sfq perturb 10

# Complex QoS: HTB with appropriate parameters

Monitoring and Observability

Real-Time Link Monitoring:

# Watch link statistics
ip -s link show eth0

# Continuous monitoring
watch -n 1 'ip -s link show eth0'

# Detailed statistics
ip -s -s link show eth0

# JSON output for scripting
ip -j link show eth0 | jq
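
The JSON output pairs well with jq for extracting single counters in scripts; a sketch assuming a recent iproute2 that reports stats64:

# RX bytes on eth0 as a bare number
ip -s -j link show eth0 | jq '.[0].stats64.rx.bytes'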

Route Monitoring:

# Monitor routing table changes
ip monitor route

# Monitor all events
ip monitor all

# Monitor specific table
ip monitor route table custom

Socket Statistics:

# Show all sockets
ss -tunapl

# Show TCP statistics
ss -ti

# Show socket memory usage
ss -m

# Show specific port
ss -tunapl | grep :80
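
ss also has a built-in filter expression language, which avoids grepping; for example:

# Established HTTPS connections with timer information
ss -o state established '( dport = :443 or sport = :443 )'

# Listening TCP sockets on port 80
ss -ltn sport = :80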

Namespace Monitoring Script:

#!/bin/bash
# monitor_namespaces.sh

for ns in $(ip netns list | awk '{print $1}'); do
    echo "=== Namespace: $ns ==="
    echo "Interfaces:"
    ip netns exec $ns ip link show
    echo "Addresses:"
    ip netns exec $ns ip addr show
    echo "Routes:"
    ip netns exec $ns ip route show
    echo ""
done

Troubleshooting

Connectivity Issues

Symptom: Cannot reach destination network.

Diagnosis:

# Verify interface status
ip link show eth0

# Check IP configuration
ip addr show eth0

# Verify routing
ip route get 8.8.8.8

# Check ARP resolution
ip neigh show

# Test with ping
ping -c 3 8.8.8.8

# Trace the path to the destination
tracepath 8.8.8.8

Resolution:

# Bring interface up
ip link set eth0 up

# Add missing route
ip route add default via 192.168.1.1

# Flush ARP cache
ip neigh flush dev eth0

# Add static ARP entry
ip neigh add 192.168.1.1 lladdr 00:11:22:33:44:55 dev eth0

Policy Routing Not Working

Symptom: Traffic not following policy rules.

Diagnosis:

# Show rule order
ip rule show

# Check routing table contents
ip route show table custom

# Verify packet marking (if using fwmark)
iptables -t mangle -L -n -v

# Test with ip route get
ip route get 10.0.0.1 from 192.168.1.10

Resolution:

# Check rule priority (lower number = higher priority)
# Ensure specific rules come before general ones

# Add missing routes to custom table
ip route add default via 192.168.2.1 table custom

# Flush routing cache (if applicable)
ip route flush cache

Traffic Control Not Limiting Bandwidth

Symptom: tc rules configured but bandwidth not limited.

Diagnosis:

# Show qdisc configuration
tc qdisc show dev eth0

# Show statistics
tc -s qdisc show dev eth0
tc -s class show dev eth0

# Check filter rules
tc filter show dev eth0

# Test bandwidth
iperf3 -c remote-host

Resolution:

# Verify correct interface
tc qdisc show dev eth0  # Not eth1

# Check HTB rate limits
tc class show dev eth0  # Verify rate/ceil values

# Ensure filters match traffic
tcpdump -i eth0 -n | grep <dest-port>

# Remove and recreate configuration
tc qdisc del dev eth0 root
# ... recreate rules

Namespace Connectivity Problems

Symptom: Cannot communicate between namespaces or to external network.

Diagnosis:

# Check veth pair status
ip link show | grep veth

# Verify namespace configuration
ip netns exec ns1 ip addr show
ip netns exec ns1 ip route show

# Test connectivity
ip netns exec ns1 ping 192.168.1.1

# Check NAT rules
iptables -t nat -L -n -v

Resolution:

# Verify veth pair correctly connected
ip link | grep -A1 veth

# Ensure both ends are up
ip netns exec ns1 ip link set veth-ns up

# Add default route
ip netns exec ns1 ip route add default via 192.168.1.1

# Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

# Add NAT rule
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j MASQUERADE

Conclusion

iproute2 represents the modern standard for Linux network configuration, providing comprehensive control over routing, traffic control, network namespaces, and advanced networking features essential for enterprise infrastructure. Mastery of iproute2 utilities—particularly the ip and tc commands—enables administrators to implement sophisticated network architectures impossible with legacy tooling.

Understanding policy-based routing, multi-path routing, traffic control hierarchies, and network namespace management distinguishes basic network administrators from infrastructure engineers capable of architecting complex, high-performance networking solutions. These capabilities are fundamental to modern infrastructure including container orchestration, software-defined networking, and multi-tenant cloud environments.

Organizations should invest in comprehensive monitoring of network performance, routing behavior, and traffic control effectiveness to validate configuration correctness and troubleshoot issues efficiently. Regular testing of failover scenarios, bandwidth limiting, and routing policies ensures production readiness.

As networking continues evolving toward increasingly complex topologies—overlay networks, segment routing, network function virtualization—iproute2 remains the essential toolkit for implementing these technologies on Linux systems. Engineers mastering these fundamentals position themselves to build next-generation network infrastructure leveraging the full capabilities of the Linux networking stack.