Network Bonding and Teaming on Linux

Network bonding and teaming combine multiple physical network interfaces into a single logical interface for redundancy and increased throughput. This guide covers configuring bonding modes (active-backup, LACP 802.3ad, and round-robin), NetworkManager teaming, and testing failover behavior on Ubuntu and CentOS/Rocky Linux.

Prerequisites

  • Ubuntu 20.04/22.04 or CentOS/Rocky Linux 8+
  • Two or more network interfaces (e.g., eth0, eth1 or ens3, ens4)
  • Root or sudo access
  • For LACP: a managed switch that supports 802.3ad link aggregation

Understanding Bonding Modes

Mode  Name            Description                                  Switch Required
0     balance-rr      Round-robin: transmit packets in order       No
1     active-backup   One active NIC; others are standby           No
2     balance-xor     XOR of source/dest MAC for load balancing    No
3     broadcast       Transmit on all NICs simultaneously          No
4     802.3ad (LACP)  IEEE 802.3ad dynamic link aggregation        Yes (LACP)
5     balance-tlb     Adaptive transmit load balancing             No
6     balance-alb     Adaptive load balancing (TX + RX)            No

Most common choices:

  • Mode 1 (active-backup): Simple failover; works on any switch
  • Mode 4 (802.3ad): True link aggregation; requires LACP-capable switch
  • Mode 6 (balance-alb): Software-only load balancing; no special switch needed
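For quick reference at the command line, the numeric and symbolic mode names can be translated with a small helper. This is just a sketch mirroring the table above; on a live system, sysfs reports both forms for an existing bond:

```shell
#!/bin/sh
# Translate a numeric bonding mode to its kernel name (mirrors the table above).
mode_name() {
  case "$1" in
    0) echo "balance-rr" ;;
    1) echo "active-backup" ;;
    2) echo "balance-xor" ;;
    3) echo "broadcast" ;;
    4) echo "802.3ad" ;;
    5) echo "balance-tlb" ;;
    6) echo "balance-alb" ;;
    *) echo "unknown" ;;
  esac
}

mode_name 4    # prints: 802.3ad

# On a live bond, the kernel reports both forms on one line, e.g. "active-backup 1":
# cat /sys/class/net/bond0/bonding/mode
```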

Kernel Bonding Module Setup

# Load the bonding module
sudo modprobe bonding

# Verify it loaded
lsmod | grep bonding

# Make it persistent
echo "bonding" | sudo tee /etc/modules-load.d/bonding.conf

Configure Bonding with Netplan (Ubuntu)

Ubuntu 18.04+ uses Netplan for network configuration:

# Move the existing config out of /etc/netplan so it doesn't conflict
# (netplan merges every *.yaml file in the directory)
sudo mv /etc/netplan/00-installer-config.yaml ~/00-installer-config.yaml.bak

# Find your interface names
ip link show

Active-backup bonding:

# /etc/netplan/00-bonding.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
    eth1:
      dhcp4: false
  bonds:
    bond0:
      interfaces:
        - eth0
        - eth1
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
      parameters:
        mode: active-backup
        primary: eth0
        mii-monitor-interval: 100    # Check link every 100ms
        fail-over-mac-policy: active # Use active slave's MAC

# Apply the configuration
sudo netplan apply

# Verify the bond is active
cat /proc/net/bonding/bond0
ip addr show bond0
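The status file is easy to script for monitoring. A minimal parsing sketch, run here against a captured sample of the file's format (on a live system you would point it at /proc/net/bonding/bond0 itself):

```shell
#!/bin/sh
# Extract bonding mode, bond MII status, and active slave from bonding status text.
bond_summary() {
  awk -F': ' '
    /^Bonding Mode:/            { print "mode="   $2 }
    /^MII Status:/ && !mii++    { print "mii="    $2 }   # first MII line = the bond itself
    /^Currently Active Slave:/  { print "active=" $2 }
  ' "$1"
}

# Abridged sample standing in for /proc/net/bonding/bond0:
cat > /tmp/bond0.sample <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth0
MII Status: up
EOF

bond_summary /tmp/bond0.sample
# mode=fault-tolerance (active-backup)
# active=eth0
# mii=up
```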

Configure Bonding with NetworkManager

# Create the bond master interface
sudo nmcli connection add \
  type bond \
  con-name bond0 \
  ifname bond0 \
  bond.options "mode=active-backup,miimon=100,primary=eth0"

# Assign a static IP to the bond
sudo nmcli connection modify bond0 \
  ipv4.method manual \
  ipv4.addresses 192.168.1.100/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns "8.8.8.8,8.8.4.4"

# Add slave interfaces to the bond
sudo nmcli connection add \
  type ethernet \
  con-name bond0-slave-eth0 \
  ifname eth0 \
  master bond0

sudo nmcli connection add \
  type ethernet \
  con-name bond0-slave-eth1 \
  ifname eth1 \
  master bond0

# Bring up the bond
sudo nmcli connection up bond0
sudo nmcli connection up bond0-slave-eth0
sudo nmcli connection up bond0-slave-eth1

# Verify
nmcli device status
cat /proc/net/bonding/bond0

Configure Bonding on CentOS/Rocky Linux

# Create the bond interface config
sudo tee /etc/sysconfig/network-scripts/ifcfg-bond0 > /dev/null <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
BONDING_OPTS="mode=active-backup miimon=100 primary=eth0"
EOF

# Configure the first slave
sudo tee /etc/sysconfig/network-scripts/ifcfg-eth0 > /dev/null <<'EOF'
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF

# Configure the second slave
sudo tee /etc/sysconfig/network-scripts/ifcfg-eth1 > /dev/null <<'EOF'
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF

# Tell NetworkManager to pick up the new ifcfg files, then restart networking
sudo nmcli connection reload
sudo systemctl restart NetworkManager
# or bring the bond up directly:
sudo ifup bond0

LACP (802.3ad) Bonding

LACP requires switch support. Enable link aggregation (port channel/LAG) on the switch first:

# /etc/netplan/00-lacp.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
    eth1:
      dhcp4: false
  bonds:
    bond0:
      interfaces:
        - eth0
        - eth1
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      parameters:
        mode: 802.3ad
        lacp-rate: fast           # Send LACPDU every 1s (vs 30s for slow)
        mii-monitor-interval: 100
        transmit-hash-policy: layer3+4   # Hash on IP+port for better distribution
        ad-select: bandwidth      # Select aggregation based on bandwidth

# Apply the configuration
sudo netplan apply

# Verify LACP is negotiating
cat /proc/net/bonding/bond0
# Should show an "802.3ad info" section, actor/partner LACPDU details,
# and matching Aggregator IDs on both slave interfaces

# Check LACP status on Cisco switch:
# show etherchannel summary
# show lacp neighbor
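Whether the switch is actually answering with LACPDUs shows up in the partner MAC address: all zeros means nothing is arriving. A small check, sketched here against a captured sample of the 802.3ad section (on a live system, point it at /proc/net/bonding/bond0):

```shell
#!/bin/sh
# Return 0 if the LACP partner MAC is non-zero (i.e., the switch is negotiating).
lacp_partner_ok() {
  # "Partner Mac Address: 00:00:00:00:00:00" means no LACPDUs received
  ! grep -qi 'partner mac address: 00:00:00:00:00:00' "$1"
}

# Abridged sample of the 802.3ad section:
cat > /tmp/lacp.sample <<'EOF'
802.3ad info
LACP active: on
LACP rate: fast
Aggregator selection policy (ad_select): bandwidth
System MAC address: 52:54:00:12:34:56
Partner Mac Address: a4:6c:2a:aa:bb:cc
EOF

if lacp_partner_ok /tmp/lacp.sample; then
  echo "LACP partner detected"
else
  echo "no LACP partner (check switch config)"
fi
```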

Network Teaming with teamd

Teaming (teamd) is a user-space alternative to kernel bonding with more flexible JSON-based configuration, managed here through NetworkManager. Note that Red Hat deprecated teamd in RHEL 9, so prefer bonding for new deployments:

# Create the team master
sudo nmcli connection add \
  type team \
  con-name team0 \
  ifname team0 \
  team.config '{"runner":{"name":"activebackup"},"link_watch":{"name":"ethtool"}}'

# Assign IP to team0
sudo nmcli connection modify team0 \
  ipv4.method manual \
  ipv4.addresses 192.168.1.100/24 \
  ipv4.gateway 192.168.1.1

# Add ports to the team
sudo nmcli connection add \
  type team-slave \
  con-name team0-port1 \
  ifname eth0 \
  master team0

sudo nmcli connection add \
  type team-slave \
  con-name team0-port2 \
  ifname eth1 \
  master team0

# Bring up the team
sudo nmcli connection up team0
sudo nmcli connection up team0-port1
sudo nmcli connection up team0-port2

# Check team status
sudo teamdctl team0 state
sudo teamdctl team0 config dump
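The teamdctl state output can be scripted much like the bonding status file. A sketch that pulls out the active port, run against a captured sample of the activebackup runner's plain-text state dump (field layout assumed from teamd's output):

```shell
#!/bin/sh
# Pull the active port out of `teamdctl team0 state` output (activebackup runner).
active_port() {
  awk -F': ' '/active port:/ { print $2 }' "$1"
}

# Abridged sample of `teamdctl team0 state`:
cat > /tmp/team0.state <<'EOF'
setup:
  runner: activebackup
ports:
  eth0
    link watches:
      link summary: up
  eth1
    link watches:
      link summary: up
runner:
  active port: eth0
EOF

active_port /tmp/team0.state    # prints: eth0
```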

Failover Testing

# Start a continuous ping in the background
ping -i 0.2 8.8.8.8 &

# Check current active interface
grep "Currently Active Slave" /proc/net/bonding/bond0
# Output: Currently Active Slave: eth0

# Simulate a link failure by bringing down the active interface
sudo ip link set eth0 down

# Watch the bond failover (should take ~miimon ms)
grep "Currently Active Slave" /proc/net/bonding/bond0
# Output: Currently Active Slave: eth1

# With active-backup and miimon=100, the background ping should continue
# with at most a packet or two lost during the switchover

# Restore the interface
sudo ip link set eth0 up

# Check that eth0 rejoins as backup
cat /proc/net/bonding/bond0

# Kill the background ping
kill %1
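To put a number on failover quality, the loss percentage can be pulled out of ping's summary line. A helper sketch; the summary format assumed here is iputils ping, and other implementations may word it differently:

```shell
#!/bin/sh
# Extract the packet-loss percentage from an iputils ping summary line on stdin.
loss_pct() {
  # e.g. "50 packets transmitted, 49 received, 2% packet loss, time 9989ms"
  sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p'
}

# Typical usage during a failover test (needs a live network):
#   ping -i 0.2 -c 50 8.8.8.8 | loss_pct
echo "50 packets transmitted, 49 received, 2% packet loss, time 9989ms" | loss_pct
# prints: 2
```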

Troubleshooting

Bond not showing in ip link:

# Verify the bonding module is loaded
lsmod | grep bonding

# Check for Netplan/NetworkManager errors
sudo netplan --debug apply 2>&1
sudo journalctl -u NetworkManager -n 50

Slaves not joining the bond:

# Slaves must not have IP addresses configured directly
# Check for conflicting config
ip addr show eth0

# For NetworkManager, verify the slave connection references the master
nmcli connection show bond0-slave-eth0 | grep master

LACP not negotiating:

# Verify LACP is enabled on the switch port
# Check bond status for LACP partner info
grep -A 5 "Partner" /proc/net/bonding/bond0
# If partner shows all zeros, LACP is not enabled on the switch

# Temporarily test with mode=balance-xor to verify physical connectivity first

Traffic not load-balancing (mode 4):

# Check transmit hash policy
grep "Transmit Hash Policy" /proc/net/bonding/bond0

# layer2 (default): hashes on MAC addresses only — may not balance well
# layer3+4: hashes on IP + port — better distribution for varied flows
sudo nmcli connection modify bond0 bond.options "mode=802.3ad,xmit_hash_policy=layer3+4"
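Why layer3+4 spreads traffic better is easiest to see with a toy version of the idea: because the ports are hashed, different flows between the same two hosts can land on different slaves, whereas a MAC-only hash pins them all to one. This is a simplified illustration, not the kernel's exact hash formula:

```shell
#!/bin/sh
# Toy illustration of layer3+4 slave selection: hashing the ports lets flows
# between the same host pair use different slaves. NOT the kernel's real hash.
pick_slave() {  # args: src_port dst_port n_slaves
  echo $(( ($1 ^ $2) % $3 ))
}

# Two HTTPS flows from the same client to the same server, different source ports:
pick_slave 40001 443 2    # prints: 0
pick_slave 40002 443 2    # prints: 1
```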

Conclusion

Network bonding provides a straightforward path to redundancy (active-backup) or higher throughput (LACP) by aggregating multiple NICs into a single interface. Active-backup mode is the safest choice for simple failover without switch configuration, while LACP delivers true link aggregation when your infrastructure supports it. Test failover regularly to confirm the miimon interval meets your recovery time objectives.