VxLAN Overlay Network Configuration

VxLAN (Virtual Extensible LAN) extends Layer 2 networks over Layer 3 infrastructure by encapsulating Ethernet frames in UDP packets. This guide covers configuring VxLAN overlay networks on Linux, setting up VTEPs, using multicast and unicast modes, integrating with Linux bridges, and troubleshooting encapsulation issues.

Prerequisites

  • Ubuntu 20.04/22.04 or CentOS/Rocky Linux 8+
  • Linux kernel 3.7+ (VxLAN support is built-in on modern kernels)
  • Root or sudo access
  • Two or more Linux hosts with L3 connectivity between them
  • For multicast: a multicast-capable network between hosts
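
Before starting, it is worth confirming the kernel actually ships VxLAN support. A quick check (on kernels with built-in VxLAN, modinfo may print nothing, but the feature still works):

```shell
# Loadable-module kernels: print VxLAN module info
modinfo vxlan | head -n 3
# Load the module if it is not already present
sudo modprobe vxlan
```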

VxLAN Concepts

Key terms:

  • VNI (VxLAN Network Identifier): 24-bit segment ID (like a VLAN ID, but 16 million possible values)
  • VTEP (VxLAN Tunnel Endpoint): The Linux interface that performs encapsulation/decapsulation
  • Underlay network: The L3 network carrying VxLAN UDP packets (port 4789)
  • Overlay network: The L2 network created by VxLAN across the underlay

Host A                         Host B
┌─────────────────────┐        ┌─────────────────────┐
│ VM (10.0.100.1)     │        │ VM (10.0.100.2)     │
│      │              │        │      │              │
│  Bridge (br0)       │        │  Bridge (br0)       │
│      │              │        │      │              │
│  VTEP (vxlan100)    │        │  VTEP (vxlan100)    │
│      │              │        │      │              │
│  eth0 (192.168.1.1) │──UDP──▶│  eth0 (192.168.1.2) │
└─────────────────────┘        └─────────────────────┘
       Underlay: 192.168.1.0/24
       Overlay VNI 100: 10.0.100.0/24

Create a VxLAN Interface

# Create a VxLAN interface with VNI 100 (basic, no remote endpoint yet)
sudo ip link add vxlan100 type vxlan \
  id 100 \
  dstport 4789 \
  local 192.168.1.1     # Local underlay IP (your server's IP)

# Bring it up and assign an overlay IP
sudo ip link set vxlan100 up
sudo ip addr add 10.0.100.1/24 dev vxlan100

# Verify the interface
ip link show vxlan100
ip addr show vxlan100

# View VxLAN details
ip -d link show vxlan100
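
Note that most VxLAN attributes (VNI, local IP, destination port) generally cannot be changed on a live interface; to alter them, delete the interface and recreate it. A sketch (overlay traffic is interrupted while the interface is gone):

```shell
# Tear down and recreate with the desired settings
sudo ip link del vxlan100
sudo ip link add vxlan100 type vxlan id 100 dstport 4789 local 192.168.1.1
sudo ip link set vxlan100 up
```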

Unicast VxLAN (Point-to-Point)

In unicast mode, you manually specify the remote VTEP IP:

Host A (192.168.1.1):

# Create VxLAN tunnel pointing to Host B
sudo ip link add vxlan100 type vxlan \
  id 100 \
  dstport 4789 \
  local 192.168.1.1 \
  remote 192.168.1.2    # Host B's underlay IP

sudo ip link set vxlan100 up
sudo ip addr add 10.0.100.1/24 dev vxlan100

# Verify
ip -d link show vxlan100

Host B (192.168.1.2):

sudo ip link add vxlan100 type vxlan \
  id 100 \
  dstport 4789 \
  local 192.168.1.2 \
  remote 192.168.1.1    # Host A's underlay IP

sudo ip link set vxlan100 up
sudo ip addr add 10.0.100.2/24 dev vxlan100

Test connectivity:

# From Host A:
ping 10.0.100.2    # Should reach Host B's overlay IP

# Verify VxLAN encapsulation with tcpdump (on underlay interface)
sudo tcpdump -i eth0 -n udp port 4789 -v
# You should see UDP packets with VxLAN header (inner Ethernet frames)
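
Once the ping succeeds, the VxLAN interface should have learned the remote MAC. A quick check (the MAC below is only an illustration):

```shell
# A learned entry maps the remote MAC to its VTEP's underlay IP, e.g.:
#   aa:bb:cc:dd:ee:ff dev vxlan100 dst 192.168.1.2 self
bridge fdb show dev vxlan100
```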

Multicast VxLAN

Multicast mode learns remote VTEPs automatically via multicast group:

Both hosts:

# Create VxLAN in multicast mode: all VTEPs in VNI 100 join the same group.
# Use a 239.x.x.x group (organization-local scope). Set "local" to each
# host's own underlay IP (192.168.1.1 on Host A, 192.168.1.2 on Host B).
sudo ip link add vxlan100 type vxlan \
  id 100 \
  dstport 4789 \
  local 192.168.1.1 \
  group 239.1.1.100 \
  dev eth0              # Underlay interface for multicast traffic

sudo ip link set vxlan100 up
sudo ip addr add 10.0.100.1/24 dev vxlan100   # 10.0.100.2/24 on Host B

# The kernel joins the multicast group automatically when vxlan100 comes up;
# no manual "ip maddr add" is needed. Verify membership on the underlay:
ip maddr show dev eth0 | grep 239.1.1.100

# Alternative view of multicast group membership
netstat -gn | grep 239.1.1.100
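
Since the "local" address differs per host, the create command can be parameterized instead of edited on each machine. A sketch, assuming eth0 is the underlay interface and carries a single IPv4 address:

```shell
# Derive this host's underlay IP from eth0
LOCAL_IP=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)
sudo ip link add vxlan100 type vxlan id 100 dstport 4789 \
  local "$LOCAL_IP" group 239.1.1.100 dev eth0
sudo ip link set vxlan100 up
```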

Bridge Integration

Connect VMs or containers to the VxLAN overlay via a Linux bridge:

# Create the bridge
sudo ip link add br-vxlan100 type bridge
sudo ip link set br-vxlan100 up

# Attach the VxLAN interface to the bridge
# Remove the IP from vxlan100 — the bridge gets the IP instead
sudo ip addr del 10.0.100.1/24 dev vxlan100 2>/dev/null
sudo ip link set vxlan100 master br-vxlan100

# Assign IP to the bridge
sudo ip addr add 10.0.100.1/24 dev br-vxlan100

# Attach a veth pair (simulating a VM or container)
sudo ip link add veth0 type veth peer name veth0-peer

# Put one end in the bridge, the other in a network namespace (simulates a VM)
sudo ip link set veth0 master br-vxlan100
sudo ip link set veth0 up

# Simulate a container with a network namespace
sudo ip netns add vm1
sudo ip link set veth0-peer netns vm1
sudo ip netns exec vm1 ip addr add 10.0.100.10/24 dev veth0-peer
sudo ip netns exec vm1 ip link set veth0-peer up

# Test from the simulated VM
sudo ip netns exec vm1 ping 10.0.100.2  # Reach Host B's overlay
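
To sanity-check the bridge plumbing after wiring everything up:

```shell
# List all bridge ports and their state
bridge link show
# Show only the interfaces enslaved to br-vxlan100 (expect vxlan100 and veth0)
ip link show master br-vxlan100
```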

Static FDB (Forwarding Database) Entries

For unicast mode with multiple VTEPs, static FDB entries tell the kernel which remote VTEP each destination MAC sits behind:

# Add a static FDB entry: MAC aa:bb:cc:dd:ee:ff is behind VTEP 192.168.1.3
sudo bridge fdb add aa:bb:cc:dd:ee:ff dev vxlan100 dst 192.168.1.3 via eth0

# Add an all-zeros "flood" entry: broadcast, unknown-unicast, and multicast
# (BUM) traffic is replicated to this VTEP. Creating the interface with
# "remote" (point-to-point mode) adds this entry automatically.
sudo bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.168.1.2

# For multi-VTEP unicast (replace multicast):
# Add all remote VTEPs as flood destinations
sudo bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.168.1.2
sudo bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.168.1.3

# View the FDB table
bridge fdb show dev vxlan100

# Remove an entry
sudo bridge fdb del aa:bb:cc:dd:ee:ff dev vxlan100 dst 192.168.1.3
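
With more than two VTEPs in unicast mode, one flood entry per remote peer is needed, and that bookkeeping is easy to script. A sketch, assuming a hypothetical peer list containing every VTEP's underlay IP except this host's:

```shell
# Hypothetical remote VTEP underlay IPs (adjust for your topology)
PEERS="192.168.1.2 192.168.1.3 192.168.1.4"
for peer in $PEERS; do
  sudo bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst "$peer"
done
# Confirm one flood entry per peer
bridge fdb show dev vxlan100 | grep 00:00:00:00:00:00
```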

Persistent VxLAN with systemd-networkd

# /etc/systemd/network/20-vxlan100.netdev
sudo tee /etc/systemd/network/20-vxlan100.netdev > /dev/null <<'EOF'
[NetDev]
Name=vxlan100
Kind=vxlan

[VXLAN]
VNI=100
Remote=192.168.1.2
Local=192.168.1.1
DestinationPort=4789
# Create the device standalone; alternatively, stack it on the underlay by
# adding VXLAN=vxlan100 to the underlay interface's .network file
Independent=true
EOF

# /etc/systemd/network/21-vxlan100.network
# Enslave vxlan100 to the bridge; the bridge (configured below) carries the
# overlay IP, so no address is assigned here
sudo tee /etc/systemd/network/21-vxlan100.network > /dev/null <<'EOF'
[Match]
Name=vxlan100

[Network]
Bridge=br-overlay
EOF

# /etc/systemd/network/22-bridge-vxlan.netdev
sudo tee /etc/systemd/network/22-bridge-vxlan.netdev > /dev/null <<'EOF'
[NetDev]
Name=br-overlay
Kind=bridge
EOF

# /etc/systemd/network/23-bridge-vxlan.network
sudo tee /etc/systemd/network/23-bridge-vxlan.network > /dev/null <<'EOF'
[Match]
Name=br-overlay

[Network]
Address=10.0.100.1/24
ConfigureWithoutCarrier=true
EOF

sudo systemctl enable systemd-networkd
sudo systemctl restart systemd-networkd

# Verify
networkctl status vxlan100

Troubleshooting

VxLAN interface created but no ping:

# Check the underlay is working first
ping 192.168.1.2   # Must succeed before overlay works

# Verify VxLAN traffic is being sent
sudo tcpdump -i eth0 -n udp port 4789 -c 10

# Check firewall allows UDP 4789
sudo ufw status
sudo iptables -L INPUT -n | grep 4789

# UFW: allow VxLAN
sudo ufw allow 4789/udp
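
If the hosts run CentOS/Rocky with firewalld rather than UFW, the equivalent opening is:

```shell
# firewalld: permanently allow the VxLAN UDP port, then reload rules
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload
```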

Tunnel works but FDB not learning MACs:

# Check FDB entries are being created
watch -n 2 "bridge fdb show dev vxlan100"

# Learning is enabled by default; check whether the interface was created
# with "nolearning" (if so, delete and recreate it with learning on)
ip -d link show vxlan100 | grep nolearning

# For the bridge: ensure MAC learning is enabled on the VxLAN bridge port
sudo bridge link set dev vxlan100 learning on

MTU issues causing packet fragmentation:

# VxLAN adds ~50 bytes overhead to each packet
# If underlay MTU is 1500, set overlay MTU to 1450
sudo ip link set vxlan100 mtu 1450
sudo ip link set br-vxlan100 mtu 1450

# Verify MTU
ip link show vxlan100 | grep mtu
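
The ~50-byte figure breaks down as inner Ethernet (14) + VxLAN header (8) + UDP (8) + outer IPv4 (20). To confirm that full-size overlay frames traverse without fragmentation, send a don't-fragment ping sized to fill the 1450-byte MTU (1450 minus 20 IP and 8 ICMP header bytes = 1422 bytes of payload):

```shell
# -M do sets the don't-fragment bit; failures indicate an MTU mismatch
ping -M do -s 1422 -c 3 10.0.100.2
```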

Multicast not working:

# Verify a route for the multicast range exists on the underlay
ip route show | grep ^239

# Add a multicast route if missing
sudo ip route add 239.0.0.0/8 dev eth0

# Check multicast is not blocked at the network level
sudo tcpdump -i eth0 -n host 239.1.1.100

Conclusion

VxLAN provides a scalable Layer 2 extension mechanism for datacenter and cloud environments, allowing VMs and containers on different physical hosts to communicate as if they were on the same Ethernet segment. Unicast mode with static FDB entries works well for small deployments, while multicast or a control plane (such as BGP EVPN via FRRouting) enables dynamic, scalable VTEP discovery for larger environments.