Network Namespace Isolation Advanced Guide
Linux network namespaces provide complete network stack isolation, forming the foundation of container networking in Docker and Kubernetes. This guide covers creating network namespaces, connecting them with veth pairs, bridging namespaces to external networks, applying per-namespace firewall rules, and understanding how container networking is built on these primitives.
Prerequisites
- Ubuntu 20.04/22.04 or CentOS/Rocky Linux 8+
- Linux kernel 3.8+ (namespace support built-in)
- Root or sudo access
- iproute2 package (provides the ip netns commands)
- iptables or nftables
Create and Manage Namespaces
# Create a new network namespace
sudo ip netns add ns1
sudo ip netns add ns2
# List all network namespaces
ip netns list
# Run a command inside a namespace
sudo ip netns exec ns1 ip link list
# By default, only the loopback interface exists:
# 1: lo: <LOOPBACK> mtu 65536 ...
# Open a shell inside a namespace
sudo ip netns exec ns1 bash
# Inside the namespace, bring up loopback
ip link set lo up
ip addr # Shows only loopback
# Exit the namespace shell
exit
# Delete a namespace
sudo ip netns del ns1
Identify which namespace a process is in:
# Check the network namespace of process PID 1234
sudo ls -la /proc/1234/ns/net
# List all distinct network namespaces currently in use
sudo find /proc -maxdepth 3 -name net -type l 2>/dev/null | \
xargs -I{} readlink {} | sort -u
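Two processes are in the same network namespace exactly when their /proc/PID/ns/net symlinks resolve to the same inode. A minimal, unprivileged check against your own shell:

```shell
# The ns/net symlink encodes the namespace inode; equal inode = same namespace
SELF_NS=$(readlink /proc/self/ns/net)
echo "this shell is in: $SELF_NS"   # e.g. net:[4026531840]
```

Comparing this value against another process's ns/net link (root may be needed to read other processes) tells you whether the two share a network stack.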
Connect Namespaces with veth Pairs
A veth (virtual Ethernet) pair is a linked pair of interfaces — packets entering one end exit the other:
# Create a veth pair
sudo ip link add veth0 type veth peer name veth1
# Move one end into ns1
sudo ip link set veth1 netns ns1
# Configure addresses
# Host side (default namespace):
sudo ip addr add 10.0.0.1/24 dev veth0
sudo ip link set veth0 up
# Namespace side:
sudo ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
sudo ip netns exec ns1 ip link set veth1 up
sudo ip netns exec ns1 ip link set lo up
# Test connectivity
ping -c 3 10.0.0.2
sudo ip netns exec ns1 ping -c 3 10.0.0.1
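How does the kernel track which interfaces are peers? Every interface exposes its own index (ifindex) and its peer's index (iflink) in sysfs; for a veth end the two differ, while an ordinary interface points at itself. A sketch that lists both for every host interface (no root needed):

```shell
# List ifindex/iflink for each interface; veth ends show a differing iflink
for IF in /sys/class/net/*; do
    NAME=$(basename "$IF")
    printf '%s ifindex=%s iflink=%s\n' \
        "$NAME" "$(cat "$IF/ifindex")" "$(cat "$IF/iflink")"
done
```

This is also what the `@ifN` suffix in `ip link` output refers to: the peer's ifindex, possibly in another namespace.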
Connect two namespaces directly:
# Create ns1 and ns2, connect them directly
sudo ip netns add ns1
sudo ip netns add ns2
# Create a veth pair
sudo ip link add veth-ns1 type veth peer name veth-ns2
# Move each end to its namespace
sudo ip link set veth-ns1 netns ns1
sudo ip link set veth-ns2 netns ns2
# Configure both sides
sudo ip netns exec ns1 ip addr add 10.1.0.1/24 dev veth-ns1
sudo ip netns exec ns1 ip link set veth-ns1 up
sudo ip netns exec ns1 ip link set lo up
sudo ip netns exec ns2 ip addr add 10.1.0.2/24 dev veth-ns2
sudo ip netns exec ns2 ip link set veth-ns2 up
sudo ip netns exec ns2 ip link set lo up
# Test direct connectivity between namespaces
sudo ip netns exec ns1 ping -c 3 10.1.0.2
sudo ip netns exec ns2 ping -c 3 10.1.0.1
Bridge Connectivity
A bridge allows multiple namespaces to share a single L2 segment:
# Create a bridge in the default namespace
sudo ip link add br0 type bridge
sudo ip link set br0 up
sudo ip addr add 10.2.0.1/24 dev br0
# Create namespaces
sudo ip netns add container1
sudo ip netns add container2
sudo ip netns add container3
# Connect each namespace to the bridge via veth pairs
for i in 1 2 3; do
# Create veth pair
sudo ip link add veth-host$i type veth peer name veth-cont$i
# Host side: attach to bridge
sudo ip link set veth-host$i master br0
sudo ip link set veth-host$i up
# Container side: move to namespace and configure
sudo ip link set veth-cont$i netns container$i
sudo ip netns exec container$i ip addr add 10.2.0.$((i+1))/24 dev veth-cont$i
sudo ip netns exec container$i ip link set veth-cont$i up
sudo ip netns exec container$i ip link set lo up
# Add default gateway pointing to bridge
sudo ip netns exec container$i ip route add default via 10.2.0.1
done
# Verify all containers can reach the bridge
for i in 1 2 3; do
sudo ip netns exec container$i ping -c 2 10.2.0.1
done
# Test inter-container connectivity (container1 to container2)
sudo ip netns exec container1 ping -c 2 10.2.0.3
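Multi-step setups like the loop above are easy to leave half-applied when one command fails. One pattern is to wrap the per-port commands in a function whose runner can be swapped for echo, so you can review the exact command sequence before executing anything. A sketch (names match the bridge example; RUN is a knob introduced here, not a standard tool):

```shell
# Dry-run wrapper: RUN=echo (the default here) prints commands instead of
# running them; set RUN=sudo to actually execute.
RUN=${RUN:-echo}
attach_port() {
    i=$1
    $RUN ip link add "veth-host$i" type veth peer name "veth-cont$i"
    $RUN ip link set "veth-host$i" master br0
    $RUN ip link set "veth-host$i" up
    $RUN ip link set "veth-cont$i" netns "container$i"
    $RUN ip netns exec "container$i" ip addr add "10.2.0.$((i+1))/24" dev "veth-cont$i"
    $RUN ip netns exec "container$i" ip link set "veth-cont$i" up
    $RUN ip netns exec "container$i" ip link set lo up
    $RUN ip netns exec "container$i" ip route add default via 10.2.0.1
}
attach_port 1   # with RUN=echo, prints the command sequence for container1
```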
Routing Between Namespaces
Route packets between isolated namespaces using IP forwarding:
# Enable IP forwarding in the default namespace (the router)
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
# Example: route between two isolated subnet namespaces
# ns-subnet1: 10.10.1.0/24
# ns-subnet2: 10.10.2.0/24
# Default namespace acts as router
sudo ip netns add ns-subnet1
sudo ip netns add ns-subnet2
# Create connections: default ns <-> ns-subnet1
sudo ip link add veth-r1 type veth peer name veth-s1
sudo ip link set veth-s1 netns ns-subnet1
sudo ip addr add 10.10.1.1/24 dev veth-r1
sudo ip link set veth-r1 up
sudo ip netns exec ns-subnet1 ip addr add 10.10.1.2/24 dev veth-s1
sudo ip netns exec ns-subnet1 ip link set veth-s1 up
sudo ip netns exec ns-subnet1 ip route add default via 10.10.1.1
# Create connections: default ns <-> ns-subnet2
sudo ip link add veth-r2 type veth peer name veth-s2
sudo ip link set veth-s2 netns ns-subnet2
sudo ip addr add 10.10.2.1/24 dev veth-r2
sudo ip link set veth-r2 up
sudo ip netns exec ns-subnet2 ip addr add 10.10.2.2/24 dev veth-s2
sudo ip netns exec ns-subnet2 ip link set veth-s2 up
sudo ip netns exec ns-subnet2 ip route add default via 10.10.2.1
# Test routing between the two subnets (goes through default namespace)
sudo ip netns exec ns-subnet1 ping -c 3 10.10.2.2
sudo ip netns exec ns-subnet2 ping -c 3 10.10.1.2
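To confirm which path the kernel will actually choose for a destination, ip route get evaluates the routing decision without sending any traffic. It works unprivileged in the default namespace; inside the namespaces above you would prefix it with ip netns exec:

```shell
# Show the routing decision for a destination without sending packets
ip route get 127.0.0.1
# For the topology in this section (requires the setup above):
#   sudo ip netns exec ns-subnet1 ip route get 10.10.2.2
# should report the route going via 10.10.1.1
```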
External Network Access with NAT
# Allow namespace containers to reach the internet via NAT
# Enable IP forwarding
sudo sysctl -w net.ipv4.ip_forward=1
# Add MASQUERADE rule for traffic from namespace subnet
# Replace eth0 with your host's external interface
sudo iptables -t nat -A POSTROUTING -s 10.2.0.0/24 -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i br0 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# Test internet access from a container namespace
sudo ip netns exec container1 ping -c 3 8.8.8.8
# Give the namespace its own DNS resolver. Note: redirecting into
# /etc/resolv.conf from inside "ip netns exec" would overwrite the HOST's
# file, because network namespaces do not isolate the filesystem. Instead,
# create /etc/netns/<name>/resolv.conf, which ip netns exec bind-mounts
# over /etc/resolv.conf automatically.
sudo mkdir -p /etc/netns/container1
echo 'nameserver 8.8.8.8' | sudo tee /etc/netns/container1/resolv.conf
sudo ip netns exec container1 ping -c 3 google.com
Per-Namespace Firewall Rules
# iptables rules apply per-namespace
# Run iptables inside the namespace to set namespace-specific rules
# Drop all forwarded traffic in the container2 namespace
# (the FORWARD chain only matters if the namespace itself routes packets)
sudo ip netns exec container2 iptables -P FORWARD DROP
# Allow only HTTP/HTTPS from container1 namespace
sudo ip netns exec container1 iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
sudo ip netns exec container1 iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
sudo ip netns exec container1 iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
sudo ip netns exec container1 iptables -P OUTPUT DROP
# Port forwarding: expose container's service to host
# Forward host port 8080 to container1's port 80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT \
--to-destination 10.2.0.2:80
sudo iptables -A FORWARD -p tcp -d 10.2.0.2 --dport 80 \
-m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
# Verify rules inside a namespace
sudo ip netns exec container1 iptables -L -n -v
Container Networking Internals
Understanding how Docker and containerd use namespaces:
# Find the PID of a running Docker container
CONTAINER_ID=$(docker ps -q --filter name=my-container)
PID=$(docker inspect --format '{{.State.Pid}}' $CONTAINER_ID)
echo "Container PID: $PID"
# Access the container's network namespace via its PID
sudo nsenter -t $PID -n ip addr
sudo nsenter -t $PID -n ip route
# View the veth pair connecting the container to the docker bridge
# Inside container:
sudo nsenter -t $PID -n ip link show
# Outside on host:
ip link show | grep "veth"
# Inspect Docker's bridge network
ip addr show docker0
bridge fdb show dev docker0
# Find which host veth connects to a container: the container's eth0
# records its peer's ifindex in iflink
PEER_IDX=$(sudo nsenter -t $PID -n cat /sys/class/net/eth0/iflink)
ip -o link | grep "^${PEER_IDX}:"
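A reusable way to map a container PID to its host-side veth reads the container interface's iflink attribute (the peer's ifindex) and matches it against the host's interface list. host_veth_for_pid is a name chosen here, not a Docker tool, and it assumes the container's primary interface is eth0:

```shell
# Print the host-side veth name for a container PID (helper name is ours)
host_veth_for_pid() {
    pid=$1
    # iflink inside the container = ifindex of the host-side peer
    idx=$(sudo nsenter -t "$pid" -n cat /sys/class/net/eth0/iflink) || return 1
    # Match that index in the host's one-line interface list, strip "@ifN"
    ip -o link | awk -F': ' -v i="$idx" '$1 == i { sub(/@.*/, "", $2); print $2 }'
}
# Usage: host_veth_for_pid "$PID"
```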
Troubleshooting
Namespace not isolated (seeing host routes):
# Verify you're inside the namespace
sudo ip netns exec ns1 ip link
# Should only show lo and the veth interface
# Check /proc for namespace verification
sudo ip netns exec ns1 readlink /proc/self/ns/net
# Should differ from the host's:
readlink /proc/1/ns/net
Ping fails between namespaces:
# Check IP forwarding
sysctl net.ipv4.ip_forward
# Must be 1
# Check routes in the source namespace
sudo ip netns exec ns1 ip route show
# Check iptables isn't blocking FORWARD
sudo iptables -L FORWARD -n
# Look for DROP rules
# Trace with tcpdump on the veth interface
sudo tcpdump -i veth-host1 -n icmp
veth pair exists but link is down:
# Both ends must be up
sudo ip link set veth0 up
sudo ip netns exec ns1 ip link set veth1 up
# Verify link state
ip link show veth0 | grep "state"
# Should show "state UP"
Conclusion
Linux network namespaces are the fundamental building block of container networking, providing complete network stack isolation with controlled connectivity via veth pairs, bridges, and iptables. Understanding these primitives helps you debug container networking issues, design custom CNI configurations, and build secure multi-tenant network architectures. Every Docker and Kubernetes pod uses these exact mechanisms under the hood.