Bandwidth Limiting with tc (Traffic Control): Complete Guide
Introduction
Traffic Control (tc) is Linux's powerful framework for managing, shaping, and controlling network bandwidth at the kernel level. As part of the Linux networking stack, tc provides sophisticated mechanisms for Quality of Service (QoS) implementation, bandwidth limiting, traffic prioritization, and network simulation. Whether you need to prevent a single application from consuming all available bandwidth, ensure VoIP calls receive priority over bulk downloads, simulate network conditions for testing, or implement fair bandwidth distribution across multiple users, tc offers the tools and flexibility required for comprehensive traffic management.
Unlike simple bandwidth limiting at the application level, tc operates at the network layer, making it transparent to applications and universally applicable to all network traffic. It can classify packets based on various criteria (source/destination IP, port, protocol, packet marks), assign them to different classes with distinct bandwidth allocations, and apply sophisticated queuing disciplines that determine how packets are transmitted. This granular control makes tc essential for managing congested network links, implementing service-level agreements (SLAs), optimizing network performance, and creating realistic testing environments.
This comprehensive guide covers tc from fundamental concepts to advanced configurations, including queuing disciplines (qdiscs), class-based traffic shaping, filtering mechanisms, practical bandwidth limiting scenarios, performance monitoring, and troubleshooting. Whether you're managing a single server's network usage or implementing complex QoS policies across infrastructure, mastering tc provides the capabilities needed for effective traffic management.
Understanding Traffic Control Concepts
tc Components
1. Queuing Disciplines (qdiscs)
Algorithms that determine how packets are queued and dequeued:
Classless qdiscs - simple, no traffic differentiation:
- pfifo_fast (traditional default; many modern distributions default to fq_codel)
- TBF (Token Bucket Filter)
- SFQ (Stochastic Fairness Queueing)
Classful qdiscs - support traffic classes:
- HTB (Hierarchical Token Bucket)
- CBQ (Class Based Queueing, legacy)
- PRIO (Priority qdisc)
- HFSC (Hierarchical Fair Service Curve)
2. Classes
Categories within classful qdiscs for traffic differentiation:
- Hierarchical structure
- Bandwidth guarantees and limits
- Priority levels
3. Filters
Rules for classifying packets into classes:
- Based on IP addresses, ports, protocols
- Using firewall marks (fwmark)
- u32 classifier for header matching
4. Actions
Operations performed on packets:
- Accept, drop, redirect
- Mark packets for further processing
How tc Works
Network Interface → root qdisc → filters → classes → leaf qdisc (queuing) → output
Packet flow:
- Packet arrives at network interface
- Root qdisc receives packet
- Filters classify packet into appropriate class
- Class's qdisc queues packet according to its rules
- Packet transmitted when bandwidth available
Egress vs Ingress
Egress (outbound):
- Full tc capabilities available
- Can shape, prioritize, and queue traffic
- Most common use case
Ingress (inbound):
- Limited shaping capabilities
- Typically used for policing (dropping excess traffic)
- Cannot queue packets effectively
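Since ingress handling is limited to policing, a typical inbound setup attaches the special ingress qdisc plus a police filter that drops excess traffic. The sketch below only assembles the commands (echoing rather than executing them) so they can be reviewed before running with root privileges; the interface name and rate are assumptions:

```shell
# Build (but do not run) an ingress policing setup.
IFACE="eth0"        # assumed interface
RATE="5mbit"        # assumed policing rate
CMDS="tc qdisc add dev $IFACE handle ffff: ingress
tc filter add dev $IFACE parent ffff: protocol ip u32 match u32 0 0 police rate $RATE burst 32k drop flowid :1"
echo "$CMDS"
```

Running the printed commands drops any inbound IP traffic exceeding 5 Mbit/s; the later ifb-based scenario shows how to shape (queue) rather than police inbound traffic.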
Prerequisites
Before implementing traffic control, ensure you have:
- Linux system with kernel 2.6+ (3.x+ recommended)
- Root or sudo privileges
- iproute2 package installed (provides tc command)
- Basic understanding of networking concepts
- Knowledge of your network topology and bandwidth limits
- Testing environment before production deployment
Installation
Debian/Ubuntu:
sudo apt update
sudo apt install iproute2 -y
RHEL/CentOS/Rocky Linux:
sudo dnf install iproute-tc -y
Verify installation:
tc -V
Identifying Network Interfaces
# List network interfaces
ip link show
# Check interface speed
ethtool eth0 | grep Speed
# Current qdisc on interface
tc qdisc show dev eth0
Basic tc Commands
Viewing Configuration
# Show qdiscs on all interfaces
tc qdisc show
# Show qdiscs on specific interface
tc qdisc show dev eth0
# Show classes
tc class show dev eth0
# Show filters
tc filter show dev eth0
# Show statistics
tc -s qdisc show dev eth0
tc -s class show dev eth0
Managing Qdiscs
# Add qdisc to interface
sudo tc qdisc add dev eth0 root handle 1: htb default 10
# Replace existing qdisc
sudo tc qdisc replace dev eth0 root handle 1: htb default 10
# Delete qdisc (removes all classes and filters)
sudo tc qdisc del dev eth0 root
# Change qdisc parameters
sudo tc qdisc change dev eth0 root handle 1: htb default 20
Managing Classes
# Add class
sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
# Change class parameters
sudo tc class change dev eth0 parent 1: classid 1:1 htb rate 20mbit
# Delete class
sudo tc class del dev eth0 classid 1:1
Managing Filters
# Add filter
sudo tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip dst 192.168.1.100 flowid 1:10
# Delete filter
sudo tc filter del dev eth0 parent 1:0 prio 1
# Delete all filters
sudo tc filter del dev eth0
Simple Bandwidth Limiting
Limit Total Interface Bandwidth (TBF)
Token Bucket Filter provides simple rate limiting:
# Limit eth0 to 10 Mbit/s
sudo tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms
# Parameters:
# rate - sustained bandwidth limit
# burst - maximum burst size
# latency - maximum packet delay
Understanding TBF parameters:
- rate: Target bandwidth (e.g., 10mbit, 1gbit)
- burst: Size of bucket in bytes (allows short bursts above rate)
- latency: Maximum time a packet can wait in queue
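Internally, TBF turns the latency parameter into a byte limit on the queue, roughly limit ≈ rate × latency + burst. A small sketch using the 10 Mbit/s example above (this is an estimate of the relationship, not tc's exact internal arithmetic; 32kbit of burst is about 4,000 bytes):

```shell
# Approximate the queue byte limit TBF derives from 'latency'.
RATE_BITS=10000000   # rate: 10 Mbit/s in bits per second
LATENCY_MS=400       # latency parameter in milliseconds
BURST_BYTES=4000     # burst bucket size (32kbit ≈ 4000 bytes)
LIMIT=$(( RATE_BITS / 8 * LATENCY_MS / 1000 + BURST_BYTES ))
echo "approx queue limit: ${LIMIT} bytes"
```

A larger latency therefore means a deeper queue: fewer drops, but more potential bufferbloat.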
Example with different rates:
# 1 Mbit/s limit
sudo tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
# 100 Mbit/s limit
sudo tc qdisc add dev eth0 root tbf rate 100mbit burst 128kbit latency 50ms
# Calculate minimum burst size:
# burst = rate in bytes per second / HZ (kernel timer frequency, typically 250)
# For 10mbit: burst = 10000000 / 8 / 250 = 5000 bytes ≈ 5 kbytes
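The rule of thumb above can be scripted. This is a minimal sketch using the same example numbers; HZ varies by kernel build, so check yours (e.g. `grep CONFIG_HZ= /boot/config-$(uname -r)`) if precision matters:

```shell
# Suggest a minimum TBF burst size: bytes released per timer tick.
RATE_BITS=10000000   # 10 Mbit/s in bits per second
HZ=250               # assumed kernel timer frequency
BURST=$(( RATE_BITS / 8 / HZ ))
echo "suggested burst: ${BURST} bytes"
```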
Remove Bandwidth Limit
# Delete qdisc to remove limit
sudo tc qdisc del dev eth0 root
# Verify removal
tc qdisc show dev eth0
# Should show the system default (pfifo_fast, or fq_codel on modern distributions)
HTB (Hierarchical Token Bucket)
HTB is the most popular and flexible traffic shaping method, offering:
- Bandwidth guarantees and limits
- Hierarchical class structure
- Borrowing from parent classes
- Priority levels
Basic HTB Configuration
#!/bin/bash
# Simple HTB setup limiting total bandwidth to 10 Mbit/s
INTERFACE="eth0"
TOTAL_BANDWIDTH="10mbit"
# Delete existing qdisc
tc qdisc del dev $INTERFACE root 2>/dev/null
# Add root HTB qdisc
tc qdisc add dev $INTERFACE root handle 1: htb default 10
# Add root class with total bandwidth
tc class add dev $INTERFACE parent 1: classid 1:1 htb rate $TOTAL_BANDWIDTH
# Add default class
tc class add dev $INTERFACE parent 1:1 classid 1:10 htb rate $TOTAL_BANDWIDTH
echo "Bandwidth limited to $TOTAL_BANDWIDTH on $INTERFACE"
Multi-Class HTB Configuration
#!/bin/bash
# HTB with traffic classes
INTERFACE="eth0"
TOTAL_BW="10mbit"
# Clear existing
tc qdisc del dev $INTERFACE root 2>/dev/null
# Root qdisc
tc qdisc add dev $INTERFACE root handle 1: htb default 30
# Root class - total bandwidth
tc class add dev $INTERFACE parent 1: classid 1:1 htb rate $TOTAL_BW
# High priority class - 5 Mbit/s guaranteed, can borrow up to 8 Mbit/s
tc class add dev $INTERFACE parent 1:1 classid 1:10 htb rate 5mbit ceil 8mbit prio 1
# Medium priority - 3 Mbit/s guaranteed
tc class add dev $INTERFACE parent 1:1 classid 1:20 htb rate 3mbit ceil 7mbit prio 2
# Low priority - 2 Mbit/s guaranteed
tc class add dev $INTERFACE parent 1:1 classid 1:30 htb rate 2mbit ceil 10mbit prio 3
# Add fair queuing to each class
tc qdisc add dev $INTERFACE parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $INTERFACE parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $INTERFACE parent 1:30 handle 30: sfq perturb 10
echo "HTB classes configured on $INTERFACE"
HTB class parameters:
- rate: Guaranteed bandwidth
- ceil: Maximum bandwidth (ceiling)
- prio: Priority level (lower number = higher priority)
- burst: Burst size for rate
- cburst: Burst size for ceil
Adding Filters to HTB Classes
Filter by destination port:
# HTTP traffic to high priority class
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip dport 80 0xffff flowid 1:10
# HTTPS traffic to high priority class
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip dport 443 0xffff flowid 1:10
# SSH traffic to medium priority
tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 \
match ip dport 22 0xffff flowid 1:20
Filter by destination IP:
# Traffic to specific server
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip dst 192.168.1.100 flowid 1:10
# Traffic to subnet
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip dst 192.168.1.0/24 flowid 1:20
Filter by source IP:
# Limit specific client
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip src 192.168.1.50 flowid 1:30
Filter by protocol:
# ICMP traffic
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
match ip protocol 1 0xff flowid 1:20
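When reading `tc filter show` output, u32 matches appear as raw hex words rather than port numbers. A tiny helper (the name `port_hex` is hypothetical) converts a port to the 16-bit hex form shown in that output, which makes existing filters easier to audit:

```shell
# Convert a TCP/UDP port to the 4-digit hex value seen in u32 match output.
port_hex() { printf '%04x\n' "$1"; }
port_hex 80    # 0050
port_hex 443   # 01bb
port_hex 5060  # 13c4
```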
Practical Bandwidth Limiting Scenarios
Scenario 1: Limit Upload Speed
#!/bin/bash
# Limit upload speed to 1 Mbit/s
INTERFACE="eth0"
UPLOAD_LIMIT="1mbit"
tc qdisc del dev $INTERFACE root 2>/dev/null
tc qdisc add dev $INTERFACE root tbf rate $UPLOAD_LIMIT burst 32kbit latency 400ms
echo "Upload limited to $UPLOAD_LIMIT on $INTERFACE"
Scenario 2: Prioritize VoIP Traffic
#!/bin/bash
# Ensure VoIP gets priority and bandwidth
INTERFACE="eth0"
TOTAL_BW="10mbit"
# Filters below match SIP signaling on port 5060; RTP media streams
# (commonly ports 10000-20000) would need additional filters
tc qdisc del dev $INTERFACE root 2>/dev/null
# Root qdisc
tc qdisc add dev $INTERFACE root handle 1: htb default 20
# Root class
tc class add dev $INTERFACE parent 1: classid 1:1 htb rate $TOTAL_BW
# VoIP class - high priority, 3 Mbit/s guaranteed
tc class add dev $INTERFACE parent 1:1 classid 1:10 htb rate 3mbit ceil 5mbit prio 0
# Other traffic - lower priority
tc class add dev $INTERFACE parent 1:1 classid 1:20 htb rate 7mbit ceil 10mbit prio 5
# Add SFQ to classes
tc qdisc add dev $INTERFACE parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $INTERFACE parent 1:20 handle 20: sfq perturb 10
# Filter VoIP traffic to high priority class
tc filter add dev $INTERFACE protocol ip parent 1:0 prio 1 u32 \
match ip dport 5060 0xffff flowid 1:10
tc filter add dev $INTERFACE protocol ip parent 1:0 prio 1 u32 \
match ip sport 5060 0xffff flowid 1:10
echo "VoIP traffic prioritized on $INTERFACE"
Scenario 3: Fair Bandwidth Distribution
#!/bin/bash
# Distribute bandwidth fairly among multiple users/IPs
INTERFACE="eth0"
TOTAL_BW="100mbit"
PER_USER_BW="10mbit"
# List of user IPs
USERS=("192.168.1.10" "192.168.1.11" "192.168.1.12" "192.168.1.13" "192.168.1.14")
tc qdisc del dev $INTERFACE root 2>/dev/null
# Root qdisc
tc qdisc add dev $INTERFACE root handle 1: htb default 99
# Root class
tc class add dev $INTERFACE parent 1: classid 1:1 htb rate $TOTAL_BW
# Create class for each user
CLASSID=10
for USER_IP in "${USERS[@]}"; do
tc class add dev $INTERFACE parent 1:1 classid 1:$CLASSID htb \
rate $PER_USER_BW ceil $TOTAL_BW
# Add SFQ
tc qdisc add dev $INTERFACE parent 1:$CLASSID handle $CLASSID: sfq perturb 10
# Filter by source IP
tc filter add dev $INTERFACE protocol ip parent 1:0 prio 1 u32 \
match ip src $USER_IP flowid 1:$CLASSID
echo "Class 1:$CLASSID created for $USER_IP with $PER_USER_BW"
CLASSID=$((CLASSID + 1))
done
# Default class for other traffic
tc class add dev $INTERFACE parent 1:1 classid 1:99 htb rate 10mbit
tc qdisc add dev $INTERFACE parent 1:99 handle 99: sfq perturb 10
echo "Fair bandwidth distribution configured"
Scenario 4: Limit Download Speed (Ingress)
#!/bin/bash
# Limit incoming traffic (download)
INTERFACE="eth0"
DOWNLOAD_LIMIT="5mbit"
# Create ifb device for ingress shaping
modprobe ifb
ip link set dev ifb0 up
# Redirect ingress to ifb0
tc qdisc del dev $INTERFACE ingress 2>/dev/null
tc qdisc add dev $INTERFACE handle ffff: ingress
tc filter add dev $INTERFACE parent ffff: protocol ip u32 match u32 0 0 \
action mirred egress redirect dev ifb0
# Apply egress shaping on ifb0
tc qdisc del dev ifb0 root 2>/dev/null
tc qdisc add dev ifb0 root tbf rate $DOWNLOAD_LIMIT burst 32kbit latency 400ms
echo "Download limited to $DOWNLOAD_LIMIT on $INTERFACE"
Scenario 5: Simulate Network Conditions (netem)
#!/bin/bash
# Network emulation for testing
# NOTE: the commands below are alternatives, not a sequence - only one
# root qdisc can be attached at a time. Delete the existing root first,
# or use 'tc qdisc replace' instead of 'add'.
INTERFACE="eth0"
# Add latency (100ms)
tc qdisc add dev $INTERFACE root netem delay 100ms
# Add packet loss (10%)
tc qdisc add dev $INTERFACE root netem loss 10%
# Add latency with jitter
tc qdisc add dev $INTERFACE root netem delay 100ms 20ms
# Combination: latency, jitter, and packet loss
tc qdisc add dev $INTERFACE root netem delay 100ms 20ms loss 5%
# Bandwidth limitation with netem
tc qdisc add dev $INTERFACE root netem rate 1mbit
# Packet corruption (1%)
tc qdisc add dev $INTERFACE root netem corrupt 1%
# Packet duplication (0.5%)
tc qdisc add dev $INTERFACE root netem duplicate 0.5%
# Remove netem
tc qdisc del dev $INTERFACE root
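Because only one root qdisc can exist, a convenient pattern is `tc qdisc replace`, which works whether or not netem is already attached. The helper below is a sketch (the function name and profile definitions are assumptions); it only builds the command strings so they can be reviewed or logged before being run with root privileges:

```shell
# Build a 'tc qdisc replace' command for a named netem profile.
netem_cmd() {
  local dev="$1" profile="$2"
  case "$profile" in
    wan)    echo "tc qdisc replace dev $dev root netem delay 100ms 20ms loss 1%" ;;
    lossy)  echo "tc qdisc replace dev $dev root netem loss 10%" ;;
    clear)  echo "tc qdisc del dev $dev root" ;;
    *)      echo "unknown profile: $profile" >&2; return 1 ;;
  esac
}
netem_cmd eth0 wan
```

Running the output with `eval "$(netem_cmd eth0 wan)"` (as root) applies the profile idempotently.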
Scenario 6: Rate Limiting by Application
#!/bin/bash
# Limit specific application using iptables marks
INTERFACE="eth0"
# Mark packets from specific application
# Example: Mark traffic from Apache (assuming it uses specific source port range)
iptables -t mangle -A OUTPUT -p tcp --sport 80 -j MARK --set-mark 1
iptables -t mangle -A OUTPUT -p tcp --sport 443 -j MARK --set-mark 1
# Configure tc to use marks
tc qdisc del dev $INTERFACE root 2>/dev/null
tc qdisc add dev $INTERFACE root handle 1: htb default 10
tc class add dev $INTERFACE parent 1: classid 1:1 htb rate 100mbit
tc class add dev $INTERFACE parent 1:1 classid 1:10 htb rate 100mbit
tc class add dev $INTERFACE parent 1:1 classid 1:20 htb rate 10mbit
# Filter marked packets to limited class
tc filter add dev $INTERFACE parent 1:0 protocol ip prio 1 handle 1 fw flowid 1:20
echo "Application bandwidth limited via firewall marks"
Monitoring and Statistics
View Statistics
# Detailed statistics
tc -s qdisc show dev eth0
# Class statistics
tc -s class show dev eth0
# Filter statistics
tc -s filter show dev eth0
# Continuous monitoring
watch -n 1 'tc -s qdisc show dev eth0'
Understanding Statistics Output
$ tc -s class show dev eth0
class htb 1:10 parent 1:1 prio 1 rate 5Mbit ceil 8Mbit burst 1600b cburst 1600b
Sent 1048576 bytes 1024 pkt (dropped 0, overlimits 0 requeues 0)
rate 2Mbit 256pps backlog 0b 0p requeues 0
lended: 512 borrowed: 256 giants: 0
tokens: 204800 ctokens: 204800
Key metrics:
- Sent: Total bytes and packets transmitted
- dropped: Packets dropped due to queue overflow
- overlimits: Times class exceeded rate
- rate: Current transmission rate
- backlog: Queued bytes and packets
- lended: Packets sent at guaranteed rate
- borrowed: Packets sent using borrowed bandwidth
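These counters are easy to extract with awk. The sketch below runs against the sample output shown above as a stand-in for the live command; in practice, pipe `tc -s class show dev eth0` through the same awk program. Matching on `pps` selects the live-rate line, since the class-definition line also contains the word `rate`:

```shell
# Extract the current transmit rate from 'tc -s class show' output.
SAMPLE='class htb 1:10 parent 1:1 prio 1 rate 5Mbit ceil 8Mbit burst 1600b cburst 1600b
 Sent 1048576 bytes 1024 pkt (dropped 0, overlimits 0 requeues 0)
 rate 2Mbit 256pps backlog 0b 0p requeues 0'
RATE=$(echo "$SAMPLE" | awk '/pps/ {print $2}')
echo "current rate: $RATE"
```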
Graphing Statistics
#!/bin/bash
# Collect bandwidth statistics
INTERFACE="eth0"
LOG_FILE="/var/log/tc-stats.log"
while true; do
TIMESTAMP=$(date +%s)
STATS=$(tc -s class show dev $INTERFACE | grep "class htb 1:10" -A 3)
# Match the live-rate line via "pps"; plain 'grep rate' would also hit the class-definition line
RATE=$(echo "$STATS" | grep "pps" | awk '{print $2}')
echo "$TIMESTAMP $RATE" >> $LOG_FILE
sleep 5
done
Use gnuplot or similar tools to visualize:
gnuplot << EOF
set terminal png size 800,600
set output 'bandwidth.png'
set xlabel 'Time'
set ylabel 'Bandwidth (Mbit/s)'
plot '$LOG_FILE' using 1:2 with lines title 'Bandwidth'
EOF
Troubleshooting
Common Issues
tc commands not working:
# Check if iproute2 is installed
which tc
# Install if missing
sudo apt install iproute2 # Debian/Ubuntu
sudo dnf install iproute-tc # RHEL/CentOS
Changes not taking effect:
# Delete existing qdisc first
sudo tc qdisc del dev eth0 root
# Then add new configuration
sudo tc qdisc add dev eth0 root htb default 10
Cannot delete qdisc:
# Error: RTNETLINK answers: No such file or directory
# Check if qdisc exists
tc qdisc show dev eth0
# Try deleting ingress separately
tc qdisc del dev eth0 ingress
tc qdisc del dev eth0 root
Bandwidth limit not working:
# Verify qdisc is applied
tc qdisc show dev eth0
# Check class configuration
tc class show dev eth0
# Verify filters
tc filter show dev eth0
# Test with iperf
iperf3 -s # On remote server
iperf3 -c server-ip -t 30 # On local machine
High packet drops:
# Check statistics
tc -s qdisc show dev eth0
# Note: r2q tunes HTB's quantum calculation; it does not lengthen queues
tc qdisc change dev eth0 root handle 1: htb default 10 r2q 1
# Adjust burst size
tc class change dev eth0 parent 1:1 classid 1:10 htb rate 10mbit burst 15k
Debugging
Verbose output:
# tc itself has no verbose flag; trace its netlink calls with strace instead
strace tc qdisc add dev eth0 root htb default 10 2>&1 | less
Test configuration:
#!/bin/bash
# Test bandwidth limit
INTERFACE="eth0"
LIMIT="10mbit"
# Apply limit
tc qdisc add dev $INTERFACE root tbf rate $LIMIT burst 32kbit latency 400ms
# Test with curl
echo "Testing download speed..."
time curl -o /dev/null http://speedtest.tele2.net/100MB.zip
# Remove limit
tc qdisc del dev $INTERFACE root
echo "Testing without limit..."
time curl -o /dev/null http://speedtest.tele2.net/100MB.zip
Advanced Topics
Combining tc with iptables
# Mark packets with iptables
iptables -t mangle -A POSTROUTING -p tcp --dport 80 -j MARK --set-mark 1
iptables -t mangle -A POSTROUTING -p tcp --dport 443 -j MARK --set-mark 2
# Use marks in tc filters
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 50mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 30mbit
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 20mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 1 fw flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 2 fw flowid 1:20
HFSC (Hierarchical Fair Service Curve)
# More precise latency control than HTB
tc qdisc add dev eth0 root handle 1: hfsc default 10
# Combined (real-time + link-share) service curve for VoIP, capped by ul
tc class add dev eth0 parent 1: classid 1:10 hfsc sc rate 2mbit ul rate 5mbit
# Link-sharing service curve for bulk traffic
tc class add dev eth0 parent 1: classid 1:20 hfsc ls rate 1mbit ul rate 10mbit
CBQ (Class Based Queueing)
# Legacy alternative to HTB (deprecated; removed from recent kernels)
tc qdisc add dev eth0 root handle 1: cbq bandwidth 100mbit avpkt 1000 cell 8
tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100mbit \
rate 10mbit weight 1mbit prio 5 allot 1514 cell 8 maxburst 20 avpkt 1000
Persistence
Making Configuration Persistent
Systemd service method:
# Create tc configuration script
sudo nano /usr/local/bin/tc-setup.sh
#!/bin/bash
# /usr/local/bin/tc-setup.sh
INTERFACE="eth0"
tc qdisc del dev $INTERFACE root 2>/dev/null
tc qdisc add dev $INTERFACE root handle 1: htb default 10
tc class add dev $INTERFACE parent 1: classid 1:1 htb rate 100mbit
tc class add dev $INTERFACE parent 1:1 classid 1:10 htb rate 100mbit
# Add more configuration...
# Make executable
sudo chmod +x /usr/local/bin/tc-setup.sh
# Create systemd service
sudo nano /etc/systemd/system/tc-setup.service
[Unit]
Description=Traffic Control Setup
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/tc-setup.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
# Enable and start service
sudo systemctl daemon-reload
sudo systemctl enable tc-setup.service
sudo systemctl start tc-setup.service
Network interface method (Debian/Ubuntu):
# Add to /etc/network/interfaces
auto eth0
iface eth0 inet static
address 192.168.1.100
netmask 255.255.255.0
gateway 192.168.1.1
post-up /usr/local/bin/tc-setup.sh
pre-down tc qdisc del dev eth0 root
Best Practices
1. Test Before Production
# Test in lab environment
# Verify bandwidth limits work as expected
# Monitor for packet drops and latency issues
2. Start Simple
# Begin with basic TBF before complex HTB hierarchies
tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms
# Gradually add complexity as needed
3. Monitor Statistics
# Regularly check for drops and overlimits
tc -s class show dev eth0
# Adjust burst sizes if seeing drops
4. Document Configuration
# Comment scripts thoroughly
# Maintain network diagrams with bandwidth allocations
# Document reasoning behind class priorities
5. Plan for Peak Usage
# Don't allocate 100% of bandwidth
# Leave headroom for overhead and bursts
# If link is 100mbit, limit to 95mbit total
6. Use Appropriate Burst Sizes
# Calculate minimum burst:
# burst = rate in bytes per second / HZ
# where HZ (kernel timer frequency) is typically 250 on Linux
# For a 10mbit rate:
# burst = 10,000,000 / 8 / 250 = 5,000 bytes
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit burst 5kb
7. Prioritize Interactive Traffic
# SSH, DNS, ICMP should get high priority
# Bulk transfers (backup, downloads) lower priority
Conclusion
Traffic Control (tc) provides comprehensive bandwidth management capabilities essential for optimizing network performance, implementing Quality of Service policies, and ensuring fair resource distribution. Whether limiting bandwidth for specific applications, prioritizing time-sensitive traffic like VoIP, or simulating network conditions for testing, tc offers the granular control needed for effective traffic management on Linux systems.
Key takeaways:
- HTB is the most flexible and widely used qdisc for traffic shaping
- Filters classify packets into appropriate classes
- Classes define bandwidth guarantees and limits
- SFQ provides fairness within classes
- netem enables network condition simulation for testing
- Persistence requires scripting and service configuration
- Monitoring ensures policies work as intended
Master tc to gain powerful traffic management capabilities, optimize network resource utilization, implement sophisticated QoS policies, and create realistic testing environments. The skills developed through tc apply across diverse scenarios from single-server bandwidth management to complex enterprise network optimization.
For advanced scenarios, explore integration with routing policies, combining tc with nftables for packet marking, and implementing dynamic bandwidth allocation based on network conditions using automated scripts and monitoring tools.


