Traffic shaping under Linux is about as easy and intuitive as changing the drive belt in your car. Fortunately there's the lartc homepage, which should be your first stop when you want to learn about advanced networking under Linux.
When we talk about traffic shaping we're talking about two things:
- Packet classification
- Packet queueing
When you want to shape traffic you usually want to give some packets and/or flows precedence over others. To classify packets you've got a few options under Linux:
- The U32 classifier
- Probably the fastest option when you want to classify traffic. The syntax is extremely user-friendly and intuitive when you happen to know the TCP header specs by heart.
- The fw classifier (used together with iptables marks)
- Much more comfortable than U32 when you're familiar with iptables, but probably noticeably slower when you're pushing large amounts of packets per second.
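The iptables-based option is the fw classifier, which matches on a mark that iptables sets on the packet. A minimal sketch of how the two are glued together — the mark value 4, the device eth0 and the target class 1:4 are assumptions chosen to line up with the HTB script further down:

```shell
# Mark outgoing HTTP packets in the mangle table (mark value 4 is arbitrary)
iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 4

# Let the fw classifier sort packets carrying mark 4 into class 1:4
tc filter add dev eth0 parent 1:0 protocol ip prio 2 handle 4 fw flowid 1:4
```

The nice part of this approach is that the full iptables matching machinery becomes available for classification; the price is the extra per-packet work.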
The packet queuer does the actual work when shaping traffic. It's responsible for deciding which packets get delayed, dropped or sent immediately. It's important to understand that you can only shape traffic that you send. There are no perfect solutions for shaping incoming traffic, although TCP behaves relatively well when you delay ACK packets. This is possible with an IMQ device, see the lartc example or the IMQ Homepage.
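Apart from IMQ, one crude way to deal with incoming traffic is an ingress policer: it doesn't queue at all, it simply drops packets above a configured rate and relies on TCP backing off. A hedged sketch — eth0 and the 10mbit rate are assumptions:

```shell
# Attach the ingress qdisc to the device
tc qdisc add dev eth0 handle ffff: ingress

# Police all incoming IP traffic to 10mbit, dropping the excess
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
  match ip src 0.0.0.0/0 police rate 10mbit burst 100k drop flowid :1
```

Dropping is wasteful compared to real shaping on the sending side, so treat this as a last resort.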
There are two kinds of queuers, classful and classless. With classful queueing algorithms you can construct trees which share common limits; a classless queueing discipline OTOH only works on a per-device basis without much fine-grained control. Describing this in detail here wouldn't make sense, since it's already extensively documented on the lartc page.
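To make the distinction concrete: a classless qdisc simply replaces the device's default queue, while a classful one like HTB (used in the example below) lets you hang child classes and further qdiscs off it. A minimal classless sketch using TBF — the device and rates are assumptions:

```shell
# Classless: one token bucket for the whole device, no subclasses possible
tc qdisc add dev eth0 root tbf rate 10mbit burst 10kb latency 50ms
```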
In the example below SFQ (Stochastic Fairness Queueing) is attached to the leaves, which makes sure that every flow↑ gets its fair share of the available bandwidth.
- ↑ means TCP/UDP connections
Here's an example which does the following:
- Creates a root node with 100mbit speed
- Creates two leaves:
- one with 80mbit guaranteed minimum bandwidth, burstable to 100mbit, being prioritized
- one with 20mbit guaranteed minimum bandwidth, also burstable to 100mbit, but only when the #4 leaf doesn't need it
- Traffic from the 10.0.1.0/24 subnet gets classified into the 80mbit leaf
- All other traffic is sorted into the 20mbit leaf.
This basically means that every user can use all available bandwidth when the server is idle, but when things get tight, the users from the 10.0.1.0/24 subnet will get a guaranteed minimum bandwidth of 80mbit.
```shell
#!/bin/bash
tc=/sbin/tc

# Cleaning up
$tc qdisc del dev eth0 root handle 1: > /dev/null 2>&1

# Add the root handle, setting the default leaf
$tc qdisc add dev eth0 root handle 1: htb default 5

# Set the basic speed of the device
$tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit

# Set up the two leaves
$tc class add dev eth0 parent 1:1 classid 1:4 htb rate 80mbit ceil 100mbit prio 1
$tc class add dev eth0 parent 1:1 classid 1:5 htb rate 20mbit ceil 100mbit

# Add SFQ queueing disciplines
$tc qdisc add dev eth0 parent 1:4 handle 4: sfq perturb 10
$tc qdisc add dev eth0 parent 1:5 handle 5: sfq perturb 10

# Prioritize traffic
$tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip dst 10.0.1.0/24 flowid 1:4
```
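To check that the setup behaves as intended, the usual tc show commands display the configured hierarchy together with per-class byte and packet counters; this assumes the script above has already been run on eth0:

```shell
# Show qdiscs, classes and filters with their statistics
tc -s qdisc show dev eth0
tc -s class show dev eth0
tc filter show dev eth0
```

Watching the counters of classes 1:4 and 1:5 while generating traffic is the quickest way to confirm that packets really end up in the leaf you expect.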