Tunnels are devices, hence all the power of Linux traffic control applies to them. The simplest (and the most useful in practice) example is limiting tunnel bandwidth. The following command:
tc qdisc add dev tunl0 root tbf \
    rate 128Kbit burst 4K limit 10K

will limit tunneled traffic to 128Kbit, with a maximal burst size of 4K
and a queue of not more than 10K.
However, you should remember that tunnels are virtual devices
implemented in software, and true queue management is impossible for
them simply because they have no queues. Instead, it is better to
create classes on real physical interfaces and to map tunneled
packets to them.
In the general case of dynamic routing you should create such classes
on all outgoing interfaces, or, alternatively,
use the option dev DEV
to bind the tunnel to a fixed physical device.
In the latter case packets will be routed only via the specified
device, so you need to set up corresponding classes only on it.
You pay for this convenience, though:
if routing changes, your tunnel will fail.
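As an illustration (the tunnel name Cisco, the ipip mode and the
endpoint addresses S and D are just the placeholders used in the
example below, not prescriptive values), such a binding is requested
when the tunnel is created:

ip tunnel add Cisco mode ipip remote D local S dev eth0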
Suppose that CBQ class 1:ABC has been created on device eth0
specially for tunnel Cisco with endpoints S and D.
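How this class was built does not matter here; one possible sketch
(the root CBQ qdisc, its bandwidth and the rate figures are
illustrative assumptions, not part of the original setup) is:

tc qdisc add dev eth0 root handle 1: cbq bandwidth 10Mbit avpkt 1000
tc class add dev eth0 parent 1: classid 1:ABC cbq bandwidth 10Mbit \
    rate 1Mbit allot 1514 prio 5 avpkt 1000 bounded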
Now you can select IPIP packets with addresses S and D
with some classifier and map them to class 1:ABC.
For example, this is easy to do with the rsvp classifier:
tc filter add dev eth0 pref 100 proto ip rsvp \
    session D ipproto ipip filter S \
    classid 1:ABC
If you want to make a more detailed classification of sub-flows
transmitted via the tunnel, you can build a CBQ subtree
rooted at 1:ABC
and attach to its root a set of rules parsing
IPIP packets more deeply.
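A possible sketch of such rules (the child class 1:AC1, its rate and
the matched inner destination address 10.0.0.1 are assumptions for
illustration only; the offset of 36 bytes assumes an outer IP header
without options, so the inner destination address starts 20+16 bytes
into the packet):

tc class add dev eth0 parent 1:ABC classid 1:AC1 cbq bandwidth 10Mbit \
    rate 512Kbit allot 1514 prio 5 avpkt 1000 bounded
tc filter add dev eth0 parent 1:ABC pref 10 proto ip u32 \
    match ip protocol 4 0xff \
    match u32 0x0a000001 0xffffffff at 36 \
    flowid 1:AC1

Here the u32 rules attached to 1:ABC look past the outer IPIP header
and steer a particular inner sub-flow into the child class 1:AC1.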