Limiting WireGuard Bandwidth

To limit or throttle the bandwidth used by a WireGuard connection, you can use the network traffic-control tools built into your OS (Operating System). This article will show you how, using the standard Linux traffic-control tool, tc.

When a packet is routed out a network interface, it may be queued to wait until the interface can actually process and send it. Packets routed to WireGuard interfaces, however, never actually need to be queued, as WireGuard can always process them immediately. As soon as a WireGuard interface receives an outbound packet, it simply encrypts and wraps the original packet in a new UDP packet, and passes that new packet back to the OS’s network stack to route out some other physical network interface.

However, if you want to deliberately throttle this process, to keep WireGuard’s traffic below a certain threshold of bandwidth, you can add a “queue discipline” to the WireGuard interface. This will keep the interface from processing packets faster than your bandwidth cap allows. A queue discipline (aka “qdisc”) is the set of rules that governs how incoming items are processed. The queue discipline for a network interface controls how fast, and in what order, packets are added to and removed from the interface’s queue of packets.

The best qdisc to use for limiting bandwidth is HFSC (Hierarchical Fair Service Curve). HFSC allows you to define multiple classes of bandwidth usage, using a tree structure to define the overall bandwidth limit at the top of the tree, and the proportion of bandwidth to share among the different classes configured at each successive level in the tree. Packet-matching filters control which class is applied to each packet in the queue, distributing packets to interior classes until they reach a leaf class.

For example, the following diagram shows how an HFSC qdisc can be configured with two basic classes of traffic, using a series of filters that assign packets to one class (with an ID of 1:11) if they match a filter, and assigning the remaining packets to the other class (with an ID of 1:19) by default if they don’t match any filters:

Packet Flow with Hard Limit
Figure 1. Packet flow with qdisc classes

The HFSC qdisc can be configured with a default class, to which any packet not filtered to a leaf class will be assigned. If you don’t configure a default class (or if the default class is not a leaf), unclassified packets will be dropped. The bandwidth allowed to each leaf class is constrained by the limits of its ancestor classes.

Let’s walk through some scenarios for configuring HFSC:

Outbound Limit

If all we want is to enforce a blanket bandwidth limit on outbound traffic from WireGuard, we only need to set up a single HFSC class. For example, say we want to limit our outbound WireGuard bandwidth to 10 Mbps (10 megabits per second — a megabit is 125 kilobytes, so that’s 1250 KB per second). We’d run the following commands to set up an HFSC qdisc on our WireGuard interface (wg0), and set its upper limit to 10mbit:

$ sudo tc qdisc add dev wg0 parent root handle 1: hfsc default 1
$ sudo tc class add dev wg0 parent 1: classid 1:1 hfsc sc rate 10mbit ul rate 10mbit

The first command sets up an HFSC qdisc on our wg0 interface, and creates the root class for it. The root class has an ID (aka “handle”) of 1: — equivalent to 1:0 — meaning the ID for the qdisc (the “major” number) is 1, and the ID for the root class itself (the “minor” number) is 0. Major and minor numbers can each be any 16-bit hex value; the minor number of a root class should always be 0. The default 1 part at the end of the first command indicates which leaf class should be assigned to any unclassified packets — in this case, the class with a minor number of 1 (equivalent to 1:1 when specified with the qdisc ID).

The second command adds a child to the root class. The child class has an ID of 1:1. Its upper limit is set to 10 Mbps via the ul rate 10mbit part of the command. The sc rate 10mbit part of the command specifies the service curve, which should be the same as the upper limit for a top-level class like this.
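As a quick sanity check of the units (tc’s mbit suffix means megabits per second, not megabytes), we can convert this cap to a byte rate with a little shell arithmetic; this is plain awk, not a tc command:

```shell
$ awk 'BEGIN {
    bits_per_sec = 10 * 1000 * 1000   # 10mbit in tc units
    printf "%.0f KB/s\n", bits_per_sec / 8 / 1000
}'
1250 KB/s
```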

Together, the two commands set up a simple HFSC packet flow like the following:

Packet Flow with Outbound Limit
Figure 2. Packet flow with a single child class

Packets controlled by this qdisc will be assigned a leaf class of 1:1 by default. Each leaf class implicitly has its own separate PFIFO (Packet First-In First-Out) qdisc, which applies in addition to the primary HFSC qdisc. You can customize this leaf qdisc if you like — see the Leaf Queue Disciplines section at the end of this article for details. As all packets are assigned to the 1:1 class, the primary HFSC qdisc will ensure that they are dequeued using the upper-limit rate specified by that class: no faster than 10 Mbps.

You can view the current qdisc settings for all network interfaces with the tc qdisc command:

$ tc qdisc
qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev eth0 root
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc hfsc 1: dev wg0 root refcnt 2 default 1

The qdisc we just created is at the end of the list above. (Note that leaf classes’ implicit qdiscs are not included in this list.)

You can view the classes set up for the qdisc of a specific interface with the tc class show dev [interface] command:

$ tc class show dev wg0
class hfsc 1: root
class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 10Mbit ul m1 0bit d 0us m2 10Mbit

Hard Limit for Some Peers

Now say we want to further enforce a bandwidth cap on traffic sent to a particular set of WireGuard peers. For example, say we want to set a hard cap of 2 Mbps on traffic sent to our 10.0.0.2 and 10.0.0.3 peers. We’ll set this up so that when our WireGuard network is fully consuming its allotted bandwidth of 10 Mbps, these two peers can have their full bandwidth of 2 Mbps — but even when the rest of the network is idle, these two peers can never have more than 2 Mbps.

To enable this, we’ll create two more classes, 1:11 and 1:19, and make them children of our top-level 1:1 class. We’ll put a hard upper limit of 2mbit on the 1:11 class, and a shared limit of 8mbit on the 1:19 class:

$ sudo tc class add dev wg0 parent 1:1 classid 1:11 hfsc ls rate 2mbit ul rate 2mbit
$ sudo tc class add dev wg0 parent 1:1 classid 1:19 hfsc ls rate 8mbit

The ul keyword in the above command stands for “Upper Limit” (the hard cap on bandwidth); whereas the ls keyword stands for “Link Share” (its share of bandwidth when bandwidth is contested). The shared limit on the second class (ls rate 8mbit) means that when it and its siblings are trying to use more bandwidth than is available from its parent class (which we limited to 10 Mbps in the Outbound Limit section), the second class gets 8 Mbps of it, and the first class (with its setting of ls rate 2mbit) gets 2 Mbps.

Because their parent class, 1:1, is now an intermediate class instead of a leaf class, we also need to change our default leaf setting on the root class. In the Outbound Limit section we set the default leaf class to 1 (meaning 1:1); now we need to choose a new leaf class. We’ll change it to 19 (meaning 1:19, the second class we created above) with the following command:

$ sudo tc qdisc change dev wg0 parent root handle 1: hfsc default 19

This makes 1:19 the default class for all packets that aren’t otherwise classified with a leaf class via packet filter.

As for the other new leaf class, 1:11, it won’t be used unless we set up a packet filter for it. Since we want to use the 1:11 class for all packets sent to 10.0.0.2 or 10.0.0.3, we’ll set up two new filters, one for each destination IP address, via the following commands:

$ sudo tc filter add dev wg0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.2 \
    classid 1:11
$ sudo tc filter add dev wg0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.3 \
    classid 1:11

The first part of each command (tc filter add dev wg0 parent 1: protocol ip prio 1 u32) sets up a “universal” packet filter (u32) for packets on our root 1:0 class. Both filters have a priority number (prio, aka pref) of 1 (meaning they’ll be matched ahead of filters with a larger priority number). The second part of each command (match ip dst 10.0.0.2 and match ip dst 10.0.0.3) matches a specific destination IP address for each packet. The third part of each command (classid 1:11, a synonym for flowid 1:11) assigns matching packets to the 1:11 class.

The packet flow of our class hierarchy now looks like this:

Packet Flow with Hard Limit
Figure 3. Packet flow with two leaf classes

Outgoing packets sent to our wg0 interface with a destination of 10.0.0.2 or 10.0.0.3 will be classified with the 1:11 class; and all other packets will be classified by default with the 1:19 class. This means that outgoing traffic to 10.0.0.2 or 10.0.0.3 (as a group) will always be rate-limited to 2 Mbps; with the combined total of all traffic going out wg0 rate-limited to 10 Mbps. When traffic to 10.0.0.2 or 10.0.0.3 is using its full share, the rest of the outbound wg0 traffic will be limited to 8 Mbps.

Our class list now looks like this:

$ tc class show dev wg0
class hfsc 1: root
class hfsc 1:11 parent 1:1 ls m1 0bit d 0us m2 2Mbit ul m1 0bit d 0us m2 2Mbit
class hfsc 1:19 parent 1:1 ls m1 0bit d 0us m2 8Mbit
class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 10Mbit ul m1 0bit d 0us m2 10Mbit

We can get a nice ASCII tree view of the class list if we add the -graph flag:

$ tc -graph class show dev wg0
+---(1:) hfsc
     +---(1:1) hfsc sc m1 0bit d 0us m2 10Mbit ul m1 0bit d 0us m2 10Mbit
          +---(1:11) hfsc ls m1 0bit d 0us m2 2Mbit ul m1 0bit d 0us m2 2Mbit
          +---(1:19) hfsc ls m1 0bit d 0us m2 8Mbit

And we can view our filter list for the WireGuard interface with the tc filter show dev wg0 command:

$ tc filter show dev wg0
filter parent 1: protocol ip pref 1 u32 chain 0
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match 0a000002/ffffffff at 16
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match 0a000003/ffffffff at 16

Behind the scenes, our two filter commands resulted in a somewhat-cryptic filter chain being set up. Passing the -pretty flag to the tc filter show command will make it easier to recognize the two filters we added to the end of the chain:

$ tc -pretty filter show dev wg0
filter parent 1: protocol ip pref 1 u32 chain 0
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match IP dst 10.0.0.2/32
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match IP dst 10.0.0.3/32

Shared Limit for Some Peers

Now say that we want to further shape the traffic sent out to our WireGuard network such that when traffic is capped out at 10 Mbps, two specific peers (10.0.0.4 and 10.0.0.5) can get a 2 Mbps share of it. When the rest of our WireGuard network is idle, this will still allow these two peers to use all the bandwidth available (up to the 10 Mbps cap we set on the top-level class in the Outbound Limit section); but when there’s not enough bandwidth to go around, they’ll get a 2 Mbps share of it.

We can use the following command to add a new 1:12 class for these two peers, granting them a shared limit of 2mbit:

$ sudo tc class add dev wg0 parent 1:1 classid 1:12 hfsc ls rate 2mbit

This, combined with the two children we added in the Hard Limit for Some Peers section, means that the 1:1 class now has three children. However, 1:1 has an upper limit of 10 Mbps — and we’ve granted a shared limit of 2 Mbps to one child (1:11), 2 Mbps to this new child (1:12), and 8 Mbps to the third child (1:19) — adding up to 12 Mbps. What happens when these children try to use more than the 10 Mbps limit on 1:1?

The answer is that HFSC actually manages shared limits using percentages, rather than fixed values. When all the children are trying to use their bandwidth limits, the first child (1:11) will get 2/12ths of the bandwidth allotted to the parent (~1.7 Mbps), the second child (1:12) will also get 2/12ths, and the third child (1:19) will get 8/12ths (~6.7 Mbps).
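This proportional split is easy to verify with a little arithmetic (an illustration of HFSC’s weighting, not a tc command):

```shell
$ awk 'BEGIN {
    total = 2 + 2 + 8   # sum of the three ls rates, in Mbps
    printf "1:11 -> %.2f Mbps\n", 10 * 2 / total
    printf "1:12 -> %.2f Mbps\n", 10 * 2 / total
    printf "1:19 -> %.2f Mbps\n", 10 * 8 / total
}'
1:11 -> 1.67 Mbps
1:12 -> 1.67 Mbps
1:19 -> 6.67 Mbps
```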

In this case, since we do actually want to grant a full 2 Mbps share to our 1:11 and 1:12 classes, we’ll lower the shared limit of the third child to 6mbit:

$ sudo tc class change dev wg0 parent 1:1 classid 1:19 hfsc ls rate 6mbit

The peers in this class can still use more than 6 Mbps when the peers from the other classes aren’t using their full share — they’ll just be capped at 6 Mbps when those other peers are using their full share.

Now we still need to add some filters to direct packets to our new 1:12 class. We’ll add two new filters, one to match traffic sent to 10.0.0.4, and the other to match traffic sent to 10.0.0.5:

$ sudo tc filter add dev wg0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.4 \
    classid 1:12
$ sudo tc filter add dev wg0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.5 \
    classid 1:12
Tip

If you want to match all the IP addresses in a proper CIDR block, instead of specifying each address individually, you can use a single filter to match them all. The 10.0.0.4 and 10.0.0.5 addresses actually comprise the 10.0.0.4/31 CIDR block, so we could just use one single filter, instead of the two shown above, to match both addresses:

$ sudo tc filter add dev wg0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.4/31 \
    classid 1:12

The packet flow for our new class hierarchy now looks like this:

Packet Flow with Shared Limit
Figure 4. Packet flow with a third leaf class

We can see the class hierarchy for our WireGuard interface by running the following command:

$ tc -graph class show dev wg0
+---(1:) hfsc
     +---(1:1) hfsc sc m1 0bit d 0us m2 10Mbit ul m1 0bit d 0us m2 10Mbit
          +---(1:11) hfsc ls m1 0bit d 0us m2 2Mbit ul m1 0bit d 0us m2 2Mbit
          +---(1:19) hfsc ls m1 0bit d 0us m2 6Mbit
          +---(1:12) hfsc ls m1 0bit d 0us m2 2Mbit

And we can see the new filter chain by running the following command:

$ tc -pretty filter show dev wg0
filter parent 1: protocol ip pref 1 u32 chain 0
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match IP dst 10.0.0.2/32
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match IP dst 10.0.0.3/32
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::802 order 2050 key ht 800 bkt 0 flowid 1:12 not_in_hw
  match IP dst 10.0.0.4/32
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::803 order 2051 key ht 800 bkt 0 flowid 1:12 not_in_hw
  match IP dst 10.0.0.5/32

Reserved Limit for Some Peers

To reserve a certain amount of bandwidth for a particular peer or application using our WireGuard network, we can set up an HFSC class to guarantee the class will get a minimum amount of bandwidth. This guarantee will override the other link sharing (or upper limit) settings we’ve configured for its ancestor classes (within the physical limits of the network).

For example, say we’re running a VoIP (Voice over Internet Protocol) application on UDP port 4569 of our 10.0.0.2 WireGuard peer. We want to guarantee it will always get at least 1 Mbps no matter what.

To implement this, we’ll add two child classes to the class we previously set up for all 10.0.0.2 traffic, 1:11 (in the Hard Limit for Some Peers section). The first new class, 1:111, will enforce this 1mbit guarantee; and the second, 1:119, will share the rest of the traffic from the parent class (which itself has a 2mbit share):

$ sudo tc class add dev wg0 parent 1:11 classid 1:111 hfsc rt rate 1mbit
$ sudo tc class add dev wg0 parent 1:11 classid 1:119 hfsc ls rate 2mbit

The rt keyword in the first command stands for “Real Time”. The real-time configuration of the first class (rt rate 1mbit) guarantees that it gets at least 1 Mbps of real bandwidth (assuming the bandwidth is physically available), ahead of all the other bandwidth sharing enforced by the HFSC qdisc. It may use up to the full 2 Mbps allocation of its parent’s upper limit, as long as its sibling, set up by the second command, is not using it; otherwise, its sibling gets to use the remaining share.

To direct packets from the parent 1:11 class to the two new child classes, we also need to set up two new filters. The first will match packets sent to UDP port 4569 (the UDP protocol number is 17), and direct them to the 1:111 class; the second will match all other packets, and direct them to the 1:119 class:

$ sudo tc filter add dev wg0 parent 1:11 protocol ip prio 111 u32 \
    match ip protocol 17 0xff \
    match ip dport 4569 0xffff \
    classid 1:111
$ sudo tc filter add dev wg0 parent 1:11 prio 119 matchall classid 1:119

The packet flow for our new class hierarchy now looks like this:

Packet Flow with Reserved Limit
Figure 5. Packet flow with an additional subtree

We can see the class hierarchy for our WireGuard interface by running the following command:

$ tc -graph class show dev wg0
+---(1:) hfsc
     +---(1:1) hfsc sc m1 0bit d 0us m2 10Mbit ul m1 0bit d 0us m2 10Mbit
          +---(1:11) hfsc ls m1 0bit d 0us m2 2Mbit ul m1 0bit d 0us m2 2Mbit
          |    +---(1:111) hfsc rt m1 0bit d 0us m2 1Mbit
          |    +---(1:119) hfsc ls m1 0bit d 0us m2 2Mbit
          |
          +---(1:12) hfsc ls m1 0bit d 0us m2 2Mbit
          +---(1:19) hfsc ls m1 0bit d 0us m2 6Mbit

And we can see the new filters we set up on the 1:11 class with the following command:

$ tc -pretty filter show dev wg0 parent 1:11
filter protocol ip pref 111 u32 chain 0
filter protocol ip pref 111 u32 chain 0 fh 801: ht divisor 1
filter protocol ip pref 111 u32 chain 0 fh 801::800 order 2048 key ht 801 bkt 0 flowid 1:111 not_in_hw
  match IP protocol 17
  match dport 4569
filter protocol all pref 119 matchall chain 0
filter protocol all pref 119 matchall chain 0 handle 0x1 flowid 1:119
  not_in_hw

Inbound Limit

Bandwidth limiting or traffic shaping is best applied to outbound traffic, where it can immediately restrict extra traffic from being sent out to the network. You can apply limits to inbound traffic, too, but your traffic-control rules will only affect traffic that’s already been received from the network — it won’t necessarily prevent traffic from being sent in the first place.

Limiting inbound WireGuard traffic is mainly useful on hosts that act as a hub in a WireGuard hub-and-spoke network, or the site endpoint in a WireGuard point-to-site network. In those two cases, it can be used to limit the amount of traffic forwarded from other WireGuard peers to the rest of the network. It can also be of some utility for individual WireGuard endpoints when handling large TCP streams (eg large file downloads) — not to limit the traffic received per se, but as a feedback mechanism to signal (through TCP congestion control) to the traffic’s original source that it should send traffic at a slower rate.

However, setting up inbound (aka ingress) traffic control is more complicated than outbound (aka egress). We first have to set up a virtual IFB (Intermediate Functional Block) network device through which to send our inbound WireGuard traffic, and then we can configure the IFB device with a custom HFSC qdisc that implements our inbound WireGuard traffic limits.

The first step is to add a new IFB device. We’ll call this device inbound0, and we’ll add it and start it up with the following commands:

$ sudo ip link add name inbound0 type ifb
$ sudo ip link set inbound0 up
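If the first ip link command fails with an error like “Operation not supported”, the IFB driver may be built as a kernel module that hasn’t been loaded yet (whether it is depends on your distribution’s kernel configuration); you can load it manually before retrying:

```shell
$ sudo modprobe ifb
```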

The next step is to direct inbound packets from our WireGuard interface to this IFB interface. We can do that by setting up a special ingress qdisc on the WireGuard interface, and attaching a filter to the qdisc that redirects all its traffic to the IFB interface:

$ sudo tc qdisc add dev wg0 handle ffff: ingress
$ sudo tc filter add dev wg0 parent ffff: matchall \
    action mirred egress redirect dev inbound0

With the interfaces connected, we can now set up a separate HFSC qdisc for the IFB interface, and configure this qdisc just like we did in the previous sections. This configuration will apply to all our inbound WireGuard traffic.

Let’s say we want to cap the inbound traffic coming in on our WireGuard interface to a limit of 50 Mbps; and also put a hard limit of 5 Mbps on HTTPS traffic coming in from our 10.0.0.6 peer. We could run the following commands to set up a top-level class, f:1, with an upper limit of 50mbit; and then add two children to it, f:11 and f:19, the first of which has an upper limit of 5mbit, and the second of which shares the remaining 45mbit of traffic:

$ sudo tc qdisc add dev inbound0 parent root handle f: hfsc default 19
$ sudo tc class add dev inbound0 parent f: classid f:1 hfsc sc rate 50mbit ul rate 50mbit
$ sudo tc class add dev inbound0 parent f:1 classid f:11 hfsc ls rate 5mbit ul rate 5mbit
$ sudo tc class add dev inbound0 parent f:1 classid f:19 hfsc ls rate 45mbit

The f:19 class will be applied as the default. To classify HTTPS traffic from 10.0.0.6 with the f:11 class, we’d attach the following filter to the root class of the IFB interface:

$ sudo tc filter add dev inbound0 parent f: protocol ip prio 11 u32 \
    match ip src 10.0.0.6 \
    match ip protocol 6 0xff \
    match ip sport 443 0xffff \
    classid f:11

The packet flow for our inbound WireGuard traffic now looks like this:

Packet Flow with Inbound Limit
Figure 6. Packet flow with qdisc on IFB interface

We can see the two new qdiscs we’ve added by running the tc qdisc command:

$ tc qdisc
qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 0: dev eth0 root
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc hfsc 1: dev wg0 root refcnt 2 default 19
qdisc ingress ffff: dev wg0 parent ffff:fff1 ----------------
qdisc hfsc f: dev inbound0 root refcnt 2 default 19

And we can view the qdisc class hierarchy of our IFB interface by running the following command:

$ tc -graph class show dev inbound0
+---(f:) hfsc
     +---(f:1) hfsc sc m1 0bit d 0us m2 50Mbit ul m1 0bit d 0us m2 50Mbit
          +---(f:11) hfsc ls m1 0bit d 0us m2 5Mbit ul m1 0bit d 0us m2 5Mbit
          +---(f:19) hfsc ls m1 0bit d 0us m2 45Mbit

We can also view the filter we added to the root of that hierarchy by running the following command:

$ tc -pretty filter show dev inbound0
filter protocol ip pref 11 u32 chain 0
filter protocol ip pref 11 u32 chain 0 fh 800: ht divisor 1
filter protocol ip pref 11 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid f:11 not_in_hw
  match IP src 10.0.0.6/32
  match IP protocol 6
  match sport 443

Tips

Modifying Classes

As long as you specify an ID (aka “handle”) for each qdisc class when you create it, you can easily modify the class by using the same syntax as you used to create the class, simply substituting the change keyword for the add keyword.

For example, if you create a class like the following:

$ sudo tc class add dev wg0 parent 1:1 classid 1:19 hfsc ls rate 8mbit

You can modify the class to specify a different link-share rate like the following:

$ sudo tc class change dev wg0 parent 1:1 classid 1:19 hfsc ls rate 6mbit

And you can delete the class like the following:

$ sudo tc class delete dev wg0 parent 1:1 classid 1:19
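If you want to tear down the whole configuration and start over, you don’t need to delete classes individually: deleting the root qdisc of an interface removes all the classes and filters attached to it (along with any packets still queued in it). If you also set up inbound limiting as described in the Inbound Limit section, remove the ingress qdisc and the IFB device too:

```shell
$ sudo tc qdisc delete dev wg0 root
$ sudo tc qdisc delete dev wg0 ingress
$ sudo ip link delete inbound0
```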

Modifying Filters

Modifying and deleting filters are more complicated, since tc filter commands don’t always map one-to-one with a single filter being created. It’s often best to delete the filter, and then re-create it with its modified version.

But be careful when you delete a filter — you can’t just substitute the delete keyword for the add keyword in the command you used to create the filter. For example, don’t run this:

$ sudo tc filter delete dev wg0 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.0.0.3 \
    classid 1:11

This will delete all the priority 1 filters for the 1:0 class — not just the filter you were trying to match.

Instead, list the existing filters, and find the filter’s handle:

$ tc -pretty filter show dev wg0
filter parent 1: protocol ip pref 1 u32 chain 0
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match IP dst 10.0.0.2/32
filter parent 1: protocol ip pref 1 u32 chain 0 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:11 not_in_hw
  match IP dst 10.0.0.3/32

Then you can safely delete just the filter you want by specifying its handle:

$ sudo tc filter delete dev wg0 parent 1: handle 800::801 protocol ip prio 1 u32

Note that in addition to the filter’s handle, you also have to specify its:

  • Device (eg dev wg0)

  • Parent (eg parent 1:)

  • Protocol (eg protocol ip)

  • Priority (eg prio 1)

  • Type (eg u32)

Leaf Queue Disciplines

You can implement further traffic shaping by customizing the qdisc used by the packets that match any of the HFSC leaf classes. By default, leaf classes use a simple, unlimited PFIFO (Packet First-In First-Out) qdisc, where the first packet queued for each class will be the first packet sent for that class when the root HFSC qdisc determines that it’s that class’s turn to send a packet.

You can create a different qdisc to use for an HFSC leaf class by running the tc qdisc add command, and specifying the class as the parent of the new qdisc. For example, the following command would set the 1:19 class (which we created in the Hard Limit for Some Peers section) to use the FQ CoDel (Fair Queuing with Controlled Delay) qdisc:

$ sudo tc qdisc add dev wg0 parent 1:19 handle 19: fq_codel

Make sure you specify a unique major ID for the qdisc; in the above command, we created the qdisc with an ID of 19: (meaning its major ID is 19, and its minor ID is 0). FQ CoDel is the best qdisc for general TCP traffic, and you may find it improves application performance for classes that include lots of TCP streams from remote servers at a variety of distant locations.

Advanced Tuning

You can fine-tune HFSC’s traffic shaping rates by specifying additional parameters in HFSC class definitions. For example, you can specify both a “Real Time” rate (rt) and a “Link Share” rate (ls):

$ sudo tc class add dev wg0 parent 1:11 classid 1:111 hfsc rt rate 1mbit ls rate 1.5mbit

This guarantees the class at least 1 Mbps bandwidth even when the interface is using all of its available bandwidth — plus it can use up to an additional 500 Kbps (or that proportional amount, depending on how other classes have been configured) after the real-time guarantees of all other classes have been satisfied.

Note

The sc keyword (“Service Curve”) is a shortcut for specifying both rt and ls with the same rate:

$ sudo tc class add dev wg0 parent 1:11 classid 1:111 hfsc sc rate 1mbit

While this usually won’t have much of an effect on the class itself (it will usually just use its real-time allotment, and not get any additional shared allotment), it can be useful to use the sc keyword instead of the rt keyword for the purpose of calculating other classes’ shared allotments (as it better represents the shared allotment from which this class will steal).

You can also specify a second, initial segment of the service curve when you define an HFSC class. This allows a class to be tuned to the specific sending patterns of a particular application. For example, for a latency-sensitive application that sends out bursts of 100 KB of traffic every 100 milliseconds, you might specify a class like the following to ensure that its usual bursts are sent out quickly (but without allowing it to take over more than 1 Mbps of your bandwidth if its bursts exceed that threshold):

$ sudo tc class add dev wg0 parent 1:11 classid 1:111 hfsc rt umax 100k dmax 100ms rate 1mbit

With the above definition, the burst unit (100 KB) is specified via the umax parameter, the burst delay (100 milliseconds) is specified via the dmax parameter, and the regular rate is specified via the rate parameter.
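You can check how this burst specification translates into a rate: dividing the umax amount by the dmax delay yields the initial rate of the service curve (assuming tc interprets the k suffix on umax as 1024 bytes):

```shell
$ awk 'BEGIN {
    umax_bits = 100 * 1024 * 8   # 100k burst, in bits
    dmax_sec  = 0.100            # 100ms deadline, in seconds
    printf "%.0f kbit\n", umax_bits / dmax_sec / 1000
}'
8192 kbit
```

This matches the 8192Kbit value that tc class show reports as m1 for this class.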

HFSC also has an alternate syntax for specifying the same service curve using different parameters, where the burst rate is specified with the m1 parameter, the burst delay is specified with the d parameter, and the regular rate is specified with the m2 parameter. The following definition that uses m1 and m2 is functionally equivalent to the above definition that used umax and rate:

$ sudo tc class add dev wg0 parent 1:11 classid 1:111 hfsc rt m1 8192kbit d 100ms m2 1mbit

And this alternate syntax is always what tc class show uses when displaying class definitions:

$ tc class show dev wg0
class hfsc 1:11 parent 1:1 ls m1 0bit d 0us m2 2Mbit ul m1 0bit d 0us m2 2Mbit
class hfsc 1: root
class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 10Mbit ul m1 0bit d 0us m2 10Mbit
class hfsc 1:111 parent 1:11 rt m1 8192Kbit d 100ms m2 1Mbit
class hfsc 1:12 parent 1:1 ls m1 0bit d 0us m2 2Mbit
class hfsc 1:19 parent 1:1 ls m1 0bit d 0us m2 6Mbit
class hfsc 1:119 parent 1:11 ls m1 0bit d 0us m2 2Mbit