NAME
tc-cbq - shape network traffic using Class Based Queueing
SYNOPSIS
tc qdisc add dev DEV root | parent HANDLE cbq [ QDISC_OPTIONS ]
tc class add dev DEV parent HANDLE classid CLASSID cbq [ CLASS_OPTIONS ]
PARAMETERS
dev DEV
The network device on which the qdisc or class operates.
root | parent HANDLE
Specifies whether this is the root qdisc or a child of an existing qdisc identified by HANDLE.
classid CLASSID
The unique identifier for a specific class within the CBQ hierarchy.
bandwidth RATE
(Qdisc/Class) Total bandwidth of the underlying interface or parent qdisc, used for idle-time calculations, e.g., '100mbit'.
avpkt BYTES
(Qdisc/Class) Average packet size, used for internal calculations, e.g., '1000'.
cell BYTES
(Qdisc/Class) Granularity, in bytes, with which packet transmission times are calculated; affects shaping precision, e.g., '8'.
maxburst PACKETS
(Qdisc/Class) Maximum allowed burst of packets before shaping takes effect.
minburst PACKETS
(Qdisc/Class) Number of packets sent in a single burst when an overlimit class is unthrottled; larger values reduce scheduling overhead but make shaping coarser at short timescales.
ewma VALUE
(Qdisc/Class) Exponentially Weighted Moving Average factor for utilization calculation.
offtime TIME
(Qdisc) How long a class should remain throttled after exceeding its limit, e.g., '1ms'.
split HANDLE
(Class) Handle of the qdisc or class at which defmap-based classification decisions for this class are made (its "split node"); used together with defmap.
defmap MAP
(Class) Bitmap of packet priorities that this class should receive at its split node; acts as a fallback classification when no filter matches. Used together with split.
rate RATE
(Class) The maximum rate allowed for this specific class, e.g., '10mbit'.
prio PRIORITY
(Class) The priority of the class; lower values indicate higher priority.
bounded
(Class) Prevents this class from borrowing bandwidth from its parent.
isolated
(Class) Prevents this class from lending bandwidth to its sibling classes.
weight WEIGHT
(Class) Specifies the weight for proportional bandwidth sharing among classes.
mtu BYTES
(Class) Maximum Transmission Unit for the class, defaults to interface MTU.
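A minimal sketch of how these parameters are typically combined, assuming an illustrative interface eth0 and example rates and handles (the allot parameter, a per-round byte allocation for the class, is commonly supplied as well even though it is not listed above):

# Root CBQ qdisc describing a 100 Mbit link
tc qdisc add dev eth0 root handle 1: cbq bandwidth 100mbit avpkt 1000 cell 8

# A bounded class capped at 10 Mbit that may not borrow from its parent
tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100mbit rate 10mbit \
    allot 1514 cell 8 weight 1mbit prio 5 maxburst 20 avpkt 1000 bounded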
DESCRIPTION
CBQ (Class Based Queueing) is a queuing discipline (qdisc) used with the Linux tc (traffic control) command to manage and shape network traffic. It enables hierarchical bandwidth allocation, allowing administrators to divide available bandwidth among different classes of traffic based on various criteria like source/destination IP, port, or protocol.
CBQ operates by attempting to provide a specific share of the link's bandwidth to each class, borrowing bandwidth when a class is idle and distributing excess bandwidth to busy classes. While powerful, CBQ is known for its complexity in configuration and can be resource-intensive. It creates a tree-like structure where a root qdisc distributes traffic to child classes, which can further have their own children. Each class can define its own rate, priority, and borrowing behavior.
CAVEATS
CBQ is known for its significant configuration complexity and can be challenging to fine-tune for optimal performance. It can be CPU-intensive, especially on high-speed interfaces, due to its intricate packet scheduling and borrowing/lending mechanisms.
For many modern traffic shaping needs, HTB (Hierarchical Token Bucket) is often preferred over CBQ due to its simpler configuration, better performance, and more predictable behavior, making CBQ less commonly used in new deployments.
WORKING WITH FILTERS
CBQ classes are typically populated by using tc filters. Filters classify packets based on various criteria (e.g., source/destination IP, port, protocol) and direct them to specific CBQ classes. Without filters (or a defmap-based default), traffic does not reach the intended classes and is not shaped as configured.
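For example, a u32 filter directing IPv4 traffic with destination port 80 into class 1:1 (handle and class numbers assume the illustrative setup sketched under PARAMETERS) could look like:

tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:1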
HIERARCHICAL STRUCTURE
CBQ creates a tree-like hierarchy of classes. A root CBQ qdisc manages the entire interface bandwidth, and child classes can further subdivide and manage their allocated share. This allows for very granular control over traffic flow, where a parent class can define an aggregate limit for its children, and children can define their own rates and priorities within that limit.
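As an illustrative sketch (rates, weights, and class numbers are examples only), a bounded parent class capped at 60mbit can be subdivided into two children that share its allocation and may borrow unused bandwidth from it, but never exceed the parent's ceiling:

# Parent class: aggregate 60mbit ceiling for its children
tc class add dev eth0 parent 1: classid 1:2 cbq bandwidth 100mbit rate 60mbit \
    allot 1514 weight 6mbit prio 5 avpkt 1000 bounded

# Children: guaranteed 40mbit and 20mbit; each may borrow from the parent
tc class add dev eth0 parent 1:2 classid 1:21 cbq bandwidth 100mbit rate 40mbit \
    allot 1514 weight 4mbit prio 5 avpkt 1000
tc class add dev eth0 parent 1:2 classid 1:22 cbq bandwidth 100mbit rate 20mbit \
    allot 1514 weight 2mbit prio 7 avpkt 1000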
HISTORY
CBQ was one of the foundational and most powerful traffic shaping disciplines introduced early in Linux's network stack development. It aimed to provide flexible, hierarchical bandwidth management. However, due to its inherent configuration complexity and the later development of more streamlined and performant alternatives like HTB, its usage has become less prevalent in modern network deployments, though it remains a robust option for those who master its intricacies.