tc-tbf
Limit network traffic bandwidth using tokens
SYNOPSIS
tc qdisc add dev DEV ( root | parent CLASSID ) [ handle MINOR: ] tbf rate RATE burst BYTES ( limit BYTES | latency TIME ) [ peakrate RATE mtu BYTES ] [ overhead BYTES ] [ linklayer TYPE ]
PARAMETERS
dev DEV
Specifies the network device (e.g., eth0) to which the TBF qdisc should be attached.
root
Attaches the TBF qdisc as the root (main) qdisc for the device.
parent CLASSID
Attaches the TBF qdisc to an existing class (e.g., 1:1) of another qdisc (like HTB or PRIO).
handle MINOR:
Assigns a handle (identifier) to the TBF qdisc, which can be used to refer to it later (e.g., 1: or 20:).
rate RATE
The committed rate at which tokens are generated, effectively the average speed of traffic shaping. Can be specified in bits/bytes per second (e.g., 1mbit, 512kbit, 10mbyte).
burst BYTES
The size of the token bucket in bytes: the largest amount of data that can be sent in a single burst. Must be at least the interface MTU, and at least rate (in bytes per second) divided by HZ, since that many bytes' worth of tokens are released per kernel timer tick; higher rates therefore require larger bursts. Also known as 'buffer' or 'maxburst'.
peakrate RATE
An optional cap on how quickly the bucket may be emptied. By default, once tokens are available, queued packets are sent at the full speed of the interface; peakrate limits this burst speed to a second ceiling above the committed rate. It is implemented with a second, smaller internal bucket whose size is set by mtu.
mtu BYTES
The size of the second bucket used when peakrate is set. Because at most one mtu-sized chunk of tokens can be released per timer tick, this value bounds the achievable peakrate; for accurate shaping it should be set to the device's actual MTU. Also known as minburst.
minburst BYTES
An alias for mtu: the size of the bucket used by peakrate. Typically set to the device's actual MTU.
latency TIME
Instead of limit, bounds the maximum time a packet may wait in the TBF queue due to insufficient tokens (e.g., 50ms, 1s); tc converts this into an equivalent byte limit using rate and burst. latency and limit are mutually exclusive.
limit BYTES
The maximum number of bytes that can be queued waiting for tokens to become available; packets arriving when the queue is full are dropped. This bounds both memory usage and worst-case queueing delay. Mutually exclusive with latency.
overhead BYTES
An amount of bytes to add to each packet's actual size for accounting, useful for compensating for layer 2 or other protocol overheads not included in the IP packet size.
linklayer TYPE
Specifies the link layer type for more accurate accounting. Options include ethernet, atm, adsl, or none. Defaults to ethernet if not specified.
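When sizing these parameters, note that the kernel releases tokens once per timer tick, so burst must be at least rate (in bytes per second) divided by HZ, and the byte limit implied by a latency setting is approximately rate × latency + burst. A minimal sizing sketch, assuming a HZ=250 kernel (check CONFIG_HZ on your system):

```shell
# Sizing sketch: minimum burst and equivalent limit for a 10 mbit TBF.
rate_bps=10000000                # configured rate: 10 mbit/s
rate_Bps=$((rate_bps / 8))       # same rate in bytes/s
hz=250                           # kernel timer frequency (assumed CONFIG_HZ)
min_burst=$((rate_Bps / hz))     # bytes released per tick: the floor for burst
latency_ms=50
limit=$((rate_Bps * latency_ms / 1000 + min_burst))  # approx. byte limit
echo "min_burst=$min_burst limit=$limit"
```

Undersizing burst below this floor causes drops even when traffic stays under the configured rate, because more tokens arrive per tick than the bucket can hold.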
DESCRIPTION
The tc-tbf command is part of the Linux traffic control (tc) utility, implementing the Token Bucket Filter (TBF) queuing discipline. It's used for precisely shaping outbound network traffic by controlling the rate at which data is sent.
The TBF works on the principle of a 'bucket' filled with 'tokens' at a steady rate. Each byte of data transmitted consumes a token. If there are tokens available, the packet is sent immediately. If the bucket is empty, the packet is held until enough tokens accumulate. If tokens arrive faster than they can be consumed, the bucket fills up to a specified maximum capacity (burst size). Any further tokens generated are discarded until space becomes available.
This mechanism ensures that the long-term average rate never exceeds the configured rate, while short bursts of up to burst bytes can still pass at full speed, smoothing traffic instead of clipping it abruptly. It is particularly useful for rate-limiting applications, VPN tunnels, or enforcing fair bandwidth allocation.
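As a basic usage sketch, the following shapes a device to 1 mbit/s; the interface name eth0 is an assumption, so substitute your own:

```shell
# Attach TBF as the root qdisc: average 1 mbit/s, 32 kbyte bucket,
# at most 400 ms of queueing delay (eth0 is a placeholder interface).
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kb latency 400ms

# Show the qdisc and its statistics.
tc -s qdisc show dev eth0

# Remove it again.
tc qdisc del dev eth0 root
```

These commands require root privileges, and the qdisc shapes only traffic leaving the interface.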
CAVEATS
The TBF qdisc is designed for traffic shaping (delaying packets to meet a rate), not for policing (dropping excess traffic immediately). Like other qdiscs, it acts only on outbound (egress) traffic. If the latency or limit parameter is not set appropriately, packets may be delayed longer than intended or dropped unexpectedly, so careful tuning of rate, burst, and latency is crucial. While peakrate allows burst speed to be capped, it introduces a second bucket and can complicate tuning if not fully understood.
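For example, a peakrate configuration also needs an mtu for the second bucket; a sketch with illustrative values (eth0 is a placeholder interface):

```shell
# Average 1 mbit/s, but let accumulated bursts drain at no more than
# 2 mbit/s. mtu sizes the peakrate bucket; 1540 suits an Ethernet frame.
tc qdisc add dev eth0 root tbf rate 1mbit burst 10kb limit 30kb \
    peakrate 2mbit mtu 1540
```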
WORKING PRINCIPLE OF TOKEN BUCKET FILTER
The TBF operates with two key components: a 'bucket' of a specific size and a 'token generator'. Tokens are continuously added to the bucket at a defined rate. Each byte of data requires one token to be transmitted. If a packet arrives and there are enough tokens in the bucket, the tokens are consumed, and the packet is transmitted immediately. If the bucket is empty, the packet is queued, waiting for tokens to accumulate. If the bucket becomes full, newly generated tokens are discarded. This ensures that the long-term average transmission rate never exceeds the specified rate, while the burst parameter allows for temporary peaks in traffic, absorbing variations in data arrival.
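This cycle can be illustrated with a toy simulation in which tokens accrue at a fixed rate per step and are capped at the bucket size. The numbers are arbitrary illustrative values, and a delayed packet is merely counted here rather than requeued as the real qdisc would do:

```shell
# Toy token bucket: 100 tokens/step, bucket caps at 300, packets cost 250.
rate=100; burst=300; tokens=$burst
sent=0; delayed=0
for pkt in 250 250 250 250; do
  tokens=$((tokens + rate))                        # generator adds tokens
  if [ "$tokens" -gt "$burst" ]; then tokens=$burst; fi  # excess is discarded
  if [ "$tokens" -ge "$pkt" ]; then
    tokens=$((tokens - pkt)); sent=$((sent + 1))   # enough tokens: transmit
  else
    delayed=$((delayed + 1))                       # must wait for tokens
  fi
done
echo "sent=$sent delayed=$delayed"                 # prints: sent=2 delayed=2
```

Half the packets find enough tokens immediately; the other half are held back until tokens accumulate, which is exactly the smoothing effect described above.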
HISTORY
The Token Bucket Filter concept has been a cornerstone of network traffic management and quality of service (QoS) for decades, predating its inclusion in Linux. Within the Linux kernel's traffic control (tc) framework, TBF was one of the earliest and most fundamental queueing disciplines implemented, providing a robust and widely understood mechanism for rate limiting. Its development has focused on refinement and integration with other tc components, remaining a core tool for shaping traffic on network interfaces.
SEE ALSO
tc(8), tc-htb(8), tc-prio(8), tc-fq_codel(8)