Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * net/sched/sch_tbf.c	Token Bucket Filter queue.
 *
 * Authors:	Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
 *		Dmitry Torokhov <dtor@mail.ru> - allow attaching inner qdiscs -
 *						 original idea by Martin Devera
 */

#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/skbuff.h>
#include <net/netlink.h>
#include <net/sch_generic.h>
#include <net/pkt_cls.h>
#include <net/pkt_sched.h>


/*	Simple Token Bucket Filter.
	=======================================

	SOURCE.
	-------

	None.

	Description.
	------------

	A data flow obeys TBF with rate R and depth B, if for any
	time interval t_i...t_f the number of transmitted bits
	does not exceed B + R*(t_f-t_i).

	Packetized version of this definition:
	The sequence of packets of sizes s_i served at moments t_i
	obeys TBF, if for any i<=k:

	s_i+....+s_k <= B + R*(t_k - t_i)

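	For example (illustrative numbers, not from the source): with
	B = 10000 bytes and R = 125000 bytes/s, a 10000-byte burst may be
	served at once, and a further 1500-byte packet 12 ms later, since
	11500 <= 10000 + 125000*0.012.
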
	Algorithm.
	----------

	Let N(t_i) be B/R initially and N(t) grow continuously with time as:

	N(t+delta) = min{B/R, N(t) + delta}

	If the first packet in the queue has length S, it may be
	transmitted only at the time t_* when S/R <= N(t_*),
	and in this case N(t) jumps:

	N(t_* + 0) = N(t_* - 0) - S/R.

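	For instance (illustrative numbers): with B/R = 80 ms and a
	1500-byte head packet at R = 125000 bytes/s, S/R = 12 ms, so the
	packet becomes eligible as soon as N(t) reaches 12 ms, and N(t)
	drops by 12 ms the moment it is sent.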

	Actually, QoS requires two TBFs to be applied to a data stream.
	One of them controls the steady-state burst size, while the other,
	with rate P (peak rate) and depth M (equal to the link MTU),
	limits bursts on a smaller time scale.

	It is easy to see that P > R and B > M. If P is infinity, this double
	TBF is equivalent to a single one.

	When TBF works in reshaping mode, latency is estimated as:

	lat = max ((L-B)/R, (L-M)/P)

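	E.g. (illustrative numbers): with backlog limit L = 100000 bytes,
	B = 10000 bytes, R = 125000 bytes/s and no peak rate, the estimate
	is lat = (L-B)/R = 720 ms.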

	NOTES.
	------

	If TBF throttles, it starts a watchdog timer, which will wake it up
	when it is ready to transmit.
	Note that the minimal timer resolution is 1/HZ.
	If no new packets arrive during this period,
	or if the device is not woken up by EOI for some previous packet,
	TBF can stop its activity for 1/HZ.


	This means that with depth B, the maximal rate is

	R_crit = B*HZ

	E.g. for 10 Mbit Ethernet and HZ=100 the minimal allowed B is ~10 Kbytes.

	Note that the peak-rate TBF is much stricter: with MTU 1500
	P_crit = 150 Kbytes/sec. So, if you need greater peak
	rates, use an Alpha with HZ=1000 :-)

	With classful TBF, limit is just kept for backwards compatibility.
	It is passed to the default bfifo qdisc - if the inner qdisc is
	changed the limit is not effective anymore.
*/
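
/*
 * Illustrative sketch of the admission test described above, modelled
 * in plain userspace C. It is fenced off with #if 0 because it is an
 * editorial example, not part of this file: every name in it
 * (model_tbf, l2t_ns, model_tbf_admit, now_ns) is hypothetical, and it
 * only mirrors the token arithmetic of tbf_dequeue() below, not the
 * kernel API.
 */
#if 0
#include <stdbool.h>
#include <stdint.h>

struct model_tbf {
	int64_t buffer;		/* B/R in ns */
	int64_t mtu;		/* M/P in ns */
	int64_t tokens;		/* stored R tokens, ns, <= buffer */
	int64_t ptokens;	/* stored P tokens, ns, <= mtu */
	int64_t t_c;		/* last checkpoint, ns */
	uint64_t rate;		/* R, bytes per second */
	uint64_t peak;		/* P, bytes per second; 0 if unused */
};

/* Time needed to send len bytes at rate bytes/s, in ns (may overflow
 * for huge len; good enough for a sketch).
 */
static int64_t l2t_ns(uint64_t rate, uint32_t len)
{
	return (int64_t)(len * 1000000000ULL / rate);
}

/* May a packet of len bytes be sent at now_ns? Consumes tokens if so. */
static bool model_tbf_admit(struct model_tbf *q, uint32_t len, int64_t now_ns)
{
	int64_t elapsed = now_ns - q->t_c;
	int64_t toks, ptoks = 0;

	if (elapsed > q->buffer)
		elapsed = q->buffer;

	if (q->peak) {
		ptoks = elapsed + q->ptokens;
		if (ptoks > q->mtu)
			ptoks = q->mtu;
		ptoks -= l2t_ns(q->peak, len);
	}

	toks = elapsed + q->tokens;
	if (toks > q->buffer)
		toks = q->buffer;
	toks -= l2t_ns(q->rate, len);

	if (toks < 0 || ptoks < 0)
		return false;	/* throttle until enough tokens accrue */

	q->t_c = now_ns;
	q->tokens = toks;
	q->ptokens = ptoks;
	return true;
}
#endif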

struct tbf_sched_data {
/* Parameters */
	u32		limit;		/* Maximal length of backlog: bytes */
	u32		max_size;
	s64		buffer;		/* Token bucket depth/rate: MUST BE >= MTU/B */
	s64		mtu;
	struct psched_ratecfg rate;
	struct psched_ratecfg peak;

/* Variables */
	s64	tokens;			/* Current number of B tokens */
	s64	ptokens;		/* Current number of P tokens */
	s64	t_c;			/* Time check-point */
	struct Qdisc	*qdisc;		/* Inner qdisc, default - bfifo queue */
	struct qdisc_watchdog watchdog;	/* Watchdog timer */
};


/* Time to Length: convert a time in ns to a length in bytes,
 * to determine how many bytes can be sent in the given time.
 */
static u64 psched_ns_t2l(const struct psched_ratecfg *r,
			 u64 time_in_ns)
{
	/* The formula is:
	 * len = (time_in_ns * r->rate_bytes_ps) / NSEC_PER_SEC
	 */
	u64 len = time_in_ns * r->rate_bytes_ps;

	do_div(len, NSEC_PER_SEC);

	if (unlikely(r->linklayer == TC_LINKLAYER_ATM)) {
		do_div(len, 53);
		len = len * 48;
	}

	if (len > r->overhead)
		len -= r->overhead;
	else
		len = 0;

	return len;
}
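
/*
 * Worked example (illustrative numbers): with r->rate_bytes_ps = 125000
 * (1 Mbit/s) and time_in_ns = 80000000 (80 ms), len = 10000 bytes.
 * On ATM link layers the result is additionally scaled by 48/53, since
 * each 53-byte cell carries only 48 bytes of payload.
 */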

static void tbf_offload_change(struct Qdisc *sch)
{
	struct tbf_sched_data *q = qdisc_priv(sch);
	struct net_device *dev = qdisc_dev(sch);
	struct tc_tbf_qopt_offload qopt;

	if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc)
		return;

	qopt.command = TC_TBF_REPLACE;
	qopt.handle = sch->handle;
	qopt.parent = sch->parent;
	qopt.replace_params.rate = q->rate;
	qopt.replace_params.max_size = q->max_size;
	qopt.replace_params.qstats = &sch->qstats;

	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TBF, &qopt);
}

static void tbf_offload_destroy(struct Qdisc *sch)
{
	struct net_device *dev = qdisc_dev(sch);
	struct tc_tbf_qopt_offload qopt;

	if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc)
		return;

	qopt.command = TC_TBF_DESTROY;
	qopt.handle = sch->handle;
	qopt.parent = sch->parent;
	dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_TBF, &qopt);
}

static int tbf_offload_dump(struct Qdisc *sch)
{
	struct tc_tbf_qopt_offload qopt;

	qopt.command = TC_TBF_STATS;
	qopt.handle = sch->handle;
	qopt.parent = sch->parent;
	qopt.stats.bstats = &sch->bstats;
	qopt.stats.qstats = &sch->qstats;

	return qdisc_offload_dump_helper(sch, TC_SETUP_QDISC_TBF, &qopt);
}

/* GSO packet is too big, segment it so that tbf can transmit
 * each segment in time
 */
static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
		       struct sk_buff **to_free)
{
	struct tbf_sched_data *q = qdisc_priv(sch);
	struct sk_buff *segs, *nskb;
	netdev_features_t features = netif_skb_features(skb);
	unsigned int len = 0, prev_len = qdisc_pkt_len(skb);
	int ret, nb;

	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);

	if (IS_ERR_OR_NULL(segs))
		return qdisc_drop(skb, sch, to_free);

	nb = 0;
	skb_list_walk_safe(segs, segs, nskb) {
		skb_mark_not_on_list(segs);
		qdisc_skb_cb(segs)->pkt_len = segs->len;
		len += segs->len;
		ret = qdisc_enqueue(segs, q->qdisc, to_free);
		if (ret != NET_XMIT_SUCCESS) {
			if (net_xmit_drop_count(ret))
				qdisc_qstats_drop(sch);
		} else {
			nb++;
		}
	}
	sch->q.qlen += nb;
	if (nb > 1)
		qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
	consume_skb(skb);
	return nb > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
}

static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
		       struct sk_buff **to_free)
{
	struct tbf_sched_data *q = qdisc_priv(sch);
	unsigned int len = qdisc_pkt_len(skb);
	int ret;

	if (qdisc_pkt_len(skb) > q->max_size) {
		if (skb_is_gso(skb) &&
		    skb_gso_validate_mac_len(skb, q->max_size))
			return tbf_segment(skb, sch, to_free);
		return qdisc_drop(skb, sch, to_free);
	}
	ret = qdisc_enqueue(skb, q->qdisc, to_free);
	if (ret != NET_XMIT_SUCCESS) {
		if (net_xmit_drop_count(ret))
			qdisc_qstats_drop(sch);
		return ret;
	}

	sch->qstats.backlog += len;
	sch->q.qlen++;
	return NET_XMIT_SUCCESS;
}

static bool tbf_peak_present(const struct tbf_sched_data *q)
{
	return q->peak.rate_bytes_ps;
}

static struct sk_buff *tbf_dequeue(struct Qdisc *sch)
{
	struct tbf_sched_data *q = qdisc_priv(sch);
	struct sk_buff *skb;

	skb = q->qdisc->ops->peek(q->qdisc);

	if (skb) {
		s64 now;
		s64 toks;
		s64 ptoks = 0;
		unsigned int len = qdisc_pkt_len(skb);

		now = ktime_get_ns();
		toks = min_t(s64, now - q->t_c, q->buffer);

		if (tbf_peak_present(q)) {
			ptoks = toks + q->ptokens;
			if (ptoks > q->mtu)
				ptoks = q->mtu;
			ptoks -= (s64) psched_l2t_ns(&q->peak, len);
		}
		toks += q->tokens;
		if (toks > q->buffer)
			toks = q->buffer;
		toks -= (s64) psched_l2t_ns(&q->rate, len);

		if ((toks|ptoks) >= 0) {
			skb = qdisc_dequeue_peeked(q->qdisc);
			if (unlikely(!skb))
				return NULL;

			q->t_c = now;
			q->tokens = toks;
			q->ptokens = ptoks;
			qdisc_qstats_backlog_dec(sch, skb);
			sch->q.qlen--;
			qdisc_bstats_update(sch, skb);
			return skb;
		}

		qdisc_watchdog_schedule_ns(&q->watchdog,
					   now + max_t(long, -toks, -ptoks));

		/* Maybe we have a shorter packet in the queue,
		 * which could be sent now. It sounds tempting, but it
		 * is wrong in principle: we MUST NOT reorder packets
		 * under these circumstances.
		 *
		 * Really, if we split the flow into independent
		 * subflows, it would be a very good solution.
		 * This is the main idea of all FQ algorithms
		 * (cf. CSZ, HPFQ, HFSC).
		 */

		qdisc_qstats_overlimit(sch);
	}
	return NULL;
}
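
/*
 * Example of the throttling arithmetic above (illustrative numbers):
 * if toks ends up at -3000000 after the packet cost is subtracted, the
 * watchdog fires 3 ms in the future, which is exactly when enough rate
 * tokens will have accumulated for the head packet.
 */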

static void tbf_reset(struct Qdisc *sch)
{
	struct tbf_sched_data *q = qdisc_priv(sch);

	qdisc_reset(q->qdisc);
	sch->qstats.backlog = 0;
	sch->q.qlen = 0;
	q->t_c = ktime_get_ns();
	q->tokens = q->buffer;
	q->ptokens = q->mtu;
	qdisc_watchdog_cancel(&q->watchdog);
}

static const struct nla_policy tbf_policy[TCA_TBF_MAX + 1] = {
	[TCA_TBF_PARMS]	= { .len = sizeof(struct tc_tbf_qopt) },
	[TCA_TBF_RTAB]	= { .type = NLA_BINARY, .len = TC_RTAB_SIZE },
	[TCA_TBF_PTAB]	= { .type = NLA_BINARY, .len = TC_RTAB_SIZE },
	[TCA_TBF_RATE64]	= { .type = NLA_U64 },
	[TCA_TBF_PRATE64]	= { .type = NLA_U64 },
	[TCA_TBF_BURST] = { .type = NLA_U32 },
	[TCA_TBF_PBURST] = { .type = NLA_U32 },
};

static int tbf_change(struct Qdisc *sch, struct nlattr *opt,
		      struct netlink_ext_ack *extack)
{
	int err;
	struct tbf_sched_data *q = qdisc_priv(sch);
	struct nlattr *tb[TCA_TBF_MAX + 1];
	struct tc_tbf_qopt *qopt;
	struct Qdisc *child = NULL;
	struct psched_ratecfg rate;
	struct psched_ratecfg peak;
	u64 max_size;
	s64 buffer, mtu;
	u64 rate64 = 0, prate64 = 0;

	err = nla_parse_nested_deprecated(tb, TCA_TBF_MAX, opt, tbf_policy,
					  NULL);
	if (err < 0)
		return err;

	err = -EINVAL;
	if (tb[TCA_TBF_PARMS] == NULL)
		goto done;

	qopt = nla_data(tb[TCA_TBF_PARMS]);
	if (qopt->rate.linklayer == TC_LINKLAYER_UNAWARE)
		qdisc_put_rtab(qdisc_get_rtab(&qopt->rate,
					      tb[TCA_TBF_RTAB],
					      NULL));

	if (qopt->peakrate.linklayer == TC_LINKLAYER_UNAWARE)
		qdisc_put_rtab(qdisc_get_rtab(&qopt->peakrate,
					      tb[TCA_TBF_PTAB],
					      NULL));

	buffer = min_t(u64, PSCHED_TICKS2NS(qopt->buffer), ~0U);
	mtu = min_t(u64, PSCHED_TICKS2NS(qopt->mtu), ~0U);

	if (tb[TCA_TBF_RATE64])
		rate64 = nla_get_u64(tb[TCA_TBF_RATE64]);
	psched_ratecfg_precompute(&rate, &qopt->rate, rate64);

	if (tb[TCA_TBF_BURST]) {
		max_size = nla_get_u32(tb[TCA_TBF_BURST]);
		buffer = psched_l2t_ns(&rate, max_size);
	} else {
		max_size = min_t(u64, psched_ns_t2l(&rate, buffer), ~0U);
	}

	if (qopt->peakrate.rate) {
		if (tb[TCA_TBF_PRATE64])
			prate64 = nla_get_u64(tb[TCA_TBF_PRATE64]);
		psched_ratecfg_precompute(&peak, &qopt->peakrate, prate64);
		if (peak.rate_bytes_ps <= rate.rate_bytes_ps) {
			pr_warn_ratelimited("sch_tbf: peakrate %llu is lower than or equal to rate %llu!\n",
					    peak.rate_bytes_ps, rate.rate_bytes_ps);
			err = -EINVAL;
			goto done;
		}

		if (tb[TCA_TBF_PBURST]) {
			u32 pburst = nla_get_u32(tb[TCA_TBF_PBURST]);
			max_size = min_t(u32, max_size, pburst);
			mtu = psched_l2t_ns(&peak, pburst);
		} else {
			max_size = min_t(u64, max_size, psched_ns_t2l(&peak, mtu));
		}
	} else {
		memset(&peak, 0, sizeof(peak));
	}

	if (max_size < psched_mtu(qdisc_dev(sch)))
		pr_warn_ratelimited("sch_tbf: burst %llu is lower than device %s mtu (%u)!\n",
				    max_size, qdisc_dev(sch)->name,
				    psched_mtu(qdisc_dev(sch)));

	if (!max_size) {
		err = -EINVAL;
		goto done;
	}

	if (q->qdisc != &noop_qdisc) {
		err = fifo_set_limit(q->qdisc, qopt->limit);
		if (err)
			goto done;
	} else if (qopt->limit > 0) {
		child = fifo_create_dflt(sch, &bfifo_qdisc_ops, qopt->limit,
					 extack);
		if (IS_ERR(child)) {
			err = PTR_ERR(child);
			goto done;
		}

		/* child is fifo, no need to check for noop_qdisc */
		qdisc_hash_add(child, true);
	}

	sch_tree_lock(sch);
	if (child) {
		qdisc_tree_flush_backlog(q->qdisc);
		qdisc_put(q->qdisc);
		q->qdisc = child;
	}
	q->limit = qopt->limit;
	if (tb[TCA_TBF_PBURST])
		q->mtu = mtu;
	else
		q->mtu = PSCHED_TICKS2NS(qopt->mtu);
	q->max_size = max_size;
	if (tb[TCA_TBF_BURST])
		q->buffer = buffer;
	else
		q->buffer = PSCHED_TICKS2NS(qopt->buffer);
	q->tokens = q->buffer;
	q->ptokens = q->mtu;

	memcpy(&q->rate, &rate, sizeof(struct psched_ratecfg));
	memcpy(&q->peak, &peak, sizeof(struct psched_ratecfg));

	sch_tree_unlock(sch);
	err = 0;

	tbf_offload_change(sch);
done:
	return err;
}
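
/*
 * For reference, a typical userspace configuration that exercises
 * tbf_change() (standard tc syntax; the numbers are only an example):
 *
 *	tc qdisc add dev eth0 root tbf rate 1mbit burst 10kb latency 70ms
 *
 * tc converts the latency bound into the byte limit delivered here in
 * qopt->limit.
 */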

static int tbf_init(struct Qdisc *sch, struct nlattr *opt,
		    struct netlink_ext_ack *extack)
{
	struct tbf_sched_data *q = qdisc_priv(sch);

	qdisc_watchdog_init(&q->watchdog, sch);
	q->qdisc = &noop_qdisc;

	if (!opt)
		return -EINVAL;

	q->t_c = ktime_get_ns();

	return tbf_change(sch, opt, extack);
}

static void tbf_destroy(struct Qdisc *sch)
{
	struct tbf_sched_data *q = qdisc_priv(sch);

	qdisc_watchdog_cancel(&q->watchdog);
	tbf_offload_destroy(sch);
	qdisc_put(q->qdisc);
}

static int tbf_dump(struct Qdisc *sch, struct sk_buff *skb)
{
	struct tbf_sched_data *q = qdisc_priv(sch);
	struct nlattr *nest;
	struct tc_tbf_qopt opt;
	int err;

	err = tbf_offload_dump(sch);
	if (err)
		return err;

	nest = nla_nest_start_noflag(skb, TCA_OPTIONS);
	if (nest == NULL)
		goto nla_put_failure;

	opt.limit = q->limit;
	psched_ratecfg_getrate(&opt.rate, &q->rate);
	if (tbf_peak_present(q))
		psched_ratecfg_getrate(&opt.peakrate, &q->peak);
	else
		memset(&opt.peakrate, 0, sizeof(opt.peakrate));
	opt.mtu = PSCHED_NS2TICKS(q->mtu);
	opt.buffer = PSCHED_NS2TICKS(q->buffer);
	if (nla_put(skb, TCA_TBF_PARMS, sizeof(opt), &opt))
		goto nla_put_failure;
	if (q->rate.rate_bytes_ps >= (1ULL << 32) &&
	    nla_put_u64_64bit(skb, TCA_TBF_RATE64, q->rate.rate_bytes_ps,
			      TCA_TBF_PAD))
		goto nla_put_failure;
	if (tbf_peak_present(q) &&
	    q->peak.rate_bytes_ps >= (1ULL << 32) &&
	    nla_put_u64_64bit(skb, TCA_TBF_PRATE64, q->peak.rate_bytes_ps,
			      TCA_TBF_PAD))
		goto nla_put_failure;

	return nla_nest_end(skb, nest);

nla_put_failure:
	nla_nest_cancel(skb, nest);
	return -1;
}

static int tbf_dump_class(struct Qdisc *sch, unsigned long cl,
			  struct sk_buff *skb, struct tcmsg *tcm)
{
	struct tbf_sched_data *q = qdisc_priv(sch);

	tcm->tcm_handle |= TC_H_MIN(1);
	tcm->tcm_info = q->qdisc->handle;

	return 0;
}

static int tbf_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
		     struct Qdisc **old, struct netlink_ext_ack *extack)
{
	struct tbf_sched_data *q = qdisc_priv(sch);

	if (new == NULL)
		new = &noop_qdisc;

	*old = qdisc_replace(sch, new, &q->qdisc);
	return 0;
}

static struct Qdisc *tbf_leaf(struct Qdisc *sch, unsigned long arg)
{
	struct tbf_sched_data *q = qdisc_priv(sch);

	return q->qdisc;
}

static unsigned long tbf_find(struct Qdisc *sch, u32 classid)
{
	return 1;
}

static void tbf_walk(struct Qdisc *sch, struct qdisc_walker *walker)
{
	if (!walker->stop) {
		if (walker->count >= walker->skip)
			if (walker->fn(sch, 1, walker) < 0) {
				walker->stop = 1;
				return;
			}
		walker->count++;
	}
}

static const struct Qdisc_class_ops tbf_class_ops = {
	.graft		=	tbf_graft,
	.leaf		=	tbf_leaf,
	.find		=	tbf_find,
	.walk		=	tbf_walk,
	.dump		=	tbf_dump_class,
};

static struct Qdisc_ops tbf_qdisc_ops __read_mostly = {
	.next		=	NULL,
	.cl_ops		=	&tbf_class_ops,
	.id		=	"tbf",
	.priv_size	=	sizeof(struct tbf_sched_data),
	.enqueue	=	tbf_enqueue,
	.dequeue	=	tbf_dequeue,
	.peek		=	qdisc_peek_dequeued,
	.init		=	tbf_init,
	.reset		=	tbf_reset,
	.destroy	=	tbf_destroy,
	.change		=	tbf_change,
	.dump		=	tbf_dump,
	.owner		=	THIS_MODULE,
};

static int __init tbf_module_init(void)
{
	return register_qdisc(&tbf_qdisc_ops);
}

static void __exit tbf_module_exit(void)
{
	unregister_qdisc(&tbf_qdisc_ops);
}
module_init(tbf_module_init)
module_exit(tbf_module_exit)
MODULE_LICENSE("GPL");