Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause

/* COMMON Applications Kept Enhanced (CAKE) discipline
 *
 * Copyright (C) 2014-2018 Jonathan Morton <chromatix99@gmail.com>
 * Copyright (C) 2015-2018 Toke Høiland-Jørgensen <toke@toke.dk>
 * Copyright (C) 2014-2018 Dave Täht <dave.taht@gmail.com>
 * Copyright (C) 2015-2018 Sebastian Moeller <moeller0@gmx.de>
 * (C) 2015-2018 Kevin Darbyshire-Bryant <kevin@darbyshire-bryant.me.uk>
 * Copyright (C) 2017-2018 Ryan Mounce <ryan@mounce.com.au>
 *
 * The CAKE Principles:
 *		   (or, how to have your cake and eat it too)
 *
 * This is a combination of several shaping, AQM and FQ techniques into one
 * easy-to-use package:
 *
 * - An overall bandwidth shaper, to move the bottleneck away from dumb CPE
 *   equipment and bloated MACs.  This operates in deficit mode (as in sch_fq),
 *   eliminating the need for any sort of burst parameter (eg. token bucket
 *   depth).  Burst support is limited to that necessary to overcome scheduling
 *   latency.
 *
 * - A Diffserv-aware priority queue, giving more priority to certain classes,
 *   up to a specified fraction of bandwidth.  Above that bandwidth threshold,
 *   the priority is reduced to avoid starving other tins.
 *
 * - Each priority tin has a separate Flow Queue system, to isolate traffic
 *   flows from each other.  This prevents a burst on one flow from increasing
 *   the delay to another.  Flows are distributed to queues using a
 *   set-associative hash function.
 *
 * - Each queue is actively managed by Cobalt, which is a combination of the
 *   Codel and Blue AQM algorithms.  This serves flows fairly, and signals
 *   congestion early via ECN (if available) and/or packet drops, to keep
 *   latency low.  The codel parameters are auto-tuned based on the bandwidth
 *   setting, as is necessary at low bandwidths.
 *
 * The configuration parameters are kept deliberately simple for ease of use.
 * Everything has sane defaults.  Complete generality of configuration is *not*
 * a goal.
 *
 * The priority queue operates according to a weighted DRR scheme, combined with
 * a bandwidth tracker which reuses the shaper logic to detect which side of the
 * bandwidth sharing threshold the tin is operating on.  This determines whether
 * a priority-based weight (high) or a bandwidth-based weight (low) is used for
 * that tin in the current pass.
 *
 * This qdisc was inspired by Eric Dumazet's fq_codel code, which he kindly
 * granted us permission to leverage.
 */

#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/string.h>
#include <linux/in.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/skbuff.h>
#include <linux/jhash.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/reciprocal_div.h>
#include <net/netlink.h>
#include <linux/if_vlan.h>
#include <net/pkt_sched.h>
#include <net/pkt_cls.h>
#include <net/tcp.h>
#include <net/flow_dissector.h>

#if IS_ENABLED(CONFIG_NF_CONNTRACK)
#include <net/netfilter/nf_conntrack_core.h>
#endif

#define CAKE_SET_WAYS (8)
#define CAKE_MAX_TINS (8)
#define CAKE_QUEUES (1024)
#define CAKE_FLOW_MASK 63
#define CAKE_FLOW_NAT_FLAG 64
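/* The flow_mode byte packs the selected flow-dissection mode into the low
 * six bits (CAKE_FLOW_MASK) and carries the NAT-awareness toggle in bit 6
 * (CAKE_FLOW_NAT_FLAG); see nat_enabled in cake_hash() below.
 */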

/* struct cobalt_params - contains codel and blue parameters
 * @interval:	codel initial drop rate
 * @target:     maximum persistent sojourn time & blue update rate
 * @mtu_time:   serialisation delay of maximum-size packet
 * @p_inc:      increment of blue drop probability (0.32 fxp)
 * @p_dec:      decrement of blue drop probability (0.32 fxp)
 */
struct cobalt_params {
	u64	interval;
	u64	target;
	u64	mtu_time;
	u32	p_inc;
	u32	p_dec;
};

/* struct cobalt_vars - contains codel and blue variables
 * @count:		codel dropping frequency
 * @rec_inv_sqrt:	reciprocal value of sqrt(count) >> 1
 * @drop_next:		time to drop next packet, or when we dropped last
 * @blue_timer:		Blue time to next drop
 * @p_drop:		BLUE drop probability (0.32 fxp)
 * @dropping:		set if in dropping state
 * @ecn_marked:		set if marked
 */
struct cobalt_vars {
	u32	count;
	u32	rec_inv_sqrt;
	ktime_t	drop_next;
	ktime_t	blue_timer;
	u32	p_drop;
	bool	dropping;
	bool	ecn_marked;
};

enum {
	CAKE_SET_NONE = 0,
	CAKE_SET_SPARSE,
	CAKE_SET_SPARSE_WAIT, /* counted in SPARSE, actually in BULK */
	CAKE_SET_BULK,
	CAKE_SET_DECAYING
};

struct cake_flow {
	/* this stuff is all needed per-flow at dequeue time */
	struct sk_buff	  *head;
	struct sk_buff	  *tail;
	struct list_head  flowchain;
	s32		  deficit;
	u32		  dropped;
	struct cobalt_vars cvars;
	u16		  srchost; /* index into cake_host table */
	u16		  dsthost;
	u8		  set;
}; /* please try to keep this structure <= 64 bytes */

struct cake_host {
	u32 srchost_tag;
	u32 dsthost_tag;
	u16 srchost_bulk_flow_count;
	u16 dsthost_bulk_flow_count;
};

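/* One overflow-heap entry per queue: @t is the tin index (3 bits cover
 * CAKE_MAX_TINS == 8) and @b the flow index within it (10 bits cover
 * CAKE_QUEUES == 1024).
 */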
struct cake_heap_entry {
	u16 t:3, b:10;
};

struct cake_tin_data {
	struct cake_flow flows[CAKE_QUEUES];
	u32	backlogs[CAKE_QUEUES];
	u32	tags[CAKE_QUEUES]; /* for set association */
	u16	overflow_idx[CAKE_QUEUES];
	struct cake_host hosts[CAKE_QUEUES]; /* for triple isolation */
	u16	flow_quantum;

	struct cobalt_params cparams;
	u32	drop_overlimit;
	u16	bulk_flow_count;
	u16	sparse_flow_count;
	u16	decaying_flow_count;
	u16	unresponsive_flow_count;

	u32	max_skblen;

	struct list_head new_flows;
	struct list_head old_flows;
	struct list_head decaying_flows;

	/* time_next = time_this + ((len * rate_ns) >> rate_shft) */
	ktime_t	time_next_packet;
	u64	tin_rate_ns;
	u64	tin_rate_bps;
	u16	tin_rate_shft;

	u16	tin_quantum;
	s32	tin_deficit;
	u32	tin_backlog;
	u32	tin_dropped;
	u32	tin_ecn_mark;

	u32	packets;
	u64	bytes;

	u32	ack_drops;

	/* moving averages */
	u64 avge_delay;
	u64 peak_delay;
	u64 base_delay;

	/* hash function stats */
	u32	way_directs;
	u32	way_hits;
	u32	way_misses;
	u32	way_collisions;
}; /* number of tins is small, so size of this struct doesn't matter much */

struct cake_sched_data {
	struct tcf_proto __rcu *filter_list; /* optional external classifier */
	struct tcf_block *block;
	struct cake_tin_data *tins;

	struct cake_heap_entry overflow_heap[CAKE_QUEUES * CAKE_MAX_TINS];
	u16		overflow_timeout;

	u16		tin_cnt;
	u8		tin_mode;
	u8		flow_mode;
	u8		ack_filter;
	u8		atm_mode;

	u32		fwmark_mask;
	u16		fwmark_shft;

	/* time_next = time_this + ((len * rate_ns) >> rate_shft) */
	u16		rate_shft;
	ktime_t		time_next_packet;
	ktime_t		failsafe_next_packet;
	u64		rate_ns;
	u64		rate_bps;
	u16		rate_flags;
	s16		rate_overhead;
	u16		rate_mpu;
	u64		interval;
	u64		target;

	/* resource tracking */
	u32		buffer_used;
	u32		buffer_max_used;
	u32		buffer_limit;
	u32		buffer_config_limit;

	/* indices for dequeue */
	u16		cur_tin;
	u16		cur_flow;

	struct qdisc_watchdog watchdog;
	const u8	*tin_index;
	const u8	*tin_order;

	/* bandwidth capacity estimate */
	ktime_t		last_packet_time;
	ktime_t		avg_window_begin;
	u64		avg_packet_interval;
	u64		avg_window_bytes;
	u64		avg_peak_bandwidth;
	ktime_t		last_reconfig_time;

	/* packet length stats */
	u32		avg_netoff;
	u16		max_netlen;
	u16		max_adjlen;
	u16		min_netlen;
	u16		min_adjlen;
};

enum {
	CAKE_FLAG_OVERHEAD	   = BIT(0),
	CAKE_FLAG_AUTORATE_INGRESS = BIT(1),
	CAKE_FLAG_INGRESS	   = BIT(2),
	CAKE_FLAG_WASH		   = BIT(3),
	CAKE_FLAG_SPLIT_GSO	   = BIT(4)
};
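
/* These flags are stored in cake_sched_data->rate_flags and correspond to
 * the tc-cake(8) keywords of the same names (overhead, autorate-ingress,
 * ingress, wash, split-gso).
 */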

/* COBALT operates the Codel and BLUE algorithms in parallel, in order to
 * obtain the best features of each.  Codel is excellent on flows which
 * respond to congestion signals in a TCP-like way.  BLUE is more effective on
 * unresponsive flows.
 */

struct cobalt_skb_cb {
	ktime_t enqueue_time;
	u32     adjusted_len;
};

static u64 us_to_ns(u64 us)
{
	return us * NSEC_PER_USEC;
}

static struct cobalt_skb_cb *get_cobalt_cb(const struct sk_buff *skb)
{
	qdisc_cb_private_validate(skb, sizeof(struct cobalt_skb_cb));
	return (struct cobalt_skb_cb *)qdisc_skb_cb(skb)->data;
}

static ktime_t cobalt_get_enqueue_time(const struct sk_buff *skb)
{
	return get_cobalt_cb(skb)->enqueue_time;
}

static void cobalt_set_enqueue_time(struct sk_buff *skb,
				    ktime_t now)
{
	get_cobalt_cb(skb)->enqueue_time = now;
}

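/* Reciprocal-divide table: entry i holds roughly 65535 / i, a Q0.16
 * reciprocal, so sharing a quantum among i flows reduces to a multiply and
 * a 16-bit shift instead of a per-packet division.  It is filled in at
 * module load time (outside this excerpt).
 */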
static u16 quantum_div[CAKE_QUEUES + 1] = {0};

/* Diffserv lookup tables */

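/* Each table below is indexed by the 6-bit DSCP value from the IP header and
 * returns the tin a packet with that marking is assigned to under the
 * corresponding Diffserv scheme.
 */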
static const u8 precedence[] = {
	0, 0, 0, 0, 0, 0, 0, 0,
	1, 1, 1, 1, 1, 1, 1, 1,
	2, 2, 2, 2, 2, 2, 2, 2,
	3, 3, 3, 3, 3, 3, 3, 3,
	4, 4, 4, 4, 4, 4, 4, 4,
	5, 5, 5, 5, 5, 5, 5, 5,
	6, 6, 6, 6, 6, 6, 6, 6,
	7, 7, 7, 7, 7, 7, 7, 7,
};

static const u8 diffserv8[] = {
	2, 0, 1, 2, 4, 2, 2, 2,
	1, 2, 1, 2, 1, 2, 1, 2,
	5, 2, 4, 2, 4, 2, 4, 2,
	3, 2, 3, 2, 3, 2, 3, 2,
	6, 2, 3, 2, 3, 2, 3, 2,
	6, 2, 2, 2, 6, 2, 6, 2,
	7, 2, 2, 2, 2, 2, 2, 2,
	7, 2, 2, 2, 2, 2, 2, 2,
};

static const u8 diffserv4[] = {
	0, 1, 0, 0, 2, 0, 0, 0,
	1, 0, 0, 0, 0, 0, 0, 0,
	2, 0, 2, 0, 2, 0, 2, 0,
	2, 0, 2, 0, 2, 0, 2, 0,
	3, 0, 2, 0, 2, 0, 2, 0,
	3, 0, 0, 0, 3, 0, 3, 0,
	3, 0, 0, 0, 0, 0, 0, 0,
	3, 0, 0, 0, 0, 0, 0, 0,
};

static const u8 diffserv3[] = {
	0, 1, 0, 0, 2, 0, 0, 0,
	1, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 2, 0, 2, 0,
	2, 0, 0, 0, 0, 0, 0, 0,
	2, 0, 0, 0, 0, 0, 0, 0,
};

static const u8 besteffort[] = {
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
	0, 0, 0, 0, 0, 0, 0, 0,
};

/* tin priority order for stats dumping */

static const u8 normal_order[] = {0, 1, 2, 3, 4, 5, 6, 7};
static const u8 bulk_order[] = {1, 0, 2, 3};

#define REC_INV_SQRT_CACHE (16)
static u32 cobalt_rec_inv_sqrt_cache[REC_INV_SQRT_CACHE] = {0};

/* http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
 * new_invsqrt = (invsqrt / 2) * (3 - count * invsqrt^2)
 *
 * Here, invsqrt is a fixed point number (< 1.0), 32bit mantissa, aka Q0.32
 */

static void cobalt_newton_step(struct cobalt_vars *vars)
{
	u32 invsqrt, invsqrt2;
	u64 val;

	invsqrt = vars->rec_inv_sqrt;
	invsqrt2 = ((u64)invsqrt * invsqrt) >> 32;
	val = (3LL << 32) - ((u64)vars->count * invsqrt2);

	val >>= 2; /* avoid overflow in following multiply */
	val = (val * invsqrt) >> (32 - 2 + 1);

	vars->rec_inv_sqrt = val;
}

static void cobalt_invsqrt(struct cobalt_vars *vars)
{
	if (vars->count < REC_INV_SQRT_CACHE)
		vars->rec_inv_sqrt = cobalt_rec_inv_sqrt_cache[vars->count];
	else
		cobalt_newton_step(vars);
}

/* There is a big difference in timing between the accurate values placed in
 * the cache and the approximations given by a single Newton step for small
 * count values, particularly when stepping from count 1 to 2 or vice versa.
 * Above 16, a single Newton step gives sufficient accuracy in either
 * direction, given the precision stored.
 *
 * The magnitude of the error when stepping up to count 2 is such as to give
 * the value that *should* have been produced at count 4.
 */

static void cobalt_cache_init(void)
{
	struct cobalt_vars v;

	memset(&v, 0, sizeof(v));
	v.rec_inv_sqrt = ~0U;
	cobalt_rec_inv_sqrt_cache[0] = v.rec_inv_sqrt;

	for (v.count = 1; v.count < REC_INV_SQRT_CACHE; v.count++) {
		cobalt_newton_step(&v);
		cobalt_newton_step(&v);
		cobalt_newton_step(&v);
		cobalt_newton_step(&v);

		cobalt_rec_inv_sqrt_cache[v.count] = v.rec_inv_sqrt;
	}
}

static void cobalt_vars_init(struct cobalt_vars *vars)
{
	memset(vars, 0, sizeof(*vars));

	if (!cobalt_rec_inv_sqrt_cache[0]) {
		cobalt_cache_init();
		cobalt_rec_inv_sqrt_cache[0] = ~0;
	}
}

/* CoDel control_law is t + interval/sqrt(count)
 * We maintain in rec_inv_sqrt the reciprocal value of sqrt(count) to avoid
 * both sqrt() and divide operation.
 */
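/* Worked example: with interval = 100 ms and count = 4, rec_inv_sqrt holds
 * 1/sqrt(4) = 0.5 in Q0.32 (0x80000000), so reciprocal_scale() returns
 * 100 ms * 0.5 and the next signal lands 50 ms after t.
 */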
static ktime_t cobalt_control(ktime_t t,
			      u64 interval,
			      u32 rec_inv_sqrt)
{
	return ktime_add_ns(t, reciprocal_scale(interval,
						rec_inv_sqrt));
}

/* Call this when a packet had to be dropped due to queue overflow.  Returns
 * true if the BLUE state was quiescent before but active after this call.
 */
static bool cobalt_queue_full(struct cobalt_vars *vars,
			      struct cobalt_params *p,
			      ktime_t now)
{
	bool up = false;

	if (ktime_to_ns(ktime_sub(now, vars->blue_timer)) > p->target) {
		up = !vars->p_drop;
		vars->p_drop += p->p_inc;
		if (vars->p_drop < p->p_inc)
			vars->p_drop = ~0;
		vars->blue_timer = now;
	}
	vars->dropping = true;
	vars->drop_next = now;
	if (!vars->count)
		vars->count = 1;

	return up;
}

/* Call this when the queue was serviced but turned out to be empty.  Returns
 * true if the BLUE state was active before but quiescent after this call.
 */
static bool cobalt_queue_empty(struct cobalt_vars *vars,
			       struct cobalt_params *p,
			       ktime_t now)
{
	bool down = false;

	if (vars->p_drop &&
	    ktime_to_ns(ktime_sub(now, vars->blue_timer)) > p->target) {
		if (vars->p_drop < p->p_dec)
			vars->p_drop = 0;
		else
			vars->p_drop -= p->p_dec;
		vars->blue_timer = now;
		down = !vars->p_drop;
	}
	vars->dropping = false;

	if (vars->count && ktime_to_ns(ktime_sub(now, vars->drop_next)) >= 0) {
		vars->count--;
		cobalt_invsqrt(vars);
		vars->drop_next = cobalt_control(vars->drop_next,
						 p->interval,
						 vars->rec_inv_sqrt);
	}

	return down;
}

/* Call this with a freshly dequeued packet for possible congestion marking.
 * Returns true as an instruction to drop the packet, false for delivery.
 */
static bool cobalt_should_drop(struct cobalt_vars *vars,
			       struct cobalt_params *p,
			       ktime_t now,
			       struct sk_buff *skb,
			       u32 bulk_flows)
{
	bool next_due, over_target, drop = false;
	ktime_t schedule;
	u64 sojourn;

/* The 'schedule' variable records, in its sign, whether 'now' is before or
 * after 'drop_next'.  This allows 'drop_next' to be updated before the next
 * scheduling decision is actually branched, without destroying that
 * information.  Similarly, the first 'schedule' value calculated is preserved
 * in the boolean 'next_due'.
 *
 * As for 'drop_next', we take advantage of the fact that 'interval' is both
 * the delay between first exceeding 'target' and the first signalling event,
 * *and* the scaling factor for the signalling frequency.  It's therefore very
 * natural to use a single mechanism for both purposes, and eliminates a
 * significant amount of reference Codel's spaghetti code.  To help with this,
 * both the '0' and '1' entries in the invsqrt cache are 0xFFFFFFFF, as close
 * as possible to 1.0 in fixed-point.
 */

	sojourn = ktime_to_ns(ktime_sub(now, cobalt_get_enqueue_time(skb)));
	schedule = ktime_sub(now, vars->drop_next);
	over_target = sojourn > p->target &&
		      sojourn > p->mtu_time * bulk_flows * 2 &&
		      sojourn > p->mtu_time * 4;
	next_due = vars->count && ktime_to_ns(schedule) >= 0;

	vars->ecn_marked = false;

	if (over_target) {
		if (!vars->dropping) {
			vars->dropping = true;
			vars->drop_next = cobalt_control(now,
							 p->interval,
							 vars->rec_inv_sqrt);
		}
		if (!vars->count)
			vars->count = 1;
	} else if (vars->dropping) {
		vars->dropping = false;
	}

	if (next_due && vars->dropping) {
		/* Use ECN mark if possible, otherwise drop */
		drop = !(vars->ecn_marked = INET_ECN_set_ce(skb));

		vars->count++;
		if (!vars->count)
			vars->count--;
		cobalt_invsqrt(vars);
		vars->drop_next = cobalt_control(vars->drop_next,
						 p->interval,
						 vars->rec_inv_sqrt);
		schedule = ktime_sub(now, vars->drop_next);
	} else {
		while (next_due) {
			vars->count--;
			cobalt_invsqrt(vars);
			vars->drop_next = cobalt_control(vars->drop_next,
							 p->interval,
							 vars->rec_inv_sqrt);
			schedule = ktime_sub(now, vars->drop_next);
			next_due = vars->count && ktime_to_ns(schedule) >= 0;
		}
	}

	/* Simple BLUE implementation.  Lack of ECN is deliberate. */
	if (vars->p_drop)
		drop |= (prandom_u32() < vars->p_drop);

	/* Overload the drop_next field as an activity timeout */
	if (!vars->count)
		vars->drop_next = ktime_add_ns(now, p->interval);
	else if (ktime_to_ns(schedule) > 0 && !drop)
		vars->drop_next = now;

	return drop;
}

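/* With NAT awareness enabled, replace the dissected flow keys with the
 * pre-NAT addresses and ports from the conntrack tuple, so flows are
 * isolated per internal host rather than per masqueraded address.  Returns
 * true if any key changed, which also tells the caller that skb->hash is
 * stale.
 */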
static bool cake_update_flowkeys(struct flow_keys *keys,
				 const struct sk_buff *skb)
{
#if IS_ENABLED(CONFIG_NF_CONNTRACK)
	struct nf_conntrack_tuple tuple = {};
	bool rev = !skb->_nfct, upd = false;
	__be32 ip;

	if (skb_protocol(skb, true) != htons(ETH_P_IP))
		return false;

	if (!nf_ct_get_tuple_skb(&tuple, skb))
		return false;

	ip = rev ? tuple.dst.u3.ip : tuple.src.u3.ip;
	if (ip != keys->addrs.v4addrs.src) {
		keys->addrs.v4addrs.src = ip;
		upd = true;
	}
	ip = rev ? tuple.src.u3.ip : tuple.dst.u3.ip;
	if (ip != keys->addrs.v4addrs.dst) {
		keys->addrs.v4addrs.dst = ip;
		upd = true;
	}

	if (keys->ports.ports) {
		__be16 port;

		port = rev ? tuple.dst.u.all : tuple.src.u.all;
		if (port != keys->ports.src) {
			keys->ports.src = port;
			upd = true;
		}
		port = rev ? tuple.src.u.all : tuple.dst.u.all;
		if (port != keys->ports.dst) {
			keys->ports.dst = port;
			upd = true;
		}
	}
	return upd;
#else
	return false;
#endif
}

/* Cake's flow modes overlap in their bit patterns, so these tests are
 * deliberately inclusive: triple-isolate mode sets both dual-mode bit
 * patterns, and therefore matches both helpers below.
 */

static bool cake_dsrc(int flow_mode)
{
	return (flow_mode & CAKE_FLOW_DUAL_SRC) == CAKE_FLOW_DUAL_SRC;
}

static bool cake_ddst(int flow_mode)
{
	return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST;
}

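/* flow_override and host_override come from the external classifier attached
 * via filter_list: zero means "no override", any other value is a 1-based
 * index that replaces the corresponding hash result.
 */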
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  646) static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  647) 		     int flow_mode, u16 flow_override, u16 host_override)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  648) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  649) 	bool hash_flows = (!flow_override && !!(flow_mode & CAKE_FLOW_FLOWS));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  650) 	bool hash_hosts = (!host_override && !!(flow_mode & CAKE_FLOW_HOSTS));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  651) 	bool nat_enabled = !!(flow_mode & CAKE_FLOW_NAT_FLAG);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  652) 	u32 flow_hash = 0, srchost_hash = 0, dsthost_hash = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  653) 	u16 reduced_hash, srchost_idx, dsthost_idx;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  654) 	struct flow_keys keys, host_keys;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  655) 	bool use_skbhash = skb->l4_hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  656) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  657) 	if (unlikely(flow_mode == CAKE_FLOW_NONE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  658) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  659) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  660) 	/* If both overrides are set, or we can use the SKB hash and nat mode is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  661) 	 * disabled, we can skip packet dissection entirely. If nat mode is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  662) 	 * enabled there's another check below after doing the conntrack lookup.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  663) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  664) 	if ((!hash_flows || (use_skbhash && !nat_enabled)) && !hash_hosts)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  665) 		goto skip_hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  666) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  667) 	skb_flow_dissect_flow_keys(skb, &keys,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  668) 				   FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  669) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  670) 	/* Don't use the SKB hash if we change the lookup keys from conntrack */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  671) 	if (nat_enabled && cake_update_flowkeys(&keys, skb))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  672) 		use_skbhash = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  673) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  674) 	/* If we can still use the SKB hash and don't need the host hash, we can
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  675) 	 * skip the rest of the hashing procedure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  676) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  677) 	if (use_skbhash && !hash_hosts)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  678) 		goto skip_hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  679) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  680) 	/* flow_hash_from_keys() sorts the addresses by value, so we have
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  681) 	 * to preserve their order in a separate data structure to treat
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  682) 	 * src and dst host addresses as independently selectable.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  683) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  684) 	host_keys = keys;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  685) 	host_keys.ports.ports     = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  686) 	host_keys.basic.ip_proto  = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  687) 	host_keys.keyid.keyid     = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  688) 	host_keys.tags.flow_label = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  689) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  690) 	switch (host_keys.control.addr_type) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  691) 	case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  692) 		host_keys.addrs.v4addrs.src = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  693) 		dsthost_hash = flow_hash_from_keys(&host_keys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  694) 		host_keys.addrs.v4addrs.src = keys.addrs.v4addrs.src;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  695) 		host_keys.addrs.v4addrs.dst = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  696) 		srchost_hash = flow_hash_from_keys(&host_keys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  697) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  698) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  699) 	case FLOW_DISSECTOR_KEY_IPV6_ADDRS:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  700) 		memset(&host_keys.addrs.v6addrs.src, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  701) 		       sizeof(host_keys.addrs.v6addrs.src));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  702) 		dsthost_hash = flow_hash_from_keys(&host_keys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  703) 		host_keys.addrs.v6addrs.src = keys.addrs.v6addrs.src;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  704) 		memset(&host_keys.addrs.v6addrs.dst, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  705) 		       sizeof(host_keys.addrs.v6addrs.dst));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  706) 		srchost_hash = flow_hash_from_keys(&host_keys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  707) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  708) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  709) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  710) 		dsthost_hash = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  711) 		srchost_hash = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  712) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  713) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  714) 	/* This *must* be after the above switch, since as a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  715) 	 * side-effect it sorts the src and dst addresses.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  716) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  717) 	if (hash_flows && !use_skbhash)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  718) 		flow_hash = flow_hash_from_keys(&keys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  719) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  720) skip_hash:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  721) 	if (flow_override)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  722) 		flow_hash = flow_override - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  723) 	else if (use_skbhash && (flow_mode & CAKE_FLOW_FLOWS))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  724) 		flow_hash = skb->hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  725) 	if (host_override) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  726) 		dsthost_hash = host_override - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  727) 		srchost_hash = host_override - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  728) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  729) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  730) 	if (!(flow_mode & CAKE_FLOW_FLOWS)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  731) 		if (flow_mode & CAKE_FLOW_SRC_IP)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  732) 			flow_hash ^= srchost_hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  733) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  734) 		if (flow_mode & CAKE_FLOW_DST_IP)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  735) 			flow_hash ^= dsthost_hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  736) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  737) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  738) 	reduced_hash = flow_hash % CAKE_QUEUES;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  739) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  740) 	/* set-associative hashing */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  741) 	/* fast path if no hash collision (direct lookup succeeds) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  742) 	if (likely(q->tags[reduced_hash] == flow_hash &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  743) 		   q->flows[reduced_hash].set)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  744) 		q->way_directs++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  745) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  746) 		u32 inner_hash = reduced_hash % CAKE_SET_WAYS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  747) 		u32 outer_hash = reduced_hash - inner_hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  748) 		bool allocate_src = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  749) 		bool allocate_dst = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  750) 		u32 i, k;

		/* check if any active queue in the set is reserved for
		 * this flow.
		 */
		for (i = 0, k = inner_hash; i < CAKE_SET_WAYS;
		     i++, k = (k + 1) % CAKE_SET_WAYS) {
			if (q->tags[outer_hash + k] == flow_hash) {
				if (i)
					q->way_hits++;

				if (!q->flows[outer_hash + k].set) {
					/* need to increment host refcnts */
					allocate_src = cake_dsrc(flow_mode);
					allocate_dst = cake_ddst(flow_mode);
				}

				goto found;
			}
		}

		/* no queue is reserved for this flow, look for an
		 * empty one.
		 */
		for (i = 0; i < CAKE_SET_WAYS;
			 i++, k = (k + 1) % CAKE_SET_WAYS) {
			if (!q->flows[outer_hash + k].set) {
				q->way_misses++;
				allocate_src = cake_dsrc(flow_mode);
				allocate_dst = cake_ddst(flow_mode);
				goto found;
			}
		}

		/* With no empty queues, default to the original
		 * queue, accept the collision, update the host tags.
		 */
		q->way_collisions++;
		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
			/* Decrement the bulk-flow counts of the flow actually
			 * being evicted (outer_hash + k); using reduced_hash
			 * here would unbalance the host accounting.
			 */
			q->hosts[q->flows[outer_hash + k].srchost].srchost_bulk_flow_count--;
			q->hosts[q->flows[outer_hash + k].dsthost].dsthost_bulk_flow_count--;
		}
		allocate_src = cake_dsrc(flow_mode);
		allocate_dst = cake_ddst(flow_mode);
found:
		/* reserve queue for future packets in same flow */
		reduced_hash = outer_hash + k;
		q->tags[reduced_hash] = flow_hash;

		if (allocate_src) {
			srchost_idx = srchost_hash % CAKE_QUEUES;
			inner_hash = srchost_idx % CAKE_SET_WAYS;
			outer_hash = srchost_idx - inner_hash;
			for (i = 0, k = inner_hash; i < CAKE_SET_WAYS;
				i++, k = (k + 1) % CAKE_SET_WAYS) {
				if (q->hosts[outer_hash + k].srchost_tag ==
				    srchost_hash)
					goto found_src;
			}
			for (i = 0; i < CAKE_SET_WAYS;
				i++, k = (k + 1) % CAKE_SET_WAYS) {
				if (!q->hosts[outer_hash + k].srchost_bulk_flow_count)
					break;
			}
			q->hosts[outer_hash + k].srchost_tag = srchost_hash;
found_src:
			srchost_idx = outer_hash + k;
			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
				q->hosts[srchost_idx].srchost_bulk_flow_count++;
			q->flows[reduced_hash].srchost = srchost_idx;
		}

		if (allocate_dst) {
			dsthost_idx = dsthost_hash % CAKE_QUEUES;
			inner_hash = dsthost_idx % CAKE_SET_WAYS;
			outer_hash = dsthost_idx - inner_hash;
			for (i = 0, k = inner_hash; i < CAKE_SET_WAYS;
			     i++, k = (k + 1) % CAKE_SET_WAYS) {
				if (q->hosts[outer_hash + k].dsthost_tag ==
				    dsthost_hash)
					goto found_dst;
			}
			for (i = 0; i < CAKE_SET_WAYS;
			     i++, k = (k + 1) % CAKE_SET_WAYS) {
				if (!q->hosts[outer_hash + k].dsthost_bulk_flow_count)
					break;
			}
			q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
found_dst:
			dsthost_idx = outer_hash + k;
			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
				q->hosts[dsthost_idx].dsthost_bulk_flow_count++;
			q->flows[reduced_hash].dsthost = dsthost_idx;
		}
	}

	return reduced_hash;
}

/* helper functions: might be changed when/if skb gains a standard list_head */
/* remove one skb from head of slot queue */

static struct sk_buff *dequeue_head(struct cake_flow *flow)
{
	struct sk_buff *skb = flow->head;

	if (skb) {
		flow->head = skb->next;
		skb_mark_not_on_list(skb);
	}

	return skb;
}

/* add skb to flow queue (tail add) */

static void flow_queue_add(struct cake_flow *flow, struct sk_buff *skb)
{
	if (!flow->head)
		flow->head = skb;
	else
		flow->tail->next = skb;
	flow->tail = skb;
	skb->next = NULL;
}

static struct iphdr *cake_get_iphdr(const struct sk_buff *skb,
				    struct ipv6hdr *buf)
{
	unsigned int offset = skb_network_offset(skb);
	struct iphdr *iph;

	iph = skb_header_pointer(skb, offset, sizeof(struct iphdr), buf);

	if (!iph)
		return NULL;

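	/* An IPv4 header carrying protocol 41 (IPPROTO_IPV6) is a 6in4
	 * tunnel; return the inner IPv6 header instead, which begins
	 * iph->ihl * 4 bytes after the start of the outer IPv4 header.
	 */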
	if (iph->version == 4 && iph->protocol == IPPROTO_IPV6)
		return skb_header_pointer(skb, offset + iph->ihl * 4,
					  sizeof(struct ipv6hdr), buf);

	else if (iph->version == 4)
		return iph;

	else if (iph->version == 6)
		return skb_header_pointer(skb, offset, sizeof(struct ipv6hdr),
					  buf);

	return NULL;
}

static struct tcphdr *cake_get_tcphdr(const struct sk_buff *skb,
				      void *buf, unsigned int bufsize)
{
	unsigned int offset = skb_network_offset(skb);
	const struct ipv6hdr *ipv6h;
	const struct tcphdr *tcph;
	const struct iphdr *iph;
	struct ipv6hdr _ipv6h;
	struct tcphdr _tcph;

	ipv6h = skb_header_pointer(skb, offset, sizeof(_ipv6h), &_ipv6h);

	if (!ipv6h)
		return NULL;

	if (ipv6h->version == 4) {
		iph = (struct iphdr *)ipv6h;
		offset += iph->ihl * 4;

		/* special-case 6in4 tunnelling, as that is a common way to get
		 * v6 connectivity in the home
		 */
		if (iph->protocol == IPPROTO_IPV6) {
			ipv6h = skb_header_pointer(skb, offset,
						   sizeof(_ipv6h), &_ipv6h);

			if (!ipv6h || ipv6h->nexthdr != IPPROTO_TCP)
				return NULL;

			offset += sizeof(struct ipv6hdr);

		} else if (iph->protocol != IPPROTO_TCP) {
			return NULL;
		}

	} else if (ipv6h->version == 6) {
		if (ipv6h->nexthdr != IPPROTO_TCP)
			return NULL;

		offset += sizeof(struct ipv6hdr);
	} else {
		return NULL;
	}

	tcph = skb_header_pointer(skb, offset, sizeof(_tcph), &_tcph);
	if (!tcph || tcph->doff < 5)
		return NULL;

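	/* Re-read the header including its options; __tcp_hdrlen() is
	 * doff * 4, clamped here to the caller-supplied buffer size.
	 */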
	return skb_header_pointer(skb, offset,
				  min(__tcp_hdrlen(tcph), bufsize), buf);
}

static const void *cake_get_tcpopt(const struct tcphdr *tcph,
				   int code, int *oplen)
{
	/* inspired by tcp_parse_options in tcp_input.c */
	int length = __tcp_hdrlen(tcph) - sizeof(struct tcphdr);
	const u8 *ptr = (const u8 *)(tcph + 1);

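	/* TCP options are (kind, length, data) triples, except for the
	 * one-byte NOP and EOL kinds; "length" counts the kind and length
	 * octets themselves, so each option carries opsize - 2 data octets.
	 */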
	while (length > 0) {
		int opcode = *ptr++;
		int opsize;

		if (opcode == TCPOPT_EOL)
			break;
		if (opcode == TCPOPT_NOP) {
			length--;
			continue;
		}
		if (length < 2)
			break;
		opsize = *ptr++;
		if (opsize < 2 || opsize > length)
			break;

		if (opcode == code) {
			*oplen = opsize;
			return ptr;
		}

		ptr += opsize - 2;
		length -= opsize;
	}

	return NULL;
}

/* Compare two SACK sequences. A sequence is considered greater if it SACKs
 * more bytes than the other. Where each sequence SACKs bytes that the other
 * doesn't, A is considered greater. A DSACK in A also makes A considered
 * greater. For example, if A SACKs only [1000,2000) and B SACKs [1000,3000),
 * every range in A is covered by B and B covers more bytes, so the result
 * is 1.
 *
 * @return -1, 0 or 1 as normal compare functions
 */
static int cake_tcph_sack_compare(const struct tcphdr *tcph_a,
				  const struct tcphdr *tcph_b)
{
	const struct tcp_sack_block_wire *sack_a, *sack_b;
	u32 ack_seq_a = ntohl(tcph_a->ack_seq);
	u32 bytes_a = 0, bytes_b = 0;
	int oplen_a, oplen_b;
	bool first = true;

	sack_a = cake_get_tcpopt(tcph_a, TCPOPT_SACK, &oplen_a);
	sack_b = cake_get_tcpopt(tcph_b, TCPOPT_SACK, &oplen_b);

	/* pointers point to option contents */
	oplen_a -= TCPOLEN_SACK_BASE;
	oplen_b -= TCPOLEN_SACK_BASE;

	if (sack_a && oplen_a >= sizeof(*sack_a) &&
	    (!sack_b || oplen_b < sizeof(*sack_b)))
		return -1;
	else if (sack_b && oplen_b >= sizeof(*sack_b) &&
		 (!sack_a || oplen_a < sizeof(*sack_a)))
		return 1;
	else if ((!sack_a || oplen_a < sizeof(*sack_a)) &&
		 (!sack_b || oplen_b < sizeof(*sack_b)))
		return 0;

	while (oplen_a >= sizeof(*sack_a)) {
		const struct tcp_sack_block_wire *sack_tmp = sack_b;
		u32 start_a = get_unaligned_be32(&sack_a->start_seq);
		u32 end_a = get_unaligned_be32(&sack_a->end_seq);
		int oplen_tmp = oplen_b;
		bool found = false;

		/* DSACK; always considered greater to prevent dropping */
		if (before(start_a, ack_seq_a))
			return -1;

		bytes_a += end_a - start_a;

		while (oplen_tmp >= sizeof(*sack_tmp)) {
			u32 start_b = get_unaligned_be32(&sack_tmp->start_seq);
			u32 end_b = get_unaligned_be32(&sack_tmp->end_seq);

			/* first time through we count the total size */
			if (first)
				bytes_b += end_b - start_b;

			if (!after(start_b, start_a) && !before(end_b, end_a)) {
				found = true;
				if (!first)
					break;
			}
			oplen_tmp -= sizeof(*sack_tmp);
			sack_tmp++;
		}

		if (!found)
			return -1;

		oplen_a -= sizeof(*sack_a);
		sack_a++;
		first = false;
	}

	/* If we made it this far, all ranges SACKed by A are covered by B, so
	 * either the SACKs are equal, or B SACKs more bytes.
	 */
	return bytes_b > bytes_a ? 1 : 0;
}

static void cake_tcph_get_tstamp(const struct tcphdr *tcph,
				 u32 *tsval, u32 *tsecr)
{
	const u8 *ptr;
	int opsize;

	ptr = cake_get_tcpopt(tcph, TCPOPT_TIMESTAMP, &opsize);

	if (ptr && opsize == TCPOLEN_TIMESTAMP) {
		*tsval = get_unaligned_be32(ptr);
		*tsecr = get_unaligned_be32(ptr + 4);
	}
}

static bool cake_tcph_may_drop(const struct tcphdr *tcph,
			       u32 tstamp_new, u32 tsecr_new)
{
	/* inspired by tcp_parse_options in tcp_input.c */
	int length = __tcp_hdrlen(tcph) - sizeof(struct tcphdr);
	const u8 *ptr = (const u8 *)(tcph + 1);
	u32 tstamp, tsecr;

	/* 3 reserved flags must be unset to avoid future breakage
	 * ACK must be set
	 * ECE/CWR are handled separately
	 * All other flags URG/PSH/RST/SYN/FIN must be unset
	 * 0x0FFF0000 = all TCP flags (confirm ACK=1, others zero)
	 * 0x00C00000 = CWR/ECE (handled separately)
	 * 0x0F3F0000 = 0x0FFF0000 & ~0x00C00000
	 */
	if (((tcp_flag_word(tcph) &
	      cpu_to_be32(0x0F3F0000)) != TCP_FLAG_ACK))
		return false;

	while (length > 0) {
		int opcode = *ptr++;
		int opsize;

		if (opcode == TCPOPT_EOL)
			break;
		if (opcode == TCPOPT_NOP) {
			length--;
			continue;
		}
		if (length < 2)
			break;
		opsize = *ptr++;
		if (opsize < 2 || opsize > length)
			break;

		switch (opcode) {
		case TCPOPT_MD5SIG: /* doesn't influence state */
			break;

		case TCPOPT_SACK: /* stricter checking performed later */
			if (opsize % 8 != 2)
				return false;
			break;

		case TCPOPT_TIMESTAMP:
			/* only drop timestamps lower than new */
			if (opsize != TCPOLEN_TIMESTAMP)
				return false;
			tstamp = get_unaligned_be32(ptr);
			tsecr = get_unaligned_be32(ptr + 4);
			if (after(tstamp, tstamp_new) ||
			    after(tsecr, tsecr_new))
				return false;
			break;

		case TCPOPT_MSS:  /* these should only be set on SYN */
		case TCPOPT_WINDOW:
		case TCPOPT_SACK_PERM:
		case TCPOPT_FASTOPEN:
		case TCPOPT_EXP:
		default: /* don't drop if any unknown options are present */
			return false;
		}

		ptr += opsize - 2;
		length -= opsize;
	}

	return true;
}

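/* Scan @flow's queue for a redundant pure ACK to drop. In the default
 * conservative mode two eligible ACKs must be found, so that one of them
 * always remains queued; in aggressive mode a single eligible ACK may also
 * be dropped, provided it is immediately followed by the triggering packet
 * and carries the same ECE/CWR flags. Returns the filtered ACK (already
 * unlinked from the queue), or NULL.
 */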
static struct sk_buff *cake_ack_filter(struct cake_sched_data *q,
				       struct cake_flow *flow)
{
	bool aggressive = q->ack_filter == CAKE_ACK_AGGRESSIVE;
	struct sk_buff *elig_ack = NULL, *elig_ack_prev = NULL;
	struct sk_buff *skb_check, *skb_prev = NULL;
	const struct ipv6hdr *ipv6h, *ipv6h_check;
	unsigned char _tcph[64], _tcph_check[64];
	const struct tcphdr *tcph, *tcph_check;
	const struct iphdr *iph, *iph_check;
	struct ipv6hdr _iph, _iph_check;
	const struct sk_buff *skb;
	int seglen, num_found = 0;
	u32 tstamp = 0, tsecr = 0;
	__be32 elig_flags = 0;
	int sack_comp;

	/* no other possible ACKs to filter */
	if (flow->head == flow->tail)
		return NULL;

	skb = flow->tail;
	tcph = cake_get_tcphdr(skb, _tcph, sizeof(_tcph));
	iph = cake_get_iphdr(skb, &_iph);
	if (!tcph)
		return NULL;

	cake_tcph_get_tstamp(tcph, &tstamp, &tsecr);

	/* the 'triggering' packet need only have the ACK flag set.
	 * also check that SYN is not set, as there won't be any previous ACKs.
	 */
	if ((tcp_flag_word(tcph) &
	     (TCP_FLAG_ACK | TCP_FLAG_SYN)) != TCP_FLAG_ACK)
		return NULL;

	/* the 'triggering' ACK is at the tail of the queue, we have already
	 * returned if it is the only packet in the flow. loop through the rest
	 * of the queue looking for pure ACKs with the same 5-tuple as the
	 * triggering one.
	 */
	for (skb_check = flow->head;
	     skb_check && skb_check != skb;
	     skb_prev = skb_check, skb_check = skb_check->next) {
		iph_check = cake_get_iphdr(skb_check, &_iph_check);
		tcph_check = cake_get_tcphdr(skb_check, &_tcph_check,
					     sizeof(_tcph_check));

		/* only TCP packets with matching 5-tuple are eligible, and only
		 * drop safe headers
		 */
		if (!tcph_check || iph->version != iph_check->version ||
		    tcph_check->source != tcph->source ||
		    tcph_check->dest != tcph->dest)
			continue;

		if (iph_check->version == 4) {
			if (iph_check->saddr != iph->saddr ||
			    iph_check->daddr != iph->daddr)
				continue;

			seglen = ntohs(iph_check->tot_len) -
				       (4 * iph_check->ihl);
		} else if (iph_check->version == 6) {
			ipv6h = (struct ipv6hdr *)iph;
			ipv6h_check = (struct ipv6hdr *)iph_check;

			if (ipv6_addr_cmp(&ipv6h_check->saddr, &ipv6h->saddr) ||
			    ipv6_addr_cmp(&ipv6h_check->daddr, &ipv6h->daddr))
				continue;

			seglen = ntohs(ipv6h_check->payload_len);
		} else {
			WARN_ON(1);  /* shouldn't happen */
			continue;
		}

		/* If the ECE/CWR flags changed from the previous eligible
		 * packet in the same flow, we should no longer be dropping that
		 * previous packet as this would lose information.
		 */
		if (elig_ack && (tcp_flag_word(tcph_check) &
				 (TCP_FLAG_ECE | TCP_FLAG_CWR)) != elig_flags) {
			elig_ack = NULL;
			elig_ack_prev = NULL;
			num_found--;
		}
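
		/* (The decrement above also removes the discarded candidate
		 * from the two-ACK count used by the conservative-mode check
		 * further down.)
		 */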

		/* Check TCP options and flags, don't drop ACKs with segment
		 * data, and don't drop ACKs with a higher cumulative ACK
		 * counter than the triggering packet. Check ACK seqno here to
		 * avoid parsing SACK options of packets we are going to exclude
		 * anyway.
		 */
		if (!cake_tcph_may_drop(tcph_check, tstamp, tsecr) ||
		    (seglen - __tcp_hdrlen(tcph_check)) != 0 ||
		    after(ntohl(tcph_check->ack_seq), ntohl(tcph->ack_seq)))
			continue;

		/* Check SACK options. The triggering packet must SACK more data
		 * than the ACK under consideration, or SACK the same range but
		 * have a larger cumulative ACK counter. The latter is a
		 * pathological case, but is contained in the following check
		 * anyway, just to be safe.
		 */
		sack_comp = cake_tcph_sack_compare(tcph_check, tcph);

		if (sack_comp < 0 ||
		    (ntohl(tcph_check->ack_seq) == ntohl(tcph->ack_seq) &&
		     sack_comp == 0))
			continue;

		/* At this point we have found an eligible pure ACK to drop; if
		 * we are in aggressive mode, we are done. Otherwise, keep
		 * searching unless this is the second eligible ACK we
		 * found.
		 *
		 * Since we want to drop the ACK closest to the head of the
		 * queue, save the first eligible ACK we find, even if we need
		 * to loop again.
		 */
		if (!elig_ack) {
			elig_ack = skb_check;
			elig_ack_prev = skb_prev;
			elig_flags = (tcp_flag_word(tcph_check)
				      & (TCP_FLAG_ECE | TCP_FLAG_CWR));
		}

		if (num_found++ > 0)
			goto found;
	}

	/* We made it through the queue without finding two eligible ACKs. If
	 * we found a single eligible ACK we can drop it in aggressive mode if
	 * we can guarantee that this does not interfere with ECN flag
	 * information. We ensure this by dropping it only if the enqueued
	 * packet is consecutive with the eligible ACK, and their flags match.
	 */
	if (elig_ack && aggressive && elig_ack->next == skb &&
	    (elig_flags == (tcp_flag_word(tcph) &
			    (TCP_FLAG_ECE | TCP_FLAG_CWR))))
		goto found;

	return NULL;

found:
	if (elig_ack_prev)
		elig_ack_prev->next = elig_ack->next;
	else
		flow->head = elig_ack->next;

	skb_mark_not_on_list(elig_ack);

	return elig_ack;
}

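/* Exponentially weighted moving average:
 * avg <- avg + (sample - avg) / 2^shift (with integer truncation).
 * E.g. with shift == 8 each new sample contributes roughly 1/256 of its
 * value, so a step change in the input is absorbed with a time constant of
 * about 256 samples.
 */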
static u64 cake_ewma(u64 avg, u64 sample, u32 shift)
{
	avg -= avg >> shift;
	avg += sample >> shift;
	return avg;
}

static u32 cake_calc_overhead(struct cake_sched_data *q, u32 len, u32 off)
{
	if (q->rate_flags & CAKE_FLAG_OVERHEAD)
		len -= off;

	if (q->max_netlen < len)
		q->max_netlen = len;
	if (q->min_netlen > len)
		q->min_netlen = len;

	len += q->rate_overhead;

	if (len < q->rate_mpu)
		len = q->rate_mpu;

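	/* ATM AAL5 framing: each 53-byte cell carries 48 bytes of payload,
	 * so round the length up to whole cells. E.g. 1500 bytes need
	 * ceil(1500 / 48) = 32 cells, i.e. 32 * 53 = 1696 bytes on the wire.
	 */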
	if (q->atm_mode == CAKE_ATM_ATM) {
		len += 47;
		len /= 48;
		len *= 53;
	} else if (q->atm_mode == CAKE_ATM_PTM) {
		/* Add one byte per 64 bytes or part thereof.
		 * This is conservative and easier to calculate than the
		 * precise value.
		 */
		len += (len + 63) / 64;
	}

	if (q->max_adjlen < len)
		q->max_adjlen = len;
	if (q->min_adjlen > len)
		q->min_adjlen = len;

	return len;
}

static u32 cake_overhead(struct cake_sched_data *q, const struct sk_buff *skb)
{
	const struct skb_shared_info *shinfo = skb_shinfo(skb);
	unsigned int hdr_len, last_len = 0;
	u32 off = skb_network_offset(skb);
	u32 len = qdisc_pkt_len(skb);
	u16 segs = 1;

	q->avg_netoff = cake_ewma(q->avg_netoff, off << 16, 8);

	if (!shinfo->gso_size)
		return cake_calc_overhead(q, len, off);

	/* borrowed from qdisc_pkt_len_init() */
	hdr_len = skb_transport_header(skb) - skb_mac_header(skb);

	/* + transport layer */
	if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 |
						SKB_GSO_TCPV6))) {
		const struct tcphdr *th;
		struct tcphdr _tcphdr;

		th = skb_header_pointer(skb, skb_transport_offset(skb),
					sizeof(_tcphdr), &_tcphdr);
		if (likely(th))
			hdr_len += __tcp_hdrlen(th);
	} else {
		struct udphdr _udphdr;

		if (skb_header_pointer(skb, skb_transport_offset(skb),
				       sizeof(_udphdr), &_udphdr))
			hdr_len += sizeof(struct udphdr);
	}

	if (unlikely(shinfo->gso_type & SKB_GSO_DODGY))
		segs = DIV_ROUND_UP(skb->len - hdr_len,
				    shinfo->gso_size);
	else
		segs = shinfo->gso_segs;

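	/* Shape a GSO super-packet as if it were already segmented on the
	 * wire: segs - 1 full segments of gso_size plus headers, and one
	 * (possibly shorter) final segment carrying the remainder.
	 */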
	len = shinfo->gso_size + hdr_len;
	last_len = skb->len - shinfo->gso_size * (segs - 1);

	return (cake_calc_overhead(q, len, off) * (segs - 1) +
		cake_calc_overhead(q, last_len, off));
}

static void cake_heap_swap(struct cake_sched_data *q, u16 i, u16 j)
{
	struct cake_heap_entry ii = q->overflow_heap[i];
	struct cake_heap_entry jj = q->overflow_heap[j];

	q->overflow_heap[i] = jj;
	q->overflow_heap[j] = ii;

	q->tins[ii.t].overflow_idx[ii.b] = j;
	q->tins[jj.t].overflow_idx[jj.b] = i;
}

static u32 cake_heap_get_backlog(const struct cake_sched_data *q, u16 i)
{
	struct cake_heap_entry ii = q->overflow_heap[i];

	return q->tins[ii.t].backlogs[ii.b];
}

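/* Sift the entry at index i down the binary max-heap of queue backlogs:
 * the children of node m live at 2m + 1 and 2m + 2, and the larger child
 * is promoted until the parent's backlog is at least as large as both
 * children's.
 */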
static void cake_heapify(struct cake_sched_data *q, u16 i)
{
	static const u32 a = CAKE_MAX_TINS * CAKE_QUEUES;
	u32 mb = cake_heap_get_backlog(q, i);
	u32 m = i;

	while (m < a) {
		u32 l = m + m + 1;
		u32 r = l + 1;

		if (l < a) {
			u32 lb = cake_heap_get_backlog(q, l);

			if (lb > mb) {
				m  = l;
				mb = lb;
			}
		}

		if (r < a) {
			u32 rb = cake_heap_get_backlog(q, r);

			if (rb > mb) {
				m  = r;
				mb = rb;
			}
		}

		if (m != i) {
			cake_heap_swap(q, i, m);
			i = m;
		} else {
			break;
		}
	}
}

static void cake_heapify_up(struct cake_sched_data *q, u16 i)
{
	while (i > 0 && i < CAKE_MAX_TINS * CAKE_QUEUES) {
		u16 p = (i - 1) >> 1;
		u32 ib = cake_heap_get_backlog(q, i);
		u32 pb = cake_heap_get_backlog(q, p);

		if (ib > pb) {
			cake_heap_swap(q, i, p);
			i = p;
		} else {
			break;
		}
	}
}

static int cake_advance_shaper(struct cake_sched_data *q,
			       struct cake_tin_data *b,
			       struct sk_buff *skb,
			       ktime_t now, bool drop)
{
	u32 len = get_cobalt_cb(skb)->adjusted_len;

	/* charge packet bandwidth to this tin
	 * and to the global shaper.
	 */
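	/* The shaper is a virtual clock: each packet pushes the earliest
	 * permitted transmission time forward by its serialisation delay,
	 * e.g. 1538 bytes at 100 Mbit/s advance it by ~123 us. The failsafe
	 * clock is advanced (by 1.5x that delay) only for packets actually
	 * delivered, so drop charging cannot stall dequeueing indefinitely.
	 */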
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1478) 	if (q->rate_ns) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1479) 		u64 tin_dur = (len * b->tin_rate_ns) >> b->tin_rate_shft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1480) 		u64 global_dur = (len * q->rate_ns) >> q->rate_shft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1481) 		u64 failsafe_dur = global_dur + (global_dur >> 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1482) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1483) 		if (ktime_before(b->time_next_packet, now))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1484) 			b->time_next_packet = ktime_add_ns(b->time_next_packet,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1485) 							   tin_dur);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1486) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1487) 		else if (ktime_before(b->time_next_packet,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1488) 				      ktime_add_ns(now, tin_dur)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1489) 			b->time_next_packet = ktime_add_ns(now, tin_dur);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1490) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1491) 		q->time_next_packet = ktime_add_ns(q->time_next_packet,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1492) 						   global_dur);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1493) 		if (!drop)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1494) 			q->failsafe_next_packet = \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1495) 				ktime_add_ns(q->failsafe_next_packet,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1496) 					     failsafe_dur);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1497) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1498) 	return len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1499) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1500) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1501) static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1502) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1503) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1504) 	ktime_t now = ktime_get();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1505) 	u32 idx = 0, tin = 0, len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1506) 	struct cake_heap_entry qq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1507) 	struct cake_tin_data *b;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1508) 	struct cake_flow *flow;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1509) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1510) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1511) 	if (!q->overflow_timeout) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1512) 		int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1513) 		/* Build fresh max-heap */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1514) 		for (i = CAKE_MAX_TINS * CAKE_QUEUES / 2; i >= 0; i--)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1515) 			cake_heapify(q, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1516) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1517) 	q->overflow_timeout = 65535;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1518) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1519) 	/* select longest queue for pruning */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1520) 	qq  = q->overflow_heap[0];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1521) 	tin = qq.t;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1522) 	idx = qq.b;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1523) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1524) 	b = &q->tins[tin];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1525) 	flow = &b->flows[idx];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1526) 	skb = dequeue_head(flow);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1527) 	if (unlikely(!skb)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1528) 		/* heap has gone wrong, rebuild it next time */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1529) 		q->overflow_timeout = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1530) 		return idx + (tin << 16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1531) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1532) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1533) 	if (cobalt_queue_full(&flow->cvars, &b->cparams, now))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1534) 		b->unresponsive_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1535) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1536) 	len = qdisc_pkt_len(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1537) 	q->buffer_used      -= skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1538) 	b->backlogs[idx]    -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1539) 	b->tin_backlog      -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1540) 	sch->qstats.backlog -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1541) 	qdisc_tree_reduce_backlog(sch, 1, len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1542) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1543) 	flow->dropped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1544) 	b->tin_dropped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1545) 	sch->qstats.drops++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1546) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1547) 	if (q->rate_flags & CAKE_FLAG_INGRESS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1548) 		cake_advance_shaper(q, b, skb, now, true);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1549) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1550) 	__qdisc_drop(skb, to_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1551) 	sch->q.qlen--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1552) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1553) 	cake_heapify(q, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1554) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1555) 	return idx + (tin << 16);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1556) }
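
/* A note on the encoding above (illustrative, not relied on by callers in
 * this file): the pruned queue is packed into one u32, flow index in the
 * low 16 bits and tin in the high 16 bits, so a caller could decode it as:
 *
 *	u32 code = cake_drop(sch, to_free);
 *	u32 flow_idx = code & 0xffff;
 *	u32 tin_idx  = code >> 16;
 *
 * cake_enqueue() below simply discards the return value.
 */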
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1557) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1558) static u8 cake_handle_diffserv(struct sk_buff *skb, bool wash)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1559) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1560) 	const int offset = skb_network_offset(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1561) 	u16 *buf, buf_;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1562) 	u8 dscp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1563) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1564) 	switch (skb_protocol(skb, true)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1565) 	case htons(ETH_P_IP):
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1566) 		buf = skb_header_pointer(skb, offset, sizeof(buf_), &buf_);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1567) 		if (unlikely(!buf))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1568) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1569) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1570) 		/* ToS is in the second byte of iphdr */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1571) 		dscp = ipv4_get_dsfield((struct iphdr *)buf) >> 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1572) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1573) 		if (wash && dscp) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1574) 			const int wlen = offset + sizeof(struct iphdr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1575) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1576) 			if (!pskb_may_pull(skb, wlen) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1577) 			    skb_try_make_writable(skb, wlen))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1578) 				return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1579) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1580) 			ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1581) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1582) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1583) 		return dscp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1584) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1585) 	case htons(ETH_P_IPV6):
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1586) 		buf = skb_header_pointer(skb, offset, sizeof(buf_), &buf_);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1587) 		if (unlikely(!buf))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1588) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1589) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1590) 		/* Traffic class is in the first and second bytes of ipv6hdr */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1591) 		dscp = ipv6_get_dsfield((struct ipv6hdr *)buf) >> 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1592) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1593) 		if (wash && dscp) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1594) 			const int wlen = offset + sizeof(struct ipv6hdr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1595) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1596) 			if (!pskb_may_pull(skb, wlen) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1597) 			    skb_try_make_writable(skb, wlen))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1598) 				return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1599) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1600) 			ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1601) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1602) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1603) 		return dscp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1604) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1605) 	case htons(ETH_P_ARP):
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1606) 		return 0x38;  /* CS7 - Net Control */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1607) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1608) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1609) 		/* If there is no Diffserv field, treat as best-effort */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1610) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1611) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1612) }
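
/* Worked example for the parsing above (illustrative): an IPv4 packet
 * marked EF carries a DS field of 0xb8 - DSCP 46 in the top six bits,
 * ECN in the bottom two - so dscp = 0xb8 >> 2 = 46.  When washing,
 * ipv4_change_dsfield(hdr, INET_ECN_MASK, 0) zeroes the DSCP bits while
 * the INET_ECN_MASK argument preserves the two ECN bits.
 */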
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1613) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1614) static struct cake_tin_data *cake_select_tin(struct Qdisc *sch,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1615) 					     struct sk_buff *skb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1616) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1617) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1618) 	u32 tin, mark;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1619) 	bool wash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1620) 	u8 dscp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1621) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1622) 	/* Tin selection: default to diffserv-based selection, but allow
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1623) 	 * overriding via firewall marks or skb->priority.  Parse the DSCP
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1624) 	 * early if wash is enabled; otherwise defer it until actually needed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1625) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1626) 	mark = (skb->mark & q->fwmark_mask) >> q->fwmark_shft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1627) 	wash = !!(q->rate_flags & CAKE_FLAG_WASH);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1628) 	if (wash)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1629) 		dscp = cake_handle_diffserv(skb, wash);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1630) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1631) 	if (q->tin_mode == CAKE_DIFFSERV_BESTEFFORT)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1632) 		tin = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1633) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1634) 	else if (mark && mark <= q->tin_cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1635) 		tin = q->tin_order[mark - 1];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1636) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1637) 	else if (TC_H_MAJ(skb->priority) == sch->handle &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1638) 		 TC_H_MIN(skb->priority) > 0 &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1639) 		 TC_H_MIN(skb->priority) <= q->tin_cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1640) 		tin = q->tin_order[TC_H_MIN(skb->priority) - 1];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1641) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1642) 	else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1643) 		if (!wash)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1644) 			dscp = cake_handle_diffserv(skb, wash);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1645) 		tin = q->tin_index[dscp];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1646) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1647) 		if (unlikely(tin >= q->tin_cnt))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1648) 			tin = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1649) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1650) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1651) 	return &q->tins[tin];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1652) }
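
/* Illustrative example: with diffserv3 (three tins) and, say, a fwmark
 * mask of 0xff with no shift, a packet carrying skb->mark == 2 skips DSCP
 * parsing entirely and lands in the second tin of tin_order; a mark of 0,
 * or one above tin_cnt, falls through to the priority/DSCP paths instead.
 */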
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1653) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1654) static u32 cake_classify(struct Qdisc *sch, struct cake_tin_data **t,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1655) 			 struct sk_buff *skb, int flow_mode, int *qerr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1656) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1657) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1658) 	struct tcf_proto *filter;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1659) 	struct tcf_result res;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1660) 	u16 flow = 0, host = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1661) 	int result;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1662) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1663) 	filter = rcu_dereference_bh(q->filter_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1664) 	if (!filter)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1665) 		goto hash;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1666) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1667) 	*qerr = NET_XMIT_SUCCESS | __NET_XMIT_BYPASS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1668) 	result = tcf_classify(skb, filter, &res, false);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1669) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1670) 	if (result >= 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1671) #ifdef CONFIG_NET_CLS_ACT
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1672) 		switch (result) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1673) 		case TC_ACT_STOLEN:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1674) 		case TC_ACT_QUEUED:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1675) 		case TC_ACT_TRAP:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1676) 			*qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1677) 			fallthrough;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1678) 		case TC_ACT_SHOT:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1679) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1680) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1681) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1682) 		if (TC_H_MIN(res.classid) <= CAKE_QUEUES)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1683) 			flow = TC_H_MIN(res.classid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1684) 		if (TC_H_MAJ(res.classid) <= (CAKE_QUEUES << 16))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1685) 			host = TC_H_MAJ(res.classid) >> 16;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1686) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1687) hash:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1688) 	*t = cake_select_tin(sch, skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1689) 	return cake_hash(*t, skb, flow_mode, flow, host) + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1690) }
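
/* The +1 above makes the return value a 1-based flow index, leaving 0 free
 * to mean "packet stolen or shot by a tc filter"; cake_enqueue() checks
 * for 0 and then undoes the offset with idx--.
 */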
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1691) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1692) static void cake_reconfigure(struct Qdisc *sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1693) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1694) static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1695) 			struct sk_buff **to_free)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1696) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1697) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1698) 	int len = qdisc_pkt_len(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1699) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1700) 	struct sk_buff *ack = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1701) 	ktime_t now = ktime_get();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1702) 	struct cake_tin_data *b;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1703) 	struct cake_flow *flow;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1704) 	u32 idx;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1705) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1706) 	/* choose flow to insert into */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1707) 	idx = cake_classify(sch, &b, skb, q->flow_mode, &ret);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1708) 	if (idx == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1709) 		if (ret & __NET_XMIT_BYPASS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1710) 			qdisc_qstats_drop(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1711) 		__qdisc_drop(skb, to_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1712) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1713) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1714) 	idx--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1715) 	flow = &b->flows[idx];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1716) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1717) 	/* ensure shaper state isn't stale */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1718) 	if (!b->tin_backlog) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1719) 		if (ktime_before(b->time_next_packet, now))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1720) 			b->time_next_packet = now;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1721) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1722) 		if (!sch->q.qlen) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1723) 			if (ktime_before(q->time_next_packet, now)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1724) 				q->failsafe_next_packet = now;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1725) 				q->time_next_packet = now;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1726) 			} else if (ktime_after(q->time_next_packet, now) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1727) 				   ktime_after(q->failsafe_next_packet, now)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1728) 				u64 next = \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1729) 					min(ktime_to_ns(q->time_next_packet),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1730) 					    ktime_to_ns(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1731) 						   q->failsafe_next_packet));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1732) 				sch->qstats.overlimits++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1733) 				qdisc_watchdog_schedule_ns(&q->watchdog, next);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1734) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1735) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1736) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1737) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1738) 	if (unlikely(len > b->max_skblen))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1739) 		b->max_skblen = len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1740) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1741) 	if (skb_is_gso(skb) && q->rate_flags & CAKE_FLAG_SPLIT_GSO) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1742) 		struct sk_buff *segs, *nskb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1743) 		netdev_features_t features = netif_skb_features(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1744) 		unsigned int slen = 0, numsegs = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1745) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1746) 		segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1747) 		if (IS_ERR_OR_NULL(segs))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1748) 			return qdisc_drop(skb, sch, to_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1749) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1750) 		skb_list_walk_safe(segs, segs, nskb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1751) 			skb_mark_not_on_list(segs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1752) 			qdisc_skb_cb(segs)->pkt_len = segs->len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1753) 			cobalt_set_enqueue_time(segs, now);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1754) 			get_cobalt_cb(segs)->adjusted_len = cake_overhead(q,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1755) 									  segs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1756) 			flow_queue_add(flow, segs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1757) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1758) 			sch->q.qlen++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1759) 			numsegs++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1760) 			slen += segs->len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1761) 			q->buffer_used += segs->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1762) 			b->packets++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1763) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1764) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1765) 		/* stats */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1766) 		b->bytes	    += slen;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1767) 		b->backlogs[idx]    += slen;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1768) 		b->tin_backlog      += slen;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1769) 		sch->qstats.backlog += slen;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1770) 		q->avg_window_bytes += slen;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1771) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1772) 		qdisc_tree_reduce_backlog(sch, 1-numsegs, len-slen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1773) 		consume_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1774) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1775) 		/* not splitting */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1776) 		cobalt_set_enqueue_time(skb, now);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1777) 		get_cobalt_cb(skb)->adjusted_len = cake_overhead(q, skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1778) 		flow_queue_add(flow, skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1779) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1780) 		if (q->ack_filter)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1781) 			ack = cake_ack_filter(q, flow);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1782) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1783) 		if (ack) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1784) 			b->ack_drops++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1785) 			sch->qstats.drops++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1786) 			b->bytes += qdisc_pkt_len(ack);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1787) 			len -= qdisc_pkt_len(ack);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1788) 			q->buffer_used += skb->truesize - ack->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1789) 			if (q->rate_flags & CAKE_FLAG_INGRESS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1790) 				cake_advance_shaper(q, b, ack, now, true);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1791) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1792) 			qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(ack));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1793) 			consume_skb(ack);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1794) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1795) 			sch->q.qlen++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1796) 			q->buffer_used      += skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1797) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1798) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1799) 		/* stats */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1800) 		b->packets++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1801) 		b->bytes	    += len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1802) 		b->backlogs[idx]    += len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1803) 		b->tin_backlog      += len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1804) 		sch->qstats.backlog += len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1805) 		q->avg_window_bytes += len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1806) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1807) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1808) 	if (q->overflow_timeout)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1809) 		cake_heapify_up(q, b->overflow_idx[idx]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1810) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1811) 	/* incoming bandwidth capacity estimate */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1812) 	if (q->rate_flags & CAKE_FLAG_AUTORATE_INGRESS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1813) 		u64 packet_interval = \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1814) 			ktime_to_ns(ktime_sub(now, q->last_packet_time));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1815) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1816) 		if (packet_interval > NSEC_PER_SEC)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1817) 			packet_interval = NSEC_PER_SEC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1818) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1819) 		/* filter out short-term bursts, eg. wifi aggregation */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1820) 		q->avg_packet_interval = \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1821) 			cake_ewma(q->avg_packet_interval,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1822) 				  packet_interval,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1823) 				  (packet_interval > q->avg_packet_interval ?
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1824) 					  2 : 8));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1825) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1826) 		q->last_packet_time = now;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1827) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1828) 		if (packet_interval > q->avg_packet_interval) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1829) 			u64 window_interval = \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1830) 				ktime_to_ns(ktime_sub(now,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1831) 						      q->avg_window_begin));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1832) 			u64 b = q->avg_window_bytes * (u64)NSEC_PER_SEC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1833) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1834) 			b = div64_u64(b, window_interval);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1835) 			q->avg_peak_bandwidth =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1836) 				cake_ewma(q->avg_peak_bandwidth, b,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1837) 					  b > q->avg_peak_bandwidth ? 2 : 8);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1838) 			q->avg_window_bytes = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1839) 			q->avg_window_begin = now;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1840) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1841) 			if (ktime_after(now,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1842) 					ktime_add_ms(q->last_reconfig_time,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1843) 						     250))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1844) 				q->rate_bps = (q->avg_peak_bandwidth * 15) >> 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1845) 				cake_reconfigure(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1846) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1847) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1848) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1849) 		q->avg_window_bytes = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1850) 		q->last_packet_time = now;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1851) 	}
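
	/* Illustrative numbers for the estimator above: cake_ewma(), defined
	 * earlier in this file, computes avg - (avg >> shift) + (val >> shift),
	 * so shift 2 tracks rising samples quickly (1/4 weight) while shift 8
	 * lets the average decay slowly (1/256 weight).  On reconfigure the
	 * shaper rate becomes (estimate * 15) >> 4, i.e. 15/16 = 93.75% of the
	 * estimated peak, keeping the bottleneck queue here rather than in the
	 * device downstream.
	 */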
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1852) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1853) 	/* flowchain */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1854) 	if (!flow->set || flow->set == CAKE_SET_DECAYING) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1855) 		struct cake_host *srchost = &b->hosts[flow->srchost];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1856) 		struct cake_host *dsthost = &b->hosts[flow->dsthost];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1857) 		u16 host_load = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1858) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1859) 		if (!flow->set) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1860) 			list_add_tail(&flow->flowchain, &b->new_flows);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1861) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1862) 			b->decaying_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1863) 			list_move_tail(&flow->flowchain, &b->new_flows);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1864) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1865) 		flow->set = CAKE_SET_SPARSE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1866) 		b->sparse_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1867) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1868) 		if (cake_dsrc(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1869) 			host_load = max(host_load, srchost->srchost_bulk_flow_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1870) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1871) 		if (cake_ddst(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1872) 			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1873) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1874) 		flow->deficit = (b->flow_quantum *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1875) 				 quantum_div[host_load]) >> 16;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1876) 	} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1877) 		struct cake_host *srchost = &b->hosts[flow->srchost];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1878) 		struct cake_host *dsthost = &b->hosts[flow->dsthost];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1879) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1880) 		/* this flow was empty, accounted as a sparse flow, but actually
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1881) 		 * in the bulk rotation.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1882) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1883) 		flow->set = CAKE_SET_BULK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1884) 		b->sparse_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1885) 		b->bulk_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1886) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1887) 		if (cake_dsrc(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1888) 			srchost->srchost_bulk_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1889) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1890) 		if (cake_ddst(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1891) 			dsthost->dsthost_bulk_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1892) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1893) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1894) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1895) 	if (q->buffer_used > q->buffer_max_used)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1896) 		q->buffer_max_used = q->buffer_used;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1897) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1898) 	if (q->buffer_used > q->buffer_limit) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1899) 		u32 dropped = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1900) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1901) 		while (q->buffer_used > q->buffer_limit) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1902) 			dropped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1903) 			cake_drop(sch, to_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1904) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1905) 		b->drop_overlimit += dropped;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1906) 	}
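
	/* Note that cake_drop() prunes from the longest queue anywhere in
	 * the qdisc, so the packet just enqueued is not necessarily the
	 * victim; drop_overlimit is nonetheless charged to this packet's tin.
	 */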
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1907) 	return NET_XMIT_SUCCESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1908) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1909) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1910) static struct sk_buff *cake_dequeue_one(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1911) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1912) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1913) 	struct cake_tin_data *b = &q->tins[q->cur_tin];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1914) 	struct cake_flow *flow = &b->flows[q->cur_flow];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1915) 	struct sk_buff *skb = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1916) 	u32 len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1917) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1918) 	if (flow->head) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1919) 		skb = dequeue_head(flow);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1920) 		len = qdisc_pkt_len(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1921) 		b->backlogs[q->cur_flow] -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1922) 		b->tin_backlog		 -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1923) 		sch->qstats.backlog      -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1924) 		q->buffer_used		 -= skb->truesize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1925) 		sch->q.qlen--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1926) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1927) 		if (q->overflow_timeout)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1928) 			cake_heapify(q, b->overflow_idx[q->cur_flow]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1929) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1930) 	return skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1931) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1932) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1933) /* Discard leftover packets from a tin no longer in use. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1934) static void cake_clear_tin(struct Qdisc *sch, u16 tin)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1935) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1936) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1937) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1938) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1939) 	q->cur_tin = tin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1940) 	for (q->cur_flow = 0; q->cur_flow < CAKE_QUEUES; q->cur_flow++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1941) 		while (!!(skb = cake_dequeue_one(sch)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1942) 			kfree_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1943) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1944) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1945) static struct sk_buff *cake_dequeue(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1946) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1947) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1948) 	struct cake_tin_data *b = &q->tins[q->cur_tin];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1949) 	struct cake_host *srchost, *dsthost;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1950) 	ktime_t now = ktime_get();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1951) 	struct cake_flow *flow;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1952) 	struct list_head *head;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1953) 	bool first_flow = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1954) 	struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1955) 	u16 host_load;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1956) 	u64 delay;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1957) 	u32 len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1958) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1959) begin:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1960) 	if (!sch->q.qlen)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1961) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1962) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1963) 	/* global hard shaper */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1964) 	if (ktime_after(q->time_next_packet, now) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1965) 	    ktime_after(q->failsafe_next_packet, now)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1966) 		u64 next = min(ktime_to_ns(q->time_next_packet),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1967) 			       ktime_to_ns(q->failsafe_next_packet));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1968) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1969) 		sch->qstats.overlimits++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1970) 		qdisc_watchdog_schedule_ns(&q->watchdog, next);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1971) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1972) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1973) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1974) 	/* Choose a class to work on. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1975) 	if (!q->rate_ns) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1976) 		/* In unlimited mode, can't rely on shaper timings, just balance
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1977) 		 * with DRR
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1978) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1979) 		bool wrapped = false, empty = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1980) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1981) 		while (b->tin_deficit < 0 ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1982) 		       !(b->sparse_flow_count + b->bulk_flow_count)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1983) 			if (b->tin_deficit <= 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1984) 				b->tin_deficit += b->tin_quantum;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1985) 			if (b->sparse_flow_count + b->bulk_flow_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1986) 				empty = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1987) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1988) 			q->cur_tin++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1989) 			b++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1990) 			if (q->cur_tin >= q->tin_cnt) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1991) 				q->cur_tin = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1992) 				b = q->tins;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1993) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1994) 				if (wrapped) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1995) 					/* It's possible for q->qlen to be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1996) 					 * nonzero when we actually have no
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1997) 					 * packets anywhere.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1998) 					 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1999) 					if (empty)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2000) 						return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2001) 				} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2002) 					wrapped = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2003) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2004) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2005) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2006) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2007) 		/* In shaped mode, choose:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2008) 		 * - Highest-priority tin with queue and meeting schedule, or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2009) 		 * - The earliest-scheduled tin with queue.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2010) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2011) 		ktime_t best_time = KTIME_MAX;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2012) 		int tin, best_tin = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2013) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2014) 		for (tin = 0; tin < q->tin_cnt; tin++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2015) 			b = q->tins + tin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2016) 			if ((b->sparse_flow_count + b->bulk_flow_count) > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2017) 				ktime_t time_to_pkt = \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2018) 					ktime_sub(b->time_next_packet, now);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2019) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2020) 				if (ktime_to_ns(time_to_pkt) <= 0 ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2021) 				    ktime_compare(time_to_pkt,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2022) 						  best_time) <= 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2023) 					best_time = time_to_pkt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2024) 					best_tin = tin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2025) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2026) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2027) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2028) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2029) 		q->cur_tin = best_tin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2030) 		b = q->tins + best_tin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2031) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2032) 		/* No point in going further if no packets to deliver. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2033) 		if (unlikely(!(b->sparse_flow_count + b->bulk_flow_count)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2034) 			return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2035) 	}
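
	/* Tins are scanned in ascending-priority order above, and a tin that
	 * is behind schedule (time_to_pkt <= 0) always updates best_tin, so
	 * when several tins are behind schedule the highest-priority one is
	 * the one that sticks.
	 */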
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2036) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2037) retry:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2038) 	/* service this class */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2039) 	head = &b->decaying_flows;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2040) 	if (!first_flow || list_empty(head)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2041) 		head = &b->new_flows;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2042) 		if (list_empty(head)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2043) 			head = &b->old_flows;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2044) 			if (unlikely(list_empty(head))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2045) 				head = &b->decaying_flows;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2046) 				if (unlikely(list_empty(head)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2047) 					goto begin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2048) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2049) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2050) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2051) 	flow = list_first_entry(head, struct cake_flow, flowchain);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2052) 	q->cur_flow = flow - b->flows;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2053) 	first_flow = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2054) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2055) 	/* triple isolation (modified DRR++) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2056) 	srchost = &b->hosts[flow->srchost];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2057) 	dsthost = &b->hosts[flow->dsthost];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2058) 	host_load = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2059) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2060) 	/* flow isolation (DRR++) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2061) 	if (flow->deficit <= 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2062) 		/* Keep all flows with deficits out of the sparse and decaying
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2063) 		 * rotations.  No non-empty flow can go into the decaying
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2064) 		 * rotation, so they can't get deficits
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2065) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2066) 		if (flow->set == CAKE_SET_SPARSE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2067) 			if (flow->head) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2068) 				b->sparse_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2069) 				b->bulk_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2070) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2071) 				if (cake_dsrc(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2072) 					srchost->srchost_bulk_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2073) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2074) 				if (cake_ddst(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2075) 					dsthost->dsthost_bulk_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2076) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2077) 				flow->set = CAKE_SET_BULK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2078) 			} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2079) 				/* we've moved it to the bulk rotation for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2080) 				 * correct deficit accounting but we still want
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2081) 				 * to count it as a sparse flow, not a bulk one.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2082) 				 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2083) 				flow->set = CAKE_SET_SPARSE_WAIT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2084) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2085) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2086) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2087) 		if (cake_dsrc(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2088) 			host_load = max(host_load, srchost->srchost_bulk_flow_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2089) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2090) 		if (cake_ddst(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2091) 			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2092) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2093) 		WARN_ON(host_load > CAKE_QUEUES);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2094) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2095) 		/* The shifted prandom_u32() is a way to apply dithering to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2096) 		 * avoid accumulating roundoff errors
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2097) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2098) 		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2099) 				  (prandom_u32() >> 16)) >> 16;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2100) 		list_move_tail(&flow->flowchain, &b->old_flows);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2101) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2102) 		goto retry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2103) 	}
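
	/* Illustrative arithmetic for the replenishment above, assuming
	 * quantum_div[] holds 65535 / i as set up at init time: with
	 * flow_quantum = 1514 and host_load = 3, the flow gains about
	 * (1514 * 21845) >> 16 ~= 504 bytes of deficit - one third of the
	 * quantum - which is how a host's allowance is split evenly across
	 * its bulk flows.
	 */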
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2104) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2105) 	/* Retrieve a packet via the AQM */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2106) 	while (1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2107) 		skb = cake_dequeue_one(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2108) 		if (!skb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2109) 			/* this queue was actually empty */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2110) 			if (cobalt_queue_empty(&flow->cvars, &b->cparams, now))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2111) 				b->unresponsive_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2112) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2113) 			if (flow->cvars.p_drop || flow->cvars.count ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2114) 			    ktime_before(now, flow->cvars.drop_next)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2115) 				/* keep in the flowchain until the state has
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2116) 				 * decayed to rest
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2117) 				 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2118) 				list_move_tail(&flow->flowchain,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2119) 					       &b->decaying_flows);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2120) 				if (flow->set == CAKE_SET_BULK) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2121) 					b->bulk_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2122) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2123) 					if (cake_dsrc(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2124) 						srchost->srchost_bulk_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2125) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2126) 					if (cake_ddst(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2127) 						dsthost->dsthost_bulk_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2128) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2129) 					b->decaying_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2130) 				} else if (flow->set == CAKE_SET_SPARSE ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2131) 					   flow->set == CAKE_SET_SPARSE_WAIT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2132) 					b->sparse_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2133) 					b->decaying_flow_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2134) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2135) 				flow->set = CAKE_SET_DECAYING;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2136) 			} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2137) 				/* remove empty queue from the flowchain */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2138) 				list_del_init(&flow->flowchain);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2139) 				if (flow->set == CAKE_SET_SPARSE ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2140) 				    flow->set == CAKE_SET_SPARSE_WAIT)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2141) 					b->sparse_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2142) 				else if (flow->set == CAKE_SET_BULK) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2143) 					b->bulk_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2144) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2145) 					if (cake_dsrc(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2146) 						srchost->srchost_bulk_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2147) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2148) 					if (cake_ddst(q->flow_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2149) 						dsthost->dsthost_bulk_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2150) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2151) 				} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2152) 					b->decaying_flow_count--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2153) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2154) 				flow->set = CAKE_SET_NONE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2155) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2156) 			goto begin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2157) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2158) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2159) 		/* Last packet in queue may be marked, shouldn't be dropped */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2160) 		if (!cobalt_should_drop(&flow->cvars, &b->cparams, now, skb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2161) 					(b->bulk_flow_count *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2162) 					 !!(q->rate_flags &
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2163) 					    CAKE_FLAG_INGRESS))) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2164) 		    !flow->head)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2165) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2166) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2167) 		/* drop this packet, get another one */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2168) 		if (q->rate_flags & CAKE_FLAG_INGRESS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2169) 			len = cake_advance_shaper(q, b, skb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2170) 						  now, true);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2171) 			flow->deficit -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2172) 			b->tin_deficit -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2173) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2174) 		flow->dropped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2175) 		b->tin_dropped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2176) 		qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(skb));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2177) 		qdisc_qstats_drop(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2178) 		kfree_skb(skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2179) 		if (q->rate_flags & CAKE_FLAG_INGRESS)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2180) 			goto retry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2181) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2182) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2183) 	b->tin_ecn_mark += !!flow->cvars.ecn_marked;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2184) 	qdisc_bstats_update(sch, skb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2185) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2186) 	/* collect delay stats */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2187) 	delay = ktime_to_ns(ktime_sub(now, cobalt_get_enqueue_time(skb)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2188) 	b->avge_delay = cake_ewma(b->avge_delay, delay, 8);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2189) 	b->peak_delay = cake_ewma(b->peak_delay, delay,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2190) 				  delay > b->peak_delay ? 2 : 8);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2191) 	b->base_delay = cake_ewma(b->base_delay, delay,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2192) 				  delay < b->base_delay ? 2 : 8);
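
	/* The asymmetric shifts make peak_delay rise fast (1/4 weight) and
	 * decay slowly (1/256), while base_delay hugs the floor; all three
	 * averages are reporting statistics only and do not drive the AQM.
	 */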
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2193) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2194) 	len = cake_advance_shaper(q, b, skb, now, false);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2195) 	flow->deficit -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2196) 	b->tin_deficit -= len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2197) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2198) 	if (ktime_after(q->time_next_packet, now) && sch->q.qlen) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2199) 		u64 next = min(ktime_to_ns(q->time_next_packet),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2200) 			       ktime_to_ns(q->failsafe_next_packet));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2201) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2202) 		qdisc_watchdog_schedule_ns(&q->watchdog, next);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2203) 	} else if (!sch->q.qlen) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2204) 		int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2205) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2206) 		for (i = 0; i < q->tin_cnt; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2207) 			if (q->tins[i].decaying_flow_count) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2208) 				ktime_t next = \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2209) 					ktime_add_ns(now,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2210) 						     q->tins[i].cparams.target);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2211) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2212) 				qdisc_watchdog_schedule_ns(&q->watchdog,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2213) 							   ktime_to_ns(next));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2214) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2215) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2216) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2217) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2218) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2219) 	if (q->overflow_timeout)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2220) 		q->overflow_timeout--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2221) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2222) 	return skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2223) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2224) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2225) static void cake_reset(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2226) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2227) 	u32 c;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2228) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2229) 	for (c = 0; c < CAKE_MAX_TINS; c++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2230) 		cake_clear_tin(sch, c);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2231) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2232) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2233) static const struct nla_policy cake_policy[TCA_CAKE_MAX + 1] = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2234) 	[TCA_CAKE_BASE_RATE64]   = { .type = NLA_U64 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2235) 	[TCA_CAKE_DIFFSERV_MODE] = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2236) 	[TCA_CAKE_ATM]		 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2237) 	[TCA_CAKE_FLOW_MODE]     = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2238) 	[TCA_CAKE_OVERHEAD]      = { .type = NLA_S32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2239) 	[TCA_CAKE_RTT]		 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2240) 	[TCA_CAKE_TARGET]	 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2241) 	[TCA_CAKE_AUTORATE]      = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2242) 	[TCA_CAKE_MEMORY]	 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2243) 	[TCA_CAKE_NAT]		 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2244) 	[TCA_CAKE_RAW]		 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2245) 	[TCA_CAKE_WASH]		 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2246) 	[TCA_CAKE_MPU]		 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2247) 	[TCA_CAKE_INGRESS]	 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2248) 	[TCA_CAKE_ACK_FILTER]	 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2249) 	[TCA_CAKE_SPLIT_GSO]	 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2250) 	[TCA_CAKE_FWMARK]	 = { .type = NLA_U32 },
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2251) };
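
/* For reference (illustrative front-end mapping): a command such as
 *
 *	tc qdisc add dev eth0 root cake bandwidth 100Mbit diffserv3 nat wash
 *
 * arrives here as TCA_CAKE_BASE_RATE64, TCA_CAKE_DIFFSERV_MODE,
 * TCA_CAKE_NAT and TCA_CAKE_WASH attributes respectively.
 */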
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2252) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2253) static void cake_set_rate(struct cake_tin_data *b, u64 rate, u32 mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2254) 			  u64 target_ns, u64 rtt_est_ns)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2255) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2256) 	/* Convert byte-rate into time-per-byte, clamped to MIN_RATE so the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2257) 	 * shaper always unwedges in reasonable time, however low the setting.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2258) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2259) 	static const u64 MIN_RATE = 64;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2260) 	u32 byte_target = mtu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2261) 	u64 byte_target_ns;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2262) 	u8  rate_shft = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2263) 	u64 rate_ns = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2264) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2265) 	b->flow_quantum = 1514;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2266) 	if (rate) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2267) 		b->flow_quantum = max(min(rate >> 12, 1514ULL), 300ULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2268) 		rate_shft = 34;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2269) 		rate_ns = ((u64)NSEC_PER_SEC) << rate_shft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2270) 		rate_ns = div64_u64(rate_ns, max(MIN_RATE, rate));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2271) 		while (!!(rate_ns >> 34)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2272) 			rate_ns >>= 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2273) 			rate_shft--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2274) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2275) 	} /* else unlimited, ie. zero delay */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2276) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2277) 	b->tin_rate_bps  = rate;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2278) 	b->tin_rate_ns   = rate_ns;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2279) 	b->tin_rate_shft = rate_shft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2280) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2281) 	byte_target_ns = (byte_target * rate_ns) >> rate_shft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2282) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2283) 	b->cparams.target = max((byte_target_ns * 3) / 2, target_ns);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2284) 	b->cparams.interval = max(rtt_est_ns +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2285) 				     b->cparams.target - target_ns,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2286) 				     b->cparams.target * 2);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2287) 	b->cparams.mtu_time = byte_target_ns;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2288) 	b->cparams.p_inc = 1 << 24; /* 1/256 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2289) 	b->cparams.p_dec = 1 << 20; /* 1/4096 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2290) }
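
/* Worked example for the conversion above (illustrative figures): at
 * rate = 12,500,000 bytes/s (100 Mbit/s) the time per byte is 80 ns,
 * which the normalising loop stores as rate_ns = 80 << 27 with
 * rate_shft = 27.  For mtu = 1514 that gives byte_target_ns ~= 121 us,
 * so cparams.target stays at the configured default (5 ms at the stock
 * 100 ms interval); the MTU term only dominates at very low rates.
 */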
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2291) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2292) static int cake_config_besteffort(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2293) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2294) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2295) 	struct cake_tin_data *b = &q->tins[0];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2296) 	u32 mtu = psched_mtu(qdisc_dev(sch));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2297) 	u64 rate = q->rate_bps;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2298) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2299) 	q->tin_cnt = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2300) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2301) 	q->tin_index = besteffort;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2302) 	q->tin_order = normal_order;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2303) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2304) 	cake_set_rate(b, rate, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2305) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2306) 	b->tin_quantum = 65535;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2307) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2308) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2309) }
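
/* With a single tin there is no inter-tin competition, so the quantum is
 * simply pinned at its maximum and all isolation happens at flow level.
 */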
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2310) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2311) static int cake_config_precedence(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2312) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2313) 	/* convert high-level (user visible) parameters into internal format */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2314) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2315) 	u32 mtu = psched_mtu(qdisc_dev(sch));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2316) 	u64 rate = q->rate_bps;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2317) 	u32 quantum = 256;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2318) 	u32 i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2319) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2320) 	q->tin_cnt = 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2321) 	q->tin_index = precedence;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2322) 	q->tin_order = normal_order;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2323) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2324) 	for (i = 0; i < q->tin_cnt; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2325) 		struct cake_tin_data *b = &q->tins[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2326) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2327) 		cake_set_rate(b, rate, mtu, us_to_ns(q->target),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2328) 			      us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2329) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2330) 		b->tin_quantum = max_t(u16, 1U, quantum);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2331) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2332) 		/* calculate next class's parameters */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2333) 		rate  *= 7;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2334) 		rate >>= 3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2335) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2336) 		quantum  *= 7;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2337) 		quantum >>= 3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2338) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2339) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2340) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2341) }
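
/* Each successive precedence tin receives 7/8 of the previous tin's
 * rate and quantum.  As a rough worked example (values approximate,
 * ignoring integer truncation): with a 100 Mbit/s base rate the eight
 * tin thresholds come out near 100, 87.5, 76.6, 67.0, 58.6, 51.3,
 * 44.9 and 39.3 Mbit/s, since (7/8)^7 ~= 0.39.
 */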
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2342) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2343) /*	List of known Diffserv codepoints:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2344)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2345)  *	Least Effort (CS1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2346)  *	Best Effort (CS0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2347)  *	Max Reliability & LLT "Lo" (TOS1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2348)  *	Max Throughput (TOS2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2349)  *	Min Delay (TOS4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2350)  *	LLT "La" (TOS5)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2351)  *	Assured Forwarding 1 (AF1x) - x3
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2352)  *	Assured Forwarding 2 (AF2x) - x3
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2353)  *	Assured Forwarding 3 (AF3x) - x3
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2354)  *	Assured Forwarding 4 (AF4x) - x3
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2355)  *	Precedence Class 2 (CS2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2356)  *	Precedence Class 3 (CS3)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2357)  *	Precedence Class 4 (CS4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2358)  *	Precedence Class 5 (CS5)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2359)  *	Precedence Class 6 (CS6)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2360)  *	Precedence Class 7 (CS7)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2361)  *	Voice Admit (VA)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2362)  *	Expedited Forwarding (EF)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2363)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2364)  *	Total 25 codepoints.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2365)  */
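
/* As a decoding aid: the DSCP is the top six bits of the IPv4 TOS /
 * IPv6 traffic-class byte, so e.g. EF = 46 = 0b101110 appears on the
 * wire as TOS 0xb8, and CS1 = 8 appears as TOS 0x20.
 */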
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2366) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2367) /*	List of traffic classes in RFC 4594:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2368)  *		(roughly descending order of contended priority)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2369)  *		(roughly ascending order of uncontended throughput)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2370)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2371)  *	Network Control (CS6,CS7)      - routing traffic
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2372)  *	Telephony (EF,VA)              - aka. VoIP streams
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2373)  *	Signalling (CS5)               - VoIP setup
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2374)  *	Multimedia Conferencing (AF4x) - aka. video calls
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2375)  *	Realtime Interactive (CS4)     - eg. games
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2376)  *	Multimedia Streaming (AF3x)    - eg. YouTube, NetFlix, Twitch
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2377)  *	Broadcast Video (CS3)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2378)  *	Low Latency Data (AF2x,TOS4)      - eg. database
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2379)  *	Ops, Admin, Management (CS2,TOS1) - eg. ssh
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2380)  *	Standard Service (CS0 & unrecognised codepoints)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2381)  *	High Throughput Data (AF1x,TOS2)  - eg. web traffic
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2382)  *	Low Priority Data (CS1)           - eg. BitTorrent
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2383)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2384)  *	Total 12 traffic classes.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2385)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2386) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2387) static int cake_config_diffserv8(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2388) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2389) /*	Pruned list of traffic classes for typical applications:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2390)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2391)  *		Network Control          (CS6, CS7)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2392)  *		Minimum Latency          (EF, VA, CS5, CS4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2393)  *		Interactive Shell        (CS2, TOS1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2394)  *		Low Latency Transactions (AF2x, TOS4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2395)  *		Video Streaming          (AF4x, AF3x, CS3)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2396)  *		Bog Standard             (CS0 etc.)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2397)  *		High Throughput          (AF1x, TOS2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2398)  *		Background Traffic       (CS1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2399)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2400)  *		Total 8 traffic classes.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2401)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2402) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2403) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2404) 	u32 mtu = psched_mtu(qdisc_dev(sch));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2405) 	u64 rate = q->rate_bps;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2406) 	u32 quantum = 256;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2407) 	u32 i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2408) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2409) 	q->tin_cnt = 8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2410) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2411) 	/* codepoint to class mapping */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2412) 	q->tin_index = diffserv8;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2413) 	q->tin_order = normal_order;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2414) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2415) 	/* class characteristics */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2416) 	for (i = 0; i < q->tin_cnt; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2417) 		struct cake_tin_data *b = &q->tins[i];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2418) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2419) 		cake_set_rate(b, rate, mtu, us_to_ns(q->target),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2420) 			      us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2421) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2422) 		b->tin_quantum = max_t(u16, 1U, quantum);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2423) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2424) 		/* calculate next class's parameters */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2425) 		rate  *= 7;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2426) 		rate >>= 3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2427) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2428) 		quantum  *= 7;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2429) 		quantum >>= 3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2430) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2431) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2432) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2433) }
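
/* Illustrative usage, assuming the standard tc-cake frontend:
 *
 *	tc qdisc replace dev eth0 root cake bandwidth 50Mbit diffserv8
 *
 * This mode reuses the 7/8 rate/quantum ladder of precedence mode;
 * only the DSCP-to-tin lookup table differs.
 */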
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2434) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2435) static int cake_config_diffserv4(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2436) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2437) /*  Further pruned list of traffic classes for four-class system:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2438)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2439)  *	    Latency Sensitive  (CS7, CS6, EF, VA, CS5, CS4)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2440)  *	    Streaming Media    (AF4x, AF3x, CS3, AF2x, TOS4, CS2, TOS1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2441)  *	    Best Effort        (CS0, AF1x, TOS2, and those not specified)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2442)  *	    Background Traffic (CS1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2443)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2444)  *		Total 4 traffic classes.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2445)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2446) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2447) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2448) 	u32 mtu = psched_mtu(qdisc_dev(sch));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2449) 	u64 rate = q->rate_bps;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2450) 	u32 quantum = 1024;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2451) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2452) 	q->tin_cnt = 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2453) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2454) 	/* codepoint to class mapping */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2455) 	q->tin_index = diffserv4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2456) 	q->tin_order = bulk_order;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2457) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2458) 	/* class characteristics */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2459) 	cake_set_rate(&q->tins[0], rate, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2460) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2461) 	cake_set_rate(&q->tins[1], rate >> 4, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2462) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2463) 	cake_set_rate(&q->tins[2], rate >> 1, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2464) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2465) 	cake_set_rate(&q->tins[3], rate >> 2, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2466) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2467) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2468) 	/* bandwidth-sharing weights */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2469) 	q->tins[0].tin_quantum = quantum;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2470) 	q->tins[1].tin_quantum = quantum >> 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2471) 	q->tins[2].tin_quantum = quantum >> 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2472) 	q->tins[3].tin_quantum = quantum >> 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2473) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2474) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2475) }
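
/* Worked example of the thresholds above (approximate): with a
 * 100 Mbit/s base rate, Best Effort (tin 0) keeps priority up to the
 * full 100 Mbit/s, Bulk (tin 1) up to 6.25 Mbit/s, Video (tin 2) up
 * to 50 Mbit/s and Voice (tin 3) up to 25 Mbit/s.  The quanta
 * 1024/64/512/256 apply the same 16:1:8:4 weighting when tins
 * contend for bandwidth.
 */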
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2476) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2477) static int cake_config_diffserv3(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2478) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2479) /*  Simplified Diffserv structure with 3 tins.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2480)  *		Low Priority		(CS1)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2481)  *		Best Effort
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2482)  *		Latency Sensitive	(TOS4, VA, EF, CS6, CS7)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2483)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2484) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2485) 	u32 mtu = psched_mtu(qdisc_dev(sch));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2486) 	u64 rate = q->rate_bps;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2487) 	u32 quantum = 1024;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2488) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2489) 	q->tin_cnt = 3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2490) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2491) 	/* codepoint to class mapping */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2492) 	q->tin_index = diffserv3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2493) 	q->tin_order = bulk_order;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2494) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2495) 	/* class characteristics */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2496) 	cake_set_rate(&q->tins[0], rate, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2497) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2498) 	cake_set_rate(&q->tins[1], rate >> 4, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2499) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2500) 	cake_set_rate(&q->tins[2], rate >> 2, mtu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2501) 		      us_to_ns(q->target), us_to_ns(q->interval));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2502) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2503) 	/* bandwidth-sharing weights */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2504) 	q->tins[0].tin_quantum = quantum;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2505) 	q->tins[1].tin_quantum = quantum >> 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2506) 	q->tins[2].tin_quantum = quantum >> 2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2507) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2508) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2509) }
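
/* With the same 100 Mbit/s example: Bulk keeps priority up to
 * 6.25 Mbit/s and Voice up to 25 Mbit/s, while Best Effort may use
 * the whole rate; bulk_order presents the tins as Bulk, Best Effort,
 * Voice.
 */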
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2510) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2511) static void cake_reconfigure(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2512) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2513) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2514) 	int c, ft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2515) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2516) 	switch (q->tin_mode) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2517) 	case CAKE_DIFFSERV_BESTEFFORT:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2518) 		ft = cake_config_besteffort(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2519) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2520) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2521) 	case CAKE_DIFFSERV_PRECEDENCE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2522) 		ft = cake_config_precedence(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2523) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2524) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2525) 	case CAKE_DIFFSERV_DIFFSERV8:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2526) 		ft = cake_config_diffserv8(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2527) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2528) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2529) 	case CAKE_DIFFSERV_DIFFSERV4:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2530) 		ft = cake_config_diffserv4(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2531) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2532) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2533) 	case CAKE_DIFFSERV_DIFFSERV3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2534) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2535) 		ft = cake_config_diffserv3(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2536) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2537) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2538) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2539) 	for (c = q->tin_cnt; c < CAKE_MAX_TINS; c++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2540) 		cake_clear_tin(sch, c);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2541) 		q->tins[c].cparams.mtu_time = q->tins[ft].cparams.mtu_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2542) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2543) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2544) 	q->rate_ns   = q->tins[ft].tin_rate_ns;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2545) 	q->rate_shft = q->tins[ft].tin_rate_shft;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2546) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2547) 	if (q->buffer_config_limit) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2548) 		q->buffer_limit = q->buffer_config_limit;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2549) 	} else if (q->rate_bps) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2550) 		u64 t = q->rate_bps * q->interval;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2551) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2552) 		do_div(t, USEC_PER_SEC / 4);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2553) 		q->buffer_limit = max_t(u32, t, 4U << 20);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2554) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2555) 		q->buffer_limit = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2556) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2557) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2558) 	sch->flags &= ~TCQ_F_CAN_BYPASS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2559) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2560) 	q->buffer_limit = min(q->buffer_limit,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2561) 			      max(sch->limit * psched_mtu(qdisc_dev(sch)),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2562) 				  q->buffer_config_limit));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2563) }
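
/* Worked example of the auto-sizing above (assuming rate_bps is in
 * bytes/sec, as delivered by the tc frontend): at 12.5 MB/s
 * (100 Mbit/s) with the default 100 ms interval,
 *
 *	t = 12500000 * 100000 / 250000 = 5000000 bytes
 *
 * i.e. four times the bandwidth-delay product, with a 4 MiB floor
 * applied by the max_t() and, via the final min(), a cap of
 * sch->limit MTU-sized packets (or the explicitly configured limit,
 * if larger).
 */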
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2564) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2565) static int cake_change(struct Qdisc *sch, struct nlattr *opt,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2566) 		       struct netlink_ext_ack *extack)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2567) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2568) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2569) 	struct nlattr *tb[TCA_CAKE_MAX + 1];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2570) 	int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2571) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2572) 	if (!opt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2573) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2574) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2575) 	err = nla_parse_nested_deprecated(tb, TCA_CAKE_MAX, opt, cake_policy,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2576) 					  extack);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2577) 	if (err < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2578) 		return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2579) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2580) 	if (tb[TCA_CAKE_NAT]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2581) #if IS_ENABLED(CONFIG_NF_CONNTRACK)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2582) 		q->flow_mode &= ~CAKE_FLOW_NAT_FLAG;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2583) 		q->flow_mode |= CAKE_FLOW_NAT_FLAG *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2584) 			!!nla_get_u32(tb[TCA_CAKE_NAT]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2585) #else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2586) 		NL_SET_ERR_MSG_ATTR(extack, tb[TCA_CAKE_NAT],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2587) 				    "No conntrack support in kernel");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2588) 		return -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2589) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2590) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2591) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2592) 	if (tb[TCA_CAKE_BASE_RATE64])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2593) 		q->rate_bps = nla_get_u64(tb[TCA_CAKE_BASE_RATE64]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2594) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2595) 	if (tb[TCA_CAKE_DIFFSERV_MODE])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2596) 		q->tin_mode = nla_get_u32(tb[TCA_CAKE_DIFFSERV_MODE]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2597) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2598) 	if (tb[TCA_CAKE_WASH]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2599) 		if (!!nla_get_u32(tb[TCA_CAKE_WASH]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2600) 			q->rate_flags |= CAKE_FLAG_WASH;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2601) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2602) 			q->rate_flags &= ~CAKE_FLAG_WASH;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2603) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2604) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2605) 	if (tb[TCA_CAKE_FLOW_MODE])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2606) 		q->flow_mode = ((q->flow_mode & CAKE_FLOW_NAT_FLAG) |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2607) 				(nla_get_u32(tb[TCA_CAKE_FLOW_MODE]) &
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2608) 					CAKE_FLOW_MASK));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2609) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2610) 	if (tb[TCA_CAKE_ATM])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2611) 		q->atm_mode = nla_get_u32(tb[TCA_CAKE_ATM]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2612) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2613) 	if (tb[TCA_CAKE_OVERHEAD]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2614) 		q->rate_overhead = nla_get_s32(tb[TCA_CAKE_OVERHEAD]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2615) 		q->rate_flags |= CAKE_FLAG_OVERHEAD;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2616) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2617) 		q->max_netlen = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2618) 		q->max_adjlen = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2619) 		q->min_netlen = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2620) 		q->min_adjlen = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2621) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2622) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2623) 	if (tb[TCA_CAKE_RAW]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2624) 		q->rate_flags &= ~CAKE_FLAG_OVERHEAD;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2625) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2626) 		q->max_netlen = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2627) 		q->max_adjlen = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2628) 		q->min_netlen = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2629) 		q->min_adjlen = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2630) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2631) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2632) 	if (tb[TCA_CAKE_MPU])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2633) 		q->rate_mpu = nla_get_u32(tb[TCA_CAKE_MPU]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2634) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2635) 	if (tb[TCA_CAKE_RTT]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2636) 		q->interval = nla_get_u32(tb[TCA_CAKE_RTT]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2637) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2638) 		if (!q->interval)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2639) 			q->interval = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2640) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2641) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2642) 	if (tb[TCA_CAKE_TARGET]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2643) 		q->target = nla_get_u32(tb[TCA_CAKE_TARGET]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2644) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2645) 		if (!q->target)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2646) 			q->target = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2647) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2648) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2649) 	if (tb[TCA_CAKE_AUTORATE]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2650) 		if (!!nla_get_u32(tb[TCA_CAKE_AUTORATE]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2651) 			q->rate_flags |= CAKE_FLAG_AUTORATE_INGRESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2652) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2653) 			q->rate_flags &= ~CAKE_FLAG_AUTORATE_INGRESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2654) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2655) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2656) 	if (tb[TCA_CAKE_INGRESS]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2657) 		if (!!nla_get_u32(tb[TCA_CAKE_INGRESS]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2658) 			q->rate_flags |= CAKE_FLAG_INGRESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2659) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2660) 			q->rate_flags &= ~CAKE_FLAG_INGRESS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2661) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2662) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2663) 	if (tb[TCA_CAKE_ACK_FILTER])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2664) 		q->ack_filter = nla_get_u32(tb[TCA_CAKE_ACK_FILTER]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2665) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2666) 	if (tb[TCA_CAKE_MEMORY])
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2667) 		q->buffer_config_limit = nla_get_u32(tb[TCA_CAKE_MEMORY]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2668) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2669) 	if (tb[TCA_CAKE_SPLIT_GSO]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2670) 		if (!!nla_get_u32(tb[TCA_CAKE_SPLIT_GSO]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2671) 			q->rate_flags |= CAKE_FLAG_SPLIT_GSO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2672) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2673) 			q->rate_flags &= ~CAKE_FLAG_SPLIT_GSO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2674) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2675) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2676) 	if (tb[TCA_CAKE_FWMARK]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2677) 		q->fwmark_mask = nla_get_u32(tb[TCA_CAKE_FWMARK]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2678) 		q->fwmark_shft = q->fwmark_mask ? __ffs(q->fwmark_mask) : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2679) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2680) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2681) 	if (q->tins) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2682) 		sch_tree_lock(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2683) 		cake_reconfigure(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2684) 		sch_tree_unlock(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2685) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2686) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2687) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2688) }
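
/* Illustrative runtime reconfiguration, assuming the standard tc-cake
 * frontend; each keyword maps onto one of the TCA_CAKE_* attributes
 * parsed above:
 *
 *	tc qdisc change dev eth0 root cake bandwidth 40Mbit nat \
 *		wash ingress rtt 50ms memlimit 8mb
 *
 * Attributes absent from the message leave the corresponding fields
 * untouched, which is why every option is tested individually.
 */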
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2689) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2690) static void cake_destroy(struct Qdisc *sch)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2691) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2692) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2693) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2694) 	qdisc_watchdog_cancel(&q->watchdog);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2695) 	tcf_block_put(q->block);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2696) 	kvfree(q->tins);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2697) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2698) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2699) static int cake_init(struct Qdisc *sch, struct nlattr *opt,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2700) 		     struct netlink_ext_ack *extack)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2701) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2702) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2703) 	int i, j, err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2704) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2705) 	sch->limit = 10240;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2706) 	q->tin_mode = CAKE_DIFFSERV_DIFFSERV3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2707) 	q->flow_mode  = CAKE_FLOW_TRIPLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2708) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2709) 	q->rate_bps = 0; /* unlimited by default */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2710) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2711) 	q->interval = 100000; /* 100ms default */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2712) 	q->target   =   5000; /* 5ms: codel RFC argues
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2713) 			       * for 5 to 10% of interval
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2714) 			       */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2715) 	q->rate_flags |= CAKE_FLAG_SPLIT_GSO;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2716) 	q->cur_tin = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2717) 	q->cur_flow  = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2718) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2719) 	qdisc_watchdog_init(&q->watchdog, sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2720) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2721) 	if (opt) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2722) 		err = cake_change(sch, opt, extack);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2723) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2724) 		if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2725) 			return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2726) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2727) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2728) 	err = tcf_block_get(&q->block, &q->filter_list, sch, extack);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2729) 	if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2730) 		return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2731) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2732) 	quantum_div[0] = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2733) 	for (i = 1; i <= CAKE_QUEUES; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2734) 		quantum_div[i] = 65535 / i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2735) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2736) 	q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2737) 			   GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2738) 	if (!q->tins)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2739) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2740) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2741) 	for (i = 0; i < CAKE_MAX_TINS; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2742) 		struct cake_tin_data *b = q->tins + i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2743) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2744) 		INIT_LIST_HEAD(&b->new_flows);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2745) 		INIT_LIST_HEAD(&b->old_flows);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2746) 		INIT_LIST_HEAD(&b->decaying_flows);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2747) 		b->sparse_flow_count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2748) 		b->bulk_flow_count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2749) 		b->decaying_flow_count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2750) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2751) 		for (j = 0; j < CAKE_QUEUES; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2752) 			struct cake_flow *flow = b->flows + j;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2753) 			u32 k = j * CAKE_MAX_TINS + i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2754) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2755) 			INIT_LIST_HEAD(&flow->flowchain);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2756) 			cobalt_vars_init(&flow->cvars);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2757) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2758) 			q->overflow_heap[k].t = i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2759) 			q->overflow_heap[k].b = j;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2760) 			b->overflow_idx[j] = k;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2761) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2762) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2763) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2764) 	cake_reconfigure(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2765) 	q->avg_peak_bandwidth = q->rate_bps;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2766) 	q->min_netlen = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2767) 	q->min_adjlen = ~0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2768) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2769) }
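
/* Given the defaults set above, a bare
 *
 *	tc qdisc add dev eth0 root cake
 *
 * (assuming the standard tc-cake frontend) comes up unshaped, with
 * diffserv3 tinning, triple-isolate flow hashing, a 100 ms rtt
 * estimate, a 5 ms AQM target and GSO splitting enabled.
 */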
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2770) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2771) static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2772) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2773) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2774) 	struct nlattr *opts;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2775) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2776) 	opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2777) 	if (!opts)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2778) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2779) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2780) 	if (nla_put_u64_64bit(skb, TCA_CAKE_BASE_RATE64, q->rate_bps,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2781) 			      TCA_CAKE_PAD))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2782) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2783) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2784) 	if (nla_put_u32(skb, TCA_CAKE_FLOW_MODE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2785) 			q->flow_mode & CAKE_FLOW_MASK))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2786) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2787) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2788) 	if (nla_put_u32(skb, TCA_CAKE_RTT, q->interval))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2789) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2790) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2791) 	if (nla_put_u32(skb, TCA_CAKE_TARGET, q->target))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2792) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2793) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2794) 	if (nla_put_u32(skb, TCA_CAKE_MEMORY, q->buffer_config_limit))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2795) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2796) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2797) 	if (nla_put_u32(skb, TCA_CAKE_AUTORATE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2798) 			!!(q->rate_flags & CAKE_FLAG_AUTORATE_INGRESS)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2799) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2800) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2801) 	if (nla_put_u32(skb, TCA_CAKE_INGRESS,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2802) 			!!(q->rate_flags & CAKE_FLAG_INGRESS)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2803) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2804) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2805) 	if (nla_put_u32(skb, TCA_CAKE_ACK_FILTER, q->ack_filter))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2806) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2807) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2808) 	if (nla_put_u32(skb, TCA_CAKE_NAT,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2809) 			!!(q->flow_mode & CAKE_FLOW_NAT_FLAG)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2810) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2811) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2812) 	if (nla_put_u32(skb, TCA_CAKE_DIFFSERV_MODE, q->tin_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2813) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2814) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2815) 	if (nla_put_u32(skb, TCA_CAKE_WASH,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2816) 			!!(q->rate_flags & CAKE_FLAG_WASH)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2817) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2818) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2819) 	if (nla_put_u32(skb, TCA_CAKE_OVERHEAD, q->rate_overhead))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2820) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2821) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2822) 	if (!(q->rate_flags & CAKE_FLAG_OVERHEAD))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2823) 		if (nla_put_u32(skb, TCA_CAKE_RAW, 0))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2824) 			goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2825) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2826) 	if (nla_put_u32(skb, TCA_CAKE_ATM, q->atm_mode))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2827) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2828) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2829) 	if (nla_put_u32(skb, TCA_CAKE_MPU, q->rate_mpu))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2830) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2831) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2832) 	if (nla_put_u32(skb, TCA_CAKE_SPLIT_GSO,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2833) 			!!(q->rate_flags & CAKE_FLAG_SPLIT_GSO)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2834) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2835) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2836) 	if (nla_put_u32(skb, TCA_CAKE_FWMARK, q->fwmark_mask))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2837) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2838) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2839) 	return nla_nest_end(skb, opts);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2840) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2841) nla_put_failure:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2842) 	return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2843) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2844) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2845) static int cake_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2846) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2847) 	struct nlattr *stats = nla_nest_start_noflag(d->skb, TCA_STATS_APP);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2848) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2849) 	struct nlattr *tstats, *ts;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2850) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2851) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2852) 	if (!stats)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2853) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2854) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2855) #define PUT_STAT_U32(attr, data) do {				       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2856) 		if (nla_put_u32(d->skb, TCA_CAKE_STATS_ ## attr, data)) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2857) 			goto nla_put_failure;			       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2858) 	} while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2859) #define PUT_STAT_U64(attr, data) do {				       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2860) 		if (nla_put_u64_64bit(d->skb, TCA_CAKE_STATS_ ## attr, \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2861) 					data, TCA_CAKE_STATS_PAD)) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2862) 			goto nla_put_failure;			       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2863) 	} while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2864) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2865) 	PUT_STAT_U64(CAPACITY_ESTIMATE64, q->avg_peak_bandwidth);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2866) 	PUT_STAT_U32(MEMORY_LIMIT, q->buffer_limit);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2867) 	PUT_STAT_U32(MEMORY_USED, q->buffer_max_used);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2868) 	PUT_STAT_U32(AVG_NETOFF, ((q->avg_netoff + 0x8000) >> 16));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2869) 	PUT_STAT_U32(MAX_NETLEN, q->max_netlen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2870) 	PUT_STAT_U32(MAX_ADJLEN, q->max_adjlen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2871) 	PUT_STAT_U32(MIN_NETLEN, q->min_netlen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2872) 	PUT_STAT_U32(MIN_ADJLEN, q->min_adjlen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2873) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2874) #undef PUT_STAT_U32
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2875) #undef PUT_STAT_U64
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2876) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2877) 	tstats = nla_nest_start_noflag(d->skb, TCA_CAKE_STATS_TIN_STATS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2878) 	if (!tstats)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2879) 		goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2880) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2881) #define PUT_TSTAT_U32(attr, data) do {					\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2882) 		if (nla_put_u32(d->skb, TCA_CAKE_TIN_STATS_ ## attr, data)) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2883) 			goto nla_put_failure;				\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2884) 	} while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2885) #define PUT_TSTAT_U64(attr, data) do {					\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2886) 		if (nla_put_u64_64bit(d->skb, TCA_CAKE_TIN_STATS_ ## attr, \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2887) 					data, TCA_CAKE_TIN_STATS_PAD))	\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2888) 			goto nla_put_failure;				\
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2889) 	} while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2890) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2891) 	for (i = 0; i < q->tin_cnt; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2892) 		struct cake_tin_data *b = &q->tins[q->tin_order[i]];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2893) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2894) 		ts = nla_nest_start_noflag(d->skb, i + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2895) 		if (!ts)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2896) 			goto nla_put_failure;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2897) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2898) 		PUT_TSTAT_U64(THRESHOLD_RATE64, b->tin_rate_bps);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2899) 		PUT_TSTAT_U64(SENT_BYTES64, b->bytes);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2900) 		PUT_TSTAT_U32(BACKLOG_BYTES, b->tin_backlog);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2901) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2902) 		PUT_TSTAT_U32(TARGET_US,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2903) 			      ktime_to_us(ns_to_ktime(b->cparams.target)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2904) 		PUT_TSTAT_U32(INTERVAL_US,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2905) 			      ktime_to_us(ns_to_ktime(b->cparams.interval)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2906) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2907) 		PUT_TSTAT_U32(SENT_PACKETS, b->packets);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2908) 		PUT_TSTAT_U32(DROPPED_PACKETS, b->tin_dropped);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2909) 		PUT_TSTAT_U32(ECN_MARKED_PACKETS, b->tin_ecn_mark);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2910) 		PUT_TSTAT_U32(ACKS_DROPPED_PACKETS, b->ack_drops);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2911) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2912) 		PUT_TSTAT_U32(PEAK_DELAY_US,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2913) 			      ktime_to_us(ns_to_ktime(b->peak_delay)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2914) 		PUT_TSTAT_U32(AVG_DELAY_US,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2915) 			      ktime_to_us(ns_to_ktime(b->avge_delay)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2916) 		PUT_TSTAT_U32(BASE_DELAY_US,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2917) 			      ktime_to_us(ns_to_ktime(b->base_delay)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2918) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2919) 		PUT_TSTAT_U32(WAY_INDIRECT_HITS, b->way_hits);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2920) 		PUT_TSTAT_U32(WAY_MISSES, b->way_misses);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2921) 		PUT_TSTAT_U32(WAY_COLLISIONS, b->way_collisions);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2922) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2923) 		PUT_TSTAT_U32(SPARSE_FLOWS, b->sparse_flow_count +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2924) 					    b->decaying_flow_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2925) 		PUT_TSTAT_U32(BULK_FLOWS, b->bulk_flow_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2926) 		PUT_TSTAT_U32(UNRESPONSIVE_FLOWS, b->unresponsive_flow_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2927) 		PUT_TSTAT_U32(MAX_SKBLEN, b->max_skblen);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2928) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2929) 		PUT_TSTAT_U32(FLOW_QUANTUM, b->flow_quantum);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2930) 		nla_nest_end(d->skb, ts);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2931) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2932) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2933) #undef PUT_TSTAT_U32
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2934) #undef PUT_TSTAT_U64
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2935) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2936) 	nla_nest_end(d->skb, tstats);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2937) 	return nla_nest_end(d->skb, stats);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2938) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2939) nla_put_failure:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2940) 	nla_nest_cancel(d->skb, stats);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2941) 	return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2942) }
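
/* These are the statistics that e.g. "tc -s qdisc show dev eth0"
 * renders as the per-tin table (thresh, target, interval, pk/av/sp
 * delay, way hits/misses/collisions, flow counts and so on), assuming
 * a tc built with cake support.
 */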
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2943) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2944) static struct Qdisc *cake_leaf(struct Qdisc *sch, unsigned long arg)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2945) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2946) 	return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2947) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2948) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2949) static unsigned long cake_find(struct Qdisc *sch, u32 classid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2950) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2951) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2952) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2953) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2954) static unsigned long cake_bind(struct Qdisc *sch, unsigned long parent,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2955) 			       u32 classid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2956) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2957) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2958) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2959) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2960) static void cake_unbind(struct Qdisc *q, unsigned long cl)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2961) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2962) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2963) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2964) static struct tcf_block *cake_tcf_block(struct Qdisc *sch, unsigned long cl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2965) 					struct netlink_ext_ack *extack)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2966) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2967) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2968) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2969) 	if (cl)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2970) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2971) 	return q->block;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2972) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2973) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2974) static int cake_dump_class(struct Qdisc *sch, unsigned long cl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2975) 			   struct sk_buff *skb, struct tcmsg *tcm)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2976) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2977) 	tcm->tcm_handle |= TC_H_MIN(cl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2978) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2979) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2980) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2981) static int cake_dump_class_stats(struct Qdisc *sch, unsigned long cl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2982) 				 struct gnet_dump *d)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2983) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2984) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2985) 	const struct cake_flow *flow = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2986) 	struct gnet_stats_queue qs = { 0 };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2987) 	struct nlattr *stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2988) 	u32 idx = cl - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2989) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2990) 	if (idx < CAKE_QUEUES * q->tin_cnt) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2991) 		const struct cake_tin_data *b =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2992) 			&q->tins[q->tin_order[idx / CAKE_QUEUES]];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2993) 		const struct sk_buff *skb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2994) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2995) 		flow = &b->flows[idx % CAKE_QUEUES];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2996) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2997) 		if (flow->head) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2998) 			sch_tree_lock(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2999) 			skb = flow->head;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3000) 			while (skb) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3001) 				qs.qlen++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3002) 				skb = skb->next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3003) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3004) 			sch_tree_unlock(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3005) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3006) 		qs.backlog = b->backlogs[idx % CAKE_QUEUES];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3007) 		qs.drops = flow->dropped;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3008) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3009) 	if (gnet_stats_copy_queue(d, NULL, &qs, qs.qlen) < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3010) 		return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3011) 	if (flow) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3012) 		ktime_t now = ktime_get();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3013) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3014) 		stats = nla_nest_start_noflag(d->skb, TCA_STATS_APP);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3015) 		if (!stats)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3016) 			return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3017) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3018) #define PUT_STAT_U32(attr, data) do {				       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3019) 		if (nla_put_u32(d->skb, TCA_CAKE_STATS_ ## attr, data)) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3020) 			goto nla_put_failure;			       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3021) 	} while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3022) #define PUT_STAT_S32(attr, data) do {				       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3023) 		if (nla_put_s32(d->skb, TCA_CAKE_STATS_ ## attr, data)) \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3024) 			goto nla_put_failure;			       \
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3025) 	} while (0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3026) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3027) 		PUT_STAT_S32(DEFICIT, flow->deficit);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3028) 		PUT_STAT_U32(DROPPING, flow->cvars.dropping);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3029) 		PUT_STAT_U32(COBALT_COUNT, flow->cvars.count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3030) 		PUT_STAT_U32(P_DROP, flow->cvars.p_drop);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3031) 		if (flow->cvars.p_drop) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3032) 			PUT_STAT_S32(BLUE_TIMER_US,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3033) 				     ktime_to_us(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3034) 					     ktime_sub(now,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3035) 						       flow->cvars.blue_timer)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3036) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3037) 		if (flow->cvars.dropping) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3038) 			PUT_STAT_S32(DROP_NEXT_US,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3039) 				     ktime_to_us(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3040) 					     ktime_sub(now,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3041) 						       flow->cvars.drop_next)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3042) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3043) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3044) 		if (nla_nest_end(d->skb, stats) < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3045) 			return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3046) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3047) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3048) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3049) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3050) nla_put_failure:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3051) 	nla_nest_cancel(d->skb, stats);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3052) 	return -1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3053) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3054) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3055) static void cake_walk(struct Qdisc *sch, struct qdisc_walker *arg)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3056) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3057) 	struct cake_sched_data *q = qdisc_priv(sch);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3058) 	unsigned int i, j;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3059) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3060) 	if (arg->stop)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3061) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3062) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3063) 	for (i = 0; i < q->tin_cnt; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3064) 		struct cake_tin_data *b = &q->tins[q->tin_order[i]];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3065) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3066) 		for (j = 0; j < CAKE_QUEUES; j++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3067) 			if (list_empty(&b->flows[j].flowchain) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3068) 			    arg->count < arg->skip) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3069) 				arg->count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3070) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3071) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3072) 			if (arg->fn(sch, i * CAKE_QUEUES + j + 1, arg) < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3073) 				arg->stop = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3074) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3075) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3076) 			arg->count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3077) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3078) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3079) }
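
/* The class identifiers used here are synthetic: minor id
 * i * CAKE_QUEUES + j + 1 names queue j of tin i in presentation
 * order, matching the "idx = cl - 1" decoding in
 * cake_dump_class_stats() above.  There are no real child classes;
 * cake_leaf/cake_find/cake_bind are deliberately stubs.
 */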
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3080) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3081) static const struct Qdisc_class_ops cake_class_ops = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3082) 	.leaf		=	cake_leaf,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3083) 	.find		=	cake_find,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3084) 	.tcf_block	=	cake_tcf_block,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3085) 	.bind_tcf	=	cake_bind,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3086) 	.unbind_tcf	=	cake_unbind,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3087) 	.dump		=	cake_dump_class,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3088) 	.dump_stats	=	cake_dump_class_stats,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3089) 	.walk		=	cake_walk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3090) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3091) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3092) static struct Qdisc_ops cake_qdisc_ops __read_mostly = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3093) 	.cl_ops		=	&cake_class_ops,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3094) 	.id		=	"cake",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3095) 	.priv_size	=	sizeof(struct cake_sched_data),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3096) 	.enqueue	=	cake_enqueue,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3097) 	.dequeue	=	cake_dequeue,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3098) 	.peek		=	qdisc_peek_dequeued,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3099) 	.init		=	cake_init,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3100) 	.reset		=	cake_reset,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3101) 	.destroy	=	cake_destroy,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3102) 	.change		=	cake_change,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3103) 	.dump		=	cake_dump,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3104) 	.dump_stats	=	cake_dump_stats,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3105) 	.owner		=	THIS_MODULE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3106) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3107) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3108) static int __init cake_module_init(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3109) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3110) 	return register_qdisc(&cake_qdisc_ops);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3111) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3112) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3113) static void __exit cake_module_exit(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3114) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3115) 	unregister_qdisc(&cake_qdisc_ops);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3116) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3117) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3118) module_init(cake_module_init)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3119) module_exit(cake_module_exit)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3120) MODULE_AUTHOR("Jonathan Morton");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3121) MODULE_LICENSE("Dual BSD/GPL");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 3122) MODULE_DESCRIPTION("The CAKE shaper.");