Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2017 Covalent IO, Inc. http://covalent.io
 */

/* A devmap's primary use is as a backend map for the XDP BPF helper call
 * bpf_redirect_map(). Because XDP is mostly concerned with performance we
 * spent some effort to ensure the datapath with redirect maps does not use
 * any locking. This is a quick note on the details.
 *
 * We have three possible paths to get into the devmap control plane: bpf
 * syscalls, bpf programs, and driver side xmit/flush operations. A bpf syscall
 * will invoke an update, delete, or lookup operation. To ensure updates and
 * deletes appear atomic from the datapath side xchg() is used to modify the
 * netdev_map array. Then because the datapath does a lookup into the netdev_map
 * array (read-only) from an RCU critical section we use call_rcu() to wait for
 * an rcu grace period before freeing the old data structures. This ensures the
 * datapath always has a valid copy. However, the datapath does a "flush"
 * operation that pushes any pending packets in the driver outside the RCU
 * critical section. Each bpf_dtab_netdev tracks these pending operations using
 * a per-cpu flush list. The bpf_dtab_netdev object will not be destroyed until
 * this list is empty, indicating outstanding flush operations have completed.
 *
 * BPF syscalls may race with BPF program calls on any of the update, delete
 * or lookup operations. As noted above the xchg() operation also keeps the
 * netdev_map consistent in this case. From the devmap side BPF programs
 * calling into these operations are the same as multiple user space threads
 * making system calls.
 *
 * Finally, any of the above may race with a netdev_unregister notifier. The
 * unregister notifier must search for net devices in the map structure that
 * contain a reference to the net device and remove them. This is a two-step
 * process: (a) dereference the bpf_dtab_netdev object in netdev_map and (b)
 * check to see if the ifindex is the same as the net_device being removed.
 * When removing the dev a cmpxchg() is used to ensure the correct dev is
 * removed; in the case of a concurrent update or delete operation it is
 * possible that the initially referenced dev is no longer in the map. As the
 * notifier hook walks the map we know that new dev references can not be
 * added by the user because core infrastructure ensures dev_get_by_index()
 * calls will fail at this point.
 *
 * The devmap_hash type is a map type which interprets keys as ifindexes and
 * indexes these using a hashmap. This allows maps that use ifindex as key to be
 * densely packed instead of having holes in the lookup array for unused
 * ifindexes. The setup and packet enqueue/send code is shared between the two
 * types of devmap; only the lookup and insertion is different.
 */
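
/* Illustrative usage (not part of this file): a minimal BPF-side sketch of
 * the bpf_redirect_map() flow described above. The map definition macros
 * follow common libbpf conventions; the map name tx_ports and its contents
 * are assumptions, created and populated by a hypothetical loader.
 *
 *	#include <linux/bpf.h>
 *	#include <bpf/bpf_helpers.h>
 *
 *	struct {
 *		__uint(type, BPF_MAP_TYPE_DEVMAP);
 *		__uint(key_size, sizeof(__u32));
 *		__uint(value_size, sizeof(__u32));	// 4-byte value: ifindex only
 *		__uint(max_entries, 64);
 *	} tx_ports SEC(".maps");
 *
 *	SEC("xdp")
 *	int xdp_redirect_slot0(struct xdp_md *ctx)
 *	{
 *		// Redirect via slot 0; returns XDP_REDIRECT on success.
 *		return bpf_redirect_map(&tx_ports, 0, 0);
 *	}
 */
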
#include <linux/bpf.h>
#include <net/xdp.h>
#include <linux/filter.h>
#include <trace/events/xdp.h>

#define DEV_CREATE_FLAG_MASK \
	(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)

struct xdp_dev_bulk_queue {
	struct xdp_frame *q[DEV_MAP_BULK_SIZE];
	struct list_head flush_node;
	struct net_device *dev;
	struct net_device *dev_rx;
	unsigned int count;
};

struct bpf_dtab_netdev {
	struct net_device *dev; /* must be first member, due to tracepoint */
	struct hlist_node index_hlist;
	struct bpf_dtab *dtab;
	struct bpf_prog *xdp_prog;
	struct rcu_head rcu;
	unsigned int idx;
	struct bpf_devmap_val val;
};

struct bpf_dtab {
	struct bpf_map map;
	struct bpf_dtab_netdev **netdev_map; /* DEVMAP type only */
	struct list_head list;

	/* these are only used for DEVMAP_HASH type maps */
	struct hlist_head *dev_index_head;
	spinlock_t index_lock;
	unsigned int items;
	u32 n_buckets;
};

static DEFINE_PER_CPU(struct list_head, dev_flush_list);
static DEFINE_SPINLOCK(dev_map_lock);
static LIST_HEAD(dev_map_list);

static struct hlist_head *dev_map_create_hash(unsigned int entries,
					      int numa_node)
{
	int i;
	struct hlist_head *hash;

	hash = bpf_map_area_alloc((u64) entries * sizeof(*hash), numa_node);
	if (hash != NULL)
		for (i = 0; i < entries; i++)
			INIT_HLIST_HEAD(&hash[i]);

	return hash;
}

static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
						    int idx)
{
	return &dtab->dev_index_head[idx & (dtab->n_buckets - 1)];
}

static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
{
	u32 valsize = attr->value_size;
	u64 cost = 0;
	int err;

	/* check sanity of attributes. 2 value sizes supported:
	 * 4 bytes: ifindex
	 * 8 bytes: ifindex + prog fd
	 */
	if (attr->max_entries == 0 || attr->key_size != 4 ||
	    (valsize != offsetofend(struct bpf_devmap_val, ifindex) &&
	     valsize != offsetofend(struct bpf_devmap_val, bpf_prog.fd)) ||
	    attr->map_flags & ~DEV_CREATE_FLAG_MASK)
		return -EINVAL;
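
	/* For reference, this check validates against the UAPI value layout
	 * from include/uapi/linux/bpf.h (reproduced here for convenience):
	 *
	 *	struct bpf_devmap_val {
	 *		__u32 ifindex;		// device to redirect to
	 *		union {
	 *			int   fd;	// prog fd on map write
	 *			__u32 id;	// prog id on map read
	 *		} bpf_prog;
	 *	};
	 *
	 * so a 4-byte value carries just the ifindex, while an 8-byte value
	 * also carries a per-entry BPF_XDP_DEVMAP program fd.
	 */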

	/* Lookup returns a pointer straight to dev->ifindex, so make sure the
	 * verifier prevents writes from the BPF side
	 */
	attr->map_flags |= BPF_F_RDONLY_PROG;

	bpf_map_init_from_attr(&dtab->map, attr);

	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
		dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);

		if (!dtab->n_buckets) /* Overflow check */
			return -EINVAL;
		cost += (u64) sizeof(struct hlist_head) * dtab->n_buckets;
	} else {
		cost += (u64) dtab->map.max_entries * sizeof(struct bpf_dtab_netdev *);
	}

	/* if map size is larger than memlock limit, reject it */
	err = bpf_map_charge_init(&dtab->map.memory, cost);
	if (err)
		return -EINVAL;

	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
		dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
							   dtab->map.numa_node);
		if (!dtab->dev_index_head)
			goto free_charge;

		spin_lock_init(&dtab->index_lock);
	} else {
		dtab->netdev_map = bpf_map_area_alloc((u64) dtab->map.max_entries *
						      sizeof(struct bpf_dtab_netdev *),
						      dtab->map.numa_node);
		if (!dtab->netdev_map)
			goto free_charge;
	}

	return 0;

free_charge:
	bpf_map_charge_finish(&dtab->map.memory);
	return -ENOMEM;
}

static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
{
	struct bpf_dtab *dtab;
	int err;

	if (!capable(CAP_NET_ADMIN))
		return ERR_PTR(-EPERM);

	dtab = kzalloc(sizeof(*dtab), GFP_USER);
	if (!dtab)
		return ERR_PTR(-ENOMEM);

	err = dev_map_init_map(dtab, attr);
	if (err) {
		kfree(dtab);
		return ERR_PTR(err);
	}

	spin_lock(&dev_map_lock);
	list_add_tail_rcu(&dtab->list, &dev_map_list);
	spin_unlock(&dev_map_lock);

	return &dtab->map;
}

static void dev_map_free(struct bpf_map *map)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	int i;

	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
	 * so the programs (there can be more than one that used this map) were
	 * disconnected from events. The following synchronize_rcu() guarantees
	 * that both rcu read critical sections complete and that
	 * preempt-disable regions (NAPI being the relevant context here) finish,
	 * so we are certain there will be no further reads against the
	 * netdev_map and all flush operations are complete. Flush operations
	 * can only be done from NAPI context for this reason.
	 */

	spin_lock(&dev_map_lock);
	list_del_rcu(&dtab->list);
	spin_unlock(&dev_map_lock);

	bpf_clear_redirect_map(map);
	synchronize_rcu();

	/* Make sure prior __dev_map_entry_free() calls have completed. */
	rcu_barrier();

	if (dtab->map.map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
		for (i = 0; i < dtab->n_buckets; i++) {
			struct bpf_dtab_netdev *dev;
			struct hlist_head *head;
			struct hlist_node *next;

			head = dev_map_index_hash(dtab, i);

			hlist_for_each_entry_safe(dev, next, head, index_hlist) {
				hlist_del_rcu(&dev->index_hlist);
				if (dev->xdp_prog)
					bpf_prog_put(dev->xdp_prog);
				dev_put(dev->dev);
				kfree(dev);
			}
		}

		bpf_map_area_free(dtab->dev_index_head);
	} else {
		for (i = 0; i < dtab->map.max_entries; i++) {
			struct bpf_dtab_netdev *dev;

			dev = dtab->netdev_map[i];
			if (!dev)
				continue;

			if (dev->xdp_prog)
				bpf_prog_put(dev->xdp_prog);
			dev_put(dev->dev);
			kfree(dev);
		}

		bpf_map_area_free(dtab->netdev_map);
	}

	kfree(dtab);
}

static int dev_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	u32 index = key ? *(u32 *)key : U32_MAX;
	u32 *next = next_key;

	if (index >= dtab->map.max_entries) {
		*next = 0;
		return 0;
	}

	if (index == dtab->map.max_entries - 1)
		return -ENOENT;
	*next = index + 1;
	return 0;
}
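
/* Illustrative user-space sketch (not part of this file): iterating a DEVMAP
 * with the get_next_key semantics implemented above. Passing a NULL key (or
 * an out-of-range one) restarts from slot 0. Uses libbpf's syscall wrappers;
 * map_fd is assumed to come from elsewhere, and the 4-byte ifindex-only
 * value layout is assumed.
 *
 *	#include <bpf/bpf.h>
 *	#include <stdio.h>
 *
 *	static void dump_devmap(int map_fd)
 *	{
 *		__u32 key, next_key, ifindex;
 *		int err;
 *
 *		err = bpf_map_get_next_key(map_fd, NULL, &key);
 *		while (!err) {
 *			// Empty slots return -ENOENT from lookup and are skipped.
 *			if (!bpf_map_lookup_elem(map_fd, &key, &ifindex))
 *				printf("slot %u -> ifindex %u\n", key, ifindex);
 *			err = bpf_map_get_next_key(map_fd, &key, &next_key);
 *			key = next_key;
 *		}
 *	}
 */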

struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct hlist_head *head = dev_map_index_hash(dtab, key);
	struct bpf_dtab_netdev *dev;

	hlist_for_each_entry_rcu(dev, head, index_hlist,
				 lockdep_is_held(&dtab->index_lock))
		if (dev->idx == key)
			return dev;

	return NULL;
}

static int dev_map_hash_get_next_key(struct bpf_map *map, void *key,
				    void *next_key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	u32 idx, *next = next_key;
	struct bpf_dtab_netdev *dev, *next_dev;
	struct hlist_head *head;
	int i = 0;

	if (!key)
		goto find_first;

	idx = *(u32 *)key;

	dev = __dev_map_hash_lookup_elem(map, idx);
	if (!dev)
		goto find_first;

	next_dev = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(&dev->index_hlist)),
				    struct bpf_dtab_netdev, index_hlist);

	if (next_dev) {
		*next = next_dev->idx;
		return 0;
	}

	i = idx & (dtab->n_buckets - 1);
	i++;

 find_first:
	for (; i < dtab->n_buckets; i++) {
		head = dev_map_index_hash(dtab, i);

		next_dev = hlist_entry_safe(rcu_dereference_raw(hlist_first_rcu(head)),
					    struct bpf_dtab_netdev,
					    index_hlist);
		if (next_dev) {
			*next = next_dev->idx;
			return 0;
		}
	}

	return -ENOENT;
}

bool dev_map_can_have_prog(struct bpf_map *map)
{
	if ((map->map_type == BPF_MAP_TYPE_DEVMAP ||
	     map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) &&
	    map->value_size != offsetofend(struct bpf_devmap_val, ifindex))
		return true;

	return false;
}

static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
{
	struct net_device *dev = bq->dev;
	int sent = 0, drops = 0, err = 0;
	int i;

	if (unlikely(!bq->count))
		return;

	for (i = 0; i < bq->count; i++) {
		struct xdp_frame *xdpf = bq->q[i];

		prefetch(xdpf);
	}

	sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count, bq->q, flags);
	if (sent < 0) {
		err = sent;
		sent = 0;
		goto error;
	}
	drops = bq->count - sent;
out:
	bq->count = 0;

	trace_xdp_devmap_xmit(bq->dev_rx, dev, sent, drops, err);
	bq->dev_rx = NULL;
	__list_del_clearprev(&bq->flush_node);
	return;
error:
	/* If ndo_xdp_xmit fails with an errno, no frames have been
	 * xmit'ed and it's our responsibility to free them all.
	 */
	for (i = 0; i < bq->count; i++) {
		struct xdp_frame *xdpf = bq->q[i];

		xdp_return_frame_rx_napi(xdpf);
		drops++;
	}
	goto out;
}

/* __dev_flush is called from xdp_do_flush() which _must_ be signaled
 * from the driver before returning from its napi->poll() routine. The poll()
 * routine is called either from busy_poll context or net_rx_action signaled
 * from NET_RX_SOFTIRQ. Either way the poll routine must complete before the
 * net device can be torn down. On devmap tear down we ensure the flush list
 * is empty before completing to ensure all flush operations have completed.
 * When drivers update the bpf program they may need to ensure any flush ops
 * are also complete. Using synchronize_rcu or call_rcu will suffice for this
 * because both wait for napi context to exit.
 */
void __dev_flush(void)
{
	struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);
	struct xdp_dev_bulk_queue *bq, *tmp;

	list_for_each_entry_safe(bq, tmp, flush_list, flush_node)
		bq_xmit_all(bq, XDP_XMIT_FLUSH);
}
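
/* Illustrative driver-side sketch (not part of this file) of the contract
 * described above, with hypothetical mydrv_* names: the flush runs inside
 * napi->poll() so the per-CPU flush list is never touched outside NAPI.
 *
 *	static int mydrv_napi_poll(struct napi_struct *napi, int budget)
 *	{
 *		int work = 0;
 *
 *		while (work < budget && mydrv_rx_one(napi))
 *			work++;	// XDP_REDIRECT verdicts land in bq_enqueue()
 *
 *		xdp_do_flush();	// drains this CPU's dev_flush_list via __dev_flush()
 *
 *		if (work < budget)
 *			napi_complete_done(napi, work);
 *		return work;
 *	}
 */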

/* rcu_read_lock (from syscall and BPF contexts) ensures that if a delete
 * and/or update happens in parallel here, a dev_put won't happen until after
 * reading the ifindex.
 */
struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *obj;

	if (key >= map->max_entries)
		return NULL;

	obj = READ_ONCE(dtab->netdev_map[key]);
	return obj;
}

/* Runs under RCU-read-side, plus in softirq under NAPI protection.
 * Thus, safe percpu variable access.
 */
static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
		       struct net_device *dev_rx)
{
	struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);
	struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq);

	if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
		bq_xmit_all(bq, 0);

	/* Ingress dev_rx will be the same for all xdp_frame's in
	 * bulk_queue, because the bq is stored per-CPU and must be flushed
	 * at the end of the net_device driver's NAPI function.
	 */
	if (!bq->dev_rx)
		bq->dev_rx = dev_rx;

	bq->q[bq->count++] = xdpf;

	if (!bq->flush_node.prev)
		list_add(&bq->flush_node, flush_list);
}

static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
			       struct net_device *dev_rx)
{
	struct xdp_frame *xdpf;
	int err;

	if (!dev->netdev_ops->ndo_xdp_xmit)
		return -EOPNOTSUPP;

	err = xdp_ok_fwd_dev(dev, xdp->data_end - xdp->data);
	if (unlikely(err))
		return err;

	xdpf = xdp_convert_buff_to_frame(xdp);
	if (unlikely(!xdpf))
		return -EOVERFLOW;

	bq_enqueue(dev, xdpf, dev_rx);
	return 0;
}

static struct xdp_buff *dev_map_run_prog(struct net_device *dev,
					 struct xdp_buff *xdp,
					 struct bpf_prog *xdp_prog)
{
	struct xdp_txq_info txq = { .dev = dev };
	u32 act;

	xdp_set_data_meta_invalid(xdp);
	xdp->txq = &txq;

	act = bpf_prog_run_xdp(xdp_prog, xdp);
	switch (act) {
	case XDP_PASS:
		return xdp;
	case XDP_DROP:
		break;
	default:
		bpf_warn_invalid_xdp_action(act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(dev, xdp_prog, act);
		break;
	}

	xdp_return_buff(xdp);
	return NULL;
}

int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
		    struct net_device *dev_rx)
{
	return __xdp_enqueue(dev, xdp, dev_rx);
}

int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp,
		    struct net_device *dev_rx)
{
	struct net_device *dev = dst->dev;

	if (dst->xdp_prog) {
		xdp = dev_map_run_prog(dev, xdp, dst->xdp_prog);
		if (!xdp)
			return 0;
	}
	return __xdp_enqueue(dev, xdp, dev_rx);
}

int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
			     struct bpf_prog *xdp_prog)
{
	int err;

	err = xdp_ok_fwd_dev(dst->dev, skb->len);
	if (unlikely(err))
		return err;
	skb->dev = dst->dev;
	generic_xdp_tx(skb, xdp_prog);

	return 0;
}

static void *dev_map_lookup_elem(struct bpf_map *map, void *key)
{
	struct bpf_dtab_netdev *obj = __dev_map_lookup_elem(map, *(u32 *)key);

	return obj ? &obj->val : NULL;
}

static void *dev_map_hash_lookup_elem(struct bpf_map *map, void *key)
{
	struct bpf_dtab_netdev *obj = __dev_map_hash_lookup_elem(map,
								*(u32 *)key);
	return obj ? &obj->val : NULL;
}

static void __dev_map_entry_free(struct rcu_head *rcu)
{
	struct bpf_dtab_netdev *dev;

	dev = container_of(rcu, struct bpf_dtab_netdev, rcu);
	if (dev->xdp_prog)
		bpf_prog_put(dev->xdp_prog);
	dev_put(dev->dev);
	kfree(dev);
}

static int dev_map_delete_elem(struct bpf_map *map, void *key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *old_dev;
	int k = *(u32 *)key;

	if (k >= map->max_entries)
		return -EINVAL;

	/* Use call_rcu() here to ensure any rcu critical sections have
	 * completed as well as any flush operations, because call_rcu
	 * will wait for the preempt-disable region to complete, NAPI in this
	 * context. Additionally, the driver tear down ensures all soft irqs
	 * are complete before removing the net device in the case where
	 * dev_put drops the refcount to zero.
	 */
	old_dev = xchg(&dtab->netdev_map[k], NULL);
	if (old_dev)
		call_rcu(&old_dev->rcu, __dev_map_entry_free);
	return 0;
}

static int dev_map_hash_delete_elem(struct bpf_map *map, void *key)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *old_dev;
	int k = *(u32 *)key;
	unsigned long flags;
	int ret = -ENOENT;

	spin_lock_irqsave(&dtab->index_lock, flags);

	old_dev = __dev_map_hash_lookup_elem(map, k);
	if (old_dev) {
		dtab->items--;
		hlist_del_init_rcu(&old_dev->index_hlist);
		call_rcu(&old_dev->rcu, __dev_map_entry_free);
		ret = 0;
	}
	spin_unlock_irqrestore(&dtab->index_lock, flags);

	return ret;
}

static struct bpf_dtab_netdev *__dev_map_alloc_node(struct net *net,
						    struct bpf_dtab *dtab,
						    struct bpf_devmap_val *val,
						    unsigned int idx)
{
	struct bpf_prog *prog = NULL;
	struct bpf_dtab_netdev *dev;

	dev = kmalloc_node(sizeof(*dev), GFP_ATOMIC | __GFP_NOWARN,
			   dtab->map.numa_node);
	if (!dev)
		return ERR_PTR(-ENOMEM);

	dev->dev = dev_get_by_index(net, val->ifindex);
	if (!dev->dev)
		goto err_out;

	if (val->bpf_prog.fd > 0) {
		prog = bpf_prog_get_type_dev(val->bpf_prog.fd,
					     BPF_PROG_TYPE_XDP, false);
		if (IS_ERR(prog))
			goto err_put_dev;
		if (prog->expected_attach_type != BPF_XDP_DEVMAP)
			goto err_put_prog;
	}

	dev->idx = idx;
	dev->dtab = dtab;
	if (prog) {
		dev->xdp_prog = prog;
		dev->val.bpf_prog.id = prog->aux->id;
	} else {
		dev->xdp_prog = NULL;
		dev->val.bpf_prog.id = 0;
	}
	dev->val.ifindex = val->ifindex;

	return dev;
err_put_prog:
	bpf_prog_put(prog);
err_put_dev:
	dev_put(dev->dev);
err_out:
	kfree(dev);
	return ERR_PTR(-EINVAL);
}

static int __dev_map_update_elem(struct net *net, struct bpf_map *map,
				 void *key, void *value, u64 map_flags)
{
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *dev, *old_dev;
	struct bpf_devmap_val val = {};
	u32 i = *(u32 *)key;

	if (unlikely(map_flags > BPF_EXIST))
		return -EINVAL;
	if (unlikely(i >= dtab->map.max_entries))
		return -E2BIG;
	if (unlikely(map_flags == BPF_NOEXIST))
		return -EEXIST;

	/* already verified value_size <= sizeof val */
	memcpy(&val, value, map->value_size);

	if (!val.ifindex) {
		dev = NULL;
		/* can not specify fd if ifindex is 0 */
		if (val.bpf_prog.fd > 0)
			return -EINVAL;
	} else {
		dev = __dev_map_alloc_node(net, dtab, &val, i);
		if (IS_ERR(dev))
			return PTR_ERR(dev);
	}

	/* Use call_rcu() here to ensure rcu critical sections have completed,
	 * remembering that the driver side flush operation will happen before
	 * the net device is removed.
	 */
	old_dev = xchg(&dtab->netdev_map[i], dev);
	if (old_dev)
		call_rcu(&old_dev->rcu, __dev_map_entry_free);

	return 0;
}

static int dev_map_update_elem(struct bpf_map *map, void *key, void *value,
			       u64 map_flags)
{
	return __dev_map_update_elem(current->nsproxy->net_ns,
				     map, key, value, map_flags);
}
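
/* Illustrative user-space sketch (not part of this file): populating a DEVMAP
 * slot through the update path above using the 8-byte value layout. map_fd,
 * devmap_prog_fd and the helper name are assumptions; the attached program
 * must have expected_attach_type BPF_XDP_DEVMAP or the update fails.
 *
 *	#include <bpf/bpf.h>
 *	#include <linux/bpf.h>
 *
 *	static int add_entry(int map_fd, int devmap_prog_fd, __u32 ifindex)
 *	{
 *		struct bpf_devmap_val val = {
 *			.ifindex = ifindex,
 *			.bpf_prog.fd = devmap_prog_fd,
 *		};
 *		__u32 key = 0;
 *
 *		return bpf_map_update_elem(map_fd, &key, &val, BPF_ANY);
 *	}
 */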
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 690) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 691) static int __dev_map_hash_update_elem(struct net *net, struct bpf_map *map,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 692) 				     void *key, void *value, u64 map_flags)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 693) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 694) 	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 695) 	struct bpf_dtab_netdev *dev, *old_dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 696) 	struct bpf_devmap_val val = {};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 697) 	u32 idx = *(u32 *)key;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 698) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 699) 	int err = -EEXIST;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 700) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 701) 	/* already verified value_size <= sizeof val */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 702) 	memcpy(&val, value, map->value_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 703) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 704) 	if (unlikely(map_flags > BPF_EXIST || !val.ifindex))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 705) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 706) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 707) 	spin_lock_irqsave(&dtab->index_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 708) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 709) 	old_dev = __dev_map_hash_lookup_elem(map, idx);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 710) 	if (old_dev && (map_flags & BPF_NOEXIST))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 711) 		goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 712) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 713) 	dev = __dev_map_alloc_node(net, dtab, &val, idx);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 714) 	if (IS_ERR(dev)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 715) 		err = PTR_ERR(dev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 716) 		goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 717) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 718) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 719) 	if (old_dev) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 720) 		hlist_del_rcu(&old_dev->index_hlist);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 721) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 722) 		if (dtab->items >= dtab->map.max_entries) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 723) 			spin_unlock_irqrestore(&dtab->index_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 724) 			call_rcu(&dev->rcu, __dev_map_entry_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 725) 			return -E2BIG;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 726) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 727) 		dtab->items++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 728) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 729) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 730) 	hlist_add_head_rcu(&dev->index_hlist,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 731) 			   dev_map_index_hash(dtab, idx));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 732) 	spin_unlock_irqrestore(&dtab->index_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 733) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 734) 	if (old_dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 735) 		call_rcu(&old_dev->rcu, __dev_map_entry_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 736) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 737) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 738) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 739) out_err:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 740) 	spin_unlock_irqrestore(&dtab->index_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 741) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 742) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 743) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 744) static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 745) 				   u64 map_flags)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 746) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 747) 	return __dev_map_hash_update_elem(current->nsproxy->net_ns,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 748) 					 map, key, value, map_flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 749) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 750) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 751) static int dev_map_btf_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 752) const struct bpf_map_ops dev_map_ops = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 753) 	.map_meta_equal = bpf_map_meta_equal,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 754) 	.map_alloc = dev_map_alloc,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 755) 	.map_free = dev_map_free,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 756) 	.map_get_next_key = dev_map_get_next_key,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 757) 	.map_lookup_elem = dev_map_lookup_elem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 758) 	.map_update_elem = dev_map_update_elem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 759) 	.map_delete_elem = dev_map_delete_elem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 760) 	.map_check_btf = map_check_no_btf,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 761) 	.map_btf_name = "bpf_dtab",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 762) 	.map_btf_id = &dev_map_btf_id,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 763) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 764) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 765) static int dev_map_hash_map_btf_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 766) const struct bpf_map_ops dev_map_hash_ops = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 767) 	.map_meta_equal = bpf_map_meta_equal,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 768) 	.map_alloc = dev_map_alloc,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 769) 	.map_free = dev_map_free,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 770) 	.map_get_next_key = dev_map_hash_get_next_key,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 771) 	.map_lookup_elem = dev_map_hash_lookup_elem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 772) 	.map_update_elem = dev_map_hash_update_elem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 773) 	.map_delete_elem = dev_map_hash_delete_elem,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 774) 	.map_check_btf = map_check_no_btf,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 775) 	.map_btf_name = "bpf_dtab",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 776) 	.map_btf_id = &dev_map_hash_map_btf_id,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 777) };
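/* Editor's sketch: the control-plane path into dev_map_hash_update_elem()
 * as driven from user space via libbpf. Hypothetical helper, assuming a
 * DEVMAP_HASH created with 4-byte values (a bare ifindex):
 *
 *	#include <bpf/bpf.h>
 *	#include <net/if.h>
 *	#include <linux/bpf.h>
 *
 *	int add_egress_dev(int map_fd, __u32 key, const char *ifname)
 *	{
 *		__u32 ifindex = if_nametoindex(ifname);
 *
 *		if (!ifindex)
 *			return -1;
 *		// BPF_ANY: insert or replace; a replaced entry is only
 *		// freed after an RCU grace period, as implemented above.
 *		return bpf_map_update_elem(map_fd, &key, &ifindex, BPF_ANY);
 *	}
 */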
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 778) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 779) static void dev_map_hash_remove_netdev(struct bpf_dtab *dtab,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 780) 				       struct net_device *netdev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 781) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 782) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 783) 	u32 i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 784) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 785) 	spin_lock_irqsave(&dtab->index_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 786) 	for (i = 0; i < dtab->n_buckets; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 787) 		struct bpf_dtab_netdev *dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 788) 		struct hlist_head *head;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 789) 		struct hlist_node *next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 790) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 791) 		head = dev_map_index_hash(dtab, i);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 792) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 793) 		hlist_for_each_entry_safe(dev, next, head, index_hlist) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 794) 			if (netdev != dev->dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 795) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 796) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 797) 			dtab->items--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 798) 			hlist_del_rcu(&dev->index_hlist);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 799) 			call_rcu(&dev->rcu, __dev_map_entry_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 800) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 801) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 802) 	spin_unlock_irqrestore(&dtab->index_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 803) }
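/* Editor's note: writers on the hash variant are serialized by index_lock,
 * so the removal above can simply hlist_del_rcu() each match; the _safe
 * iterator is needed only because entries are unlinked mid-walk. The array
 * variant has no such lock, which is why the notifier below falls back to
 * cmpxchg() on the netdev_map slots.
 */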
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 804) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 805) static int dev_map_notification(struct notifier_block *notifier,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 806) 				ulong event, void *ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 807) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 808) 	struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 809) 	struct bpf_dtab *dtab;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 810) 	int i, cpu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 811) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 812) 	switch (event) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 813) 	case NETDEV_REGISTER:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 814) 		if (!netdev->netdev_ops->ndo_xdp_xmit || netdev->xdp_bulkq)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 815) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 816) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 817) 		/* will be freed in free_netdev() */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 818) 		netdev->xdp_bulkq = alloc_percpu(struct xdp_dev_bulk_queue);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 819) 		if (!netdev->xdp_bulkq)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 820) 			return NOTIFY_BAD;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 821) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 822) 		for_each_possible_cpu(cpu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 823) 			per_cpu_ptr(netdev->xdp_bulkq, cpu)->dev = netdev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 824) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 825) 	case NETDEV_UNREGISTER:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 826) 		/* This rcu_read_lock/unlock pair serves two purposes:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 827) 		 * dev_map_list is an RCU list, and it ensures that a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 828) 		 * delete operation does not free a netdev_map entry while
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 829) 		 * we are comparing it against the netdev being
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 830) 		 * unregistered.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 831) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 831) 		rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 832) 		list_for_each_entry_rcu(dtab, &dev_map_list, list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 833) 			if (dtab->map.map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 834) 				dev_map_hash_remove_netdev(dtab, netdev);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 835) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 836) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 837) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 838) 			for (i = 0; i < dtab->map.max_entries; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 839) 				struct bpf_dtab_netdev *dev, *odev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 840) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 841) 				dev = READ_ONCE(dtab->netdev_map[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 842) 				if (!dev || netdev != dev->dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 843) 					continue;
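				/* Clear the slot only if it still holds the
				 * entry we matched above; if a concurrent
				 * update or delete won the race, that path
				 * owns the deferred free instead.
				 */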
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 844) 				odev = cmpxchg(&dtab->netdev_map[i], dev, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 845) 				if (dev == odev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 846) 					call_rcu(&dev->rcu,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 847) 						 __dev_map_entry_free);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 848) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 849) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 850) 		rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 851) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 852) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 853) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 854) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 855) 	return NOTIFY_OK;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 856) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 857) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 858) static struct notifier_block dev_map_notifier = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 859) 	.notifier_call = dev_map_notification,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 860) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 861) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 862) static int __init dev_map_init(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 863) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 864) 	int cpu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 865) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 866) 	/* Ensure the tracepoint shadow struct _bpf_dtab_netdev stays in sync */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 867) 	BUILD_BUG_ON(offsetof(struct bpf_dtab_netdev, dev) !=
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 868) 		     offsetof(struct _bpf_dtab_netdev, dev));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 869) 	register_netdevice_notifier(&dev_map_notifier);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 870) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 871) 	for_each_possible_cpu(cpu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 872) 		INIT_LIST_HEAD(&per_cpu(dev_flush_list, cpu));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 873) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 874) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 875) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 876) subsys_initcall(dev_map_init);
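/* Editor's note: subsys_initcall() runs dev_map_init() ahead of device-
 * and module-level initcalls, so the notifier and the per-cpu
 * dev_flush_list heads are in place before network drivers can start
 * registering netdevs.
 */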