Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_CLOSURE_H
#define _LINUX_CLOSURE_H

#include <linux/llist.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/workqueue.h>

/*
 * Closure is perhaps the most overused and abused term in computer science, but
 * since I've been unable to come up with anything better you're stuck with it
 * again.
 *
 * What are closures?
 *
 * They embed a refcount. The basic idea is they count "things that are in
 * progress" - in flight bios, some other thread that's doing something else -
 * anything you might want to wait on.
 *
 * The refcount may be manipulated with closure_get() and closure_put().
 * closure_put() is where many of the interesting things happen, when it causes
 * the refcount to go to 0.
 *
 * Closures can be used to wait on things both synchronously and asynchronously,
 * and synchronous and asynchronous use can be mixed without restriction. To
 * wait synchronously, use closure_sync() - you will sleep until your closure's
 * refcount hits 1.
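 *
 * For example, a minimal synchronous sketch - as in the larger example below,
 * foo_endio() and the bio calls are illustrative names, not part of this API:
 *
 * struct closure cl;
 *
 * closure_init_stack(&cl);
 * closure_get(&cl);
 * bio->bi_endio = foo_endio;
 * bio_submit(bio);
 * closure_sync(&cl);
 *
 * closure_sync() returns once foo_endio()'s closure_put() has dropped the
 * refcount back to 1.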
 *
 * To wait asynchronously, use
 *   continue_at(cl, next_function, workqueue);
 *
 * passing it, as you might expect, the function to run when nothing is pending
 * and the workqueue to run that function out of.
 *
 * continue_at() also, critically, requires a 'return' immediately following the
 * location where this macro is referenced, to return to the calling function.
 * There's good reason for this.
 *
 * To safely use closures asynchronously, they must always have a refcount
 * while they are running, owned by the thread that is running them. Otherwise,
 * suppose you submit some bios and wish to have a function run when they all
 * complete:
 *
 * foo_endio(struct bio *bio)
 * {
 *	closure_put(cl);
 * }
 *
 * closure_init(cl);
 *
 * do_stuff();
 * closure_get(cl);
 * bio1->bi_endio = foo_endio;
 * bio_submit(bio1);
 *
 * do_more_stuff();
 * closure_get(cl);
 * bio2->bi_endio = foo_endio;
 * bio_submit(bio2);
 *
 * continue_at(cl, complete_some_read, system_wq);
 *
 * If the closure's refcount started at 0, complete_some_read() could run
 * before the second bio was submitted - which is almost always not what you
 * want! More importantly, it wouldn't be possible to say whether the original
 * thread or complete_some_read()'s thread owned the closure - and whatever
 * state it was associated with!
 *
 * So, closure_init() initializes a closure's refcount to 1 - and when a
 * closure_fn is run, the refcount will be reset to 1 first.
 *
 * Then, the rule is - if you got the refcount with closure_get(), release it
 * with closure_put() (i.e., in a bio->bi_endio function). If you have a
 * refcount on a closure because you called closure_init() or you were run out
 * of a closure - _always_ use continue_at(). Doing so consistently will help
 * eliminate an entire class of particularly pernicious races.
 *
 * Lastly, you might have a wait list dedicated to a specific event, and have no
 * need to specify the condition - you just want to wait until someone runs
 * closure_wake_up() on the appropriate wait list. In that case, just use
 * closure_wait(). It will return either true or false, depending on whether the
 * closure was already on a wait list or not - a closure can only be on one wait
 * list at a time.
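 *
 * For example, a sketch of that pattern - frob_wait and frob_done() are
 * illustrative names, not part of this API. The waiter, running out of a
 * closure, does:
 *
 * closure_wait(&frob_wait, cl);
 * continue_at(cl, frob_done, system_wq);
 *
 * and whichever thread signals the event does:
 *
 * closure_wake_up(&frob_wait);
 *
 * frob_done() then runs out of system_wq once the wakeup drops the ref that
 * closure_wait() took.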
 *
 * Parents:
 *
 * closure_init() takes two arguments - it takes the closure to initialize, and
 * a (possibly null) parent.
 *
 * If parent is non-null, the new closure will have a refcount for its lifetime;
 * a closure is considered to be "finished" when its refcount hits 0 and the
 * function to run is null. Hence
 *
 * continue_at(cl, NULL, NULL);
 *
 * returns up the (spaghetti) stack of closures, precisely like normal return
 * returns up the C stack. continue_at() with a non-null fn is better thought of
 * as doing a tail call.
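 *
 * For example, a sketch of a child operation run on behalf of a parent
 * closure - bar_done() and the child struct are illustrative names, not part
 * of this API:
 *
 * static void bar_done(struct closure *cl)
 * {
 *	closure_return(cl);
 * }
 *
 * closure_init(&child->cl, parent_cl);
 * ...
 * continue_at(&child->cl, bar_done, system_wq);
 *
 * When bar_done() finishes via closure_return(), the child's ref on parent_cl
 * is dropped - the "return up the stack" described above.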
 *
 * All this implies that a closure should typically be embedded in a particular
 * struct (which its refcount will normally control the lifetime of), and that
 * struct can very much be thought of as a stack frame.
 */

struct closure;
struct closure_syncer;
typedef void (closure_fn) (struct closure *);
extern struct dentry *bcache_debug;

struct closure_waitlist {
	struct llist_head	list;
};

enum closure_state {
	/*
	 * CLOSURE_WAITING: Set iff the closure is on a waitlist. Must be set by
	 * the thread that owns the closure, and cleared by the thread that's
	 * waking up the closure.
	 *
	 * The rest are for debugging and don't affect behaviour:
	 *
	 * CLOSURE_RUNNING: Set when a closure is running (i.e. by
	 * closure_init() and when closure_put() runs the next function), and
	 * must be cleared before remaining hits 0. Primarily to help guard
	 * against incorrect usage and accidentally transferring references.
	 * continue_at() and closure_return() clear it for you; if you're doing
	 * something unusual you can use closure_set_dead(), which also helps
	 * annotate where references are being transferred.
	 */

	CLOSURE_BITS_START	= (1U << 26),
	CLOSURE_DESTRUCTOR	= (1U << 26),
	CLOSURE_WAITING		= (1U << 28),
	CLOSURE_RUNNING		= (1U << 30),
};

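/*
 * Each flag bit above is followed by an unused guard bit. The flags are set
 * and cleared with atomic arithmetic rather than bitops, so (for instance)
 * setting a flag that is already set carries into the adjacent guard bit,
 * where the debug checks in closure_put()/closure_sub() can catch it.
 */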
#define CLOSURE_GUARD_MASK					\
	((CLOSURE_DESTRUCTOR|CLOSURE_WAITING|CLOSURE_RUNNING) << 1)

#define CLOSURE_REMAINING_MASK		(CLOSURE_BITS_START - 1)
#define CLOSURE_REMAINING_INITIALIZER	(1|CLOSURE_RUNNING)

struct closure {
	union {
		struct {
			struct workqueue_struct *wq;
			struct closure_syncer	*s;
			struct llist_node	list;
			closure_fn		*fn;
		};
		struct work_struct	work;
	};

	struct closure		*parent;

	atomic_t		remaining;

#ifdef CONFIG_BCACHE_CLOSURES_DEBUG
#define CLOSURE_MAGIC_DEAD	0xc054dead
#define CLOSURE_MAGIC_ALIVE	0xc054a11e

	unsigned int		magic;
	struct list_head	all;
	unsigned long		ip;
	unsigned long		waiting_on;
#endif
};

void closure_sub(struct closure *cl, int v);
void closure_put(struct closure *cl);
void __closure_wake_up(struct closure_waitlist *list);
bool closure_wait(struct closure_waitlist *list, struct closure *cl);
void __closure_sync(struct closure *cl);

/**
 * closure_sync - sleep until a closure has nothing left to wait on
 *
 * Sleeps until the refcount hits 1 - the thread that's running the closure owns
 * the last refcount.
 */
static inline void closure_sync(struct closure *cl)
{
	if ((atomic_read(&cl->remaining) & CLOSURE_REMAINING_MASK) != 1)
		__closure_sync(cl);
}

#ifdef CONFIG_BCACHE_CLOSURES_DEBUG

void closure_debug_init(void);
void closure_debug_create(struct closure *cl);
void closure_debug_destroy(struct closure *cl);

#else

static inline void closure_debug_init(void) {}
static inline void closure_debug_create(struct closure *cl) {}
static inline void closure_debug_destroy(struct closure *cl) {}

#endif

static inline void closure_set_ip(struct closure *cl)
{
#ifdef CONFIG_BCACHE_CLOSURES_DEBUG
	cl->ip = _THIS_IP_;
#endif
}

static inline void closure_set_ret_ip(struct closure *cl)
{
#ifdef CONFIG_BCACHE_CLOSURES_DEBUG
	cl->ip = _RET_IP_;
#endif
}

static inline void closure_set_waiting(struct closure *cl, unsigned long f)
{
#ifdef CONFIG_BCACHE_CLOSURES_DEBUG
	cl->waiting_on = f;
#endif
}

static inline void closure_set_stopped(struct closure *cl)
{
	atomic_sub(CLOSURE_RUNNING, &cl->remaining);
}

static inline void set_closure_fn(struct closure *cl, closure_fn *fn,
				  struct workqueue_struct *wq)
{
	closure_set_ip(cl);
	cl->fn = fn;
	cl->wq = wq;
	/* make the stores to fn and wq visible before the atomic_dec() in closure_put() */
	smp_mb__before_atomic();
}

static inline void closure_queue(struct closure *cl)
{
	struct workqueue_struct *wq = cl->wq;
	/*
	 * Changes to struct closure, struct work_struct, or the structs they
	 * contain may leave work.func no longer aliasing fn; the BUILD_BUG_ON
	 * below catches that at compile time.
	 */
	BUILD_BUG_ON(offsetof(struct closure, fn)
		     != offsetof(struct work_struct, func));
	if (wq) {
		INIT_WORK(&cl->work, cl->work.func);
		BUG_ON(!queue_work(wq, &cl->work));
	} else
		cl->fn(cl);
}

/**
 * closure_get - increment a closure's refcount
 */
static inline void closure_get(struct closure *cl)
{
#ifdef CONFIG_BCACHE_CLOSURES_DEBUG
	BUG_ON((atomic_inc_return(&cl->remaining) &
		CLOSURE_REMAINING_MASK) <= 1);
#else
	atomic_inc(&cl->remaining);
#endif
}

/**
 * closure_init - Initialize a closure, setting the refcount to 1
 * @cl:		closure to initialize
 * @parent:	parent of the new closure. cl will take a refcount on it for its
 *		lifetime; may be NULL.
 */
static inline void closure_init(struct closure *cl, struct closure *parent)
{
	memset(cl, 0, sizeof(struct closure));
	cl->parent = parent;
	if (parent)
		closure_get(parent);

	atomic_set(&cl->remaining, CLOSURE_REMAINING_INITIALIZER);

	closure_debug_create(cl);
	closure_set_ip(cl);
}

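/*
 * closure_init_stack - initialize an on-stack closure, which has no parent.
 * Such a closure must be waited on with closure_sync() before its stack
 * frame is unwound.
 */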
static inline void closure_init_stack(struct closure *cl)
{
	memset(cl, 0, sizeof(struct closure));
	atomic_set(&cl->remaining, CLOSURE_REMAINING_INITIALIZER);
}

/**
 * closure_wake_up - wake up all closures on a wait list,
 *		     with memory barrier
 */
static inline void closure_wake_up(struct closure_waitlist *list)
{
	/* Memory barrier for the wait list */
	smp_mb();
	__closure_wake_up(list);
}

/**
 * continue_at - jump to another function with barrier
 *
 * After @cl is no longer waiting on anything (i.e. all outstanding refs have
 * been dropped with closure_put()), it will resume execution at @fn running out
 * of @wq (or, if @wq is NULL, @fn will be called by closure_put() directly).
 *
 * Note you are expected to immediately return after using this macro. This is
 * because after calling continue_at() you no longer have a ref on @cl - a
 * running closure fn has a ref on its own closure, which continue_at() drops -
 * and whatever @cl owns may be freed out from under you.
 */
#define continue_at(_cl, _fn, _wq)					\
do {									\
	set_closure_fn(_cl, _fn, _wq);					\
	closure_sub(_cl, CLOSURE_RUNNING + 1);				\
} while (0)
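
/*
 * A sketch of typical use - struct my_op, stage2() and op->wq are
 * illustrative names, not part of this API:
 *
 *	static void stage1(struct closure *cl)
 *	{
 *		struct my_op *op = container_of(cl, struct my_op, cl);
 *
 *		closure_get(cl);
 *		op->bio->bi_endio = foo_endio;
 *		bio_submit(op->bio);
 *		continue_at(cl, stage2, op->wq);
 *	}
 *
 * Nothing may follow the continue_at(): stage2() could already be running,
 * so stage1() must return immediately (here, by falling off the end).
 */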

/**
 * closure_return - finish execution of a closure
 *
 * This is used to indicate that @cl is finished: when all outstanding refs on
 * @cl have been dropped @cl's ref on its parent closure (as passed to
 * closure_init()) will be dropped, if one was specified - thus this can be
 * thought of as returning to the parent closure.
 */
#define closure_return(_cl)	continue_at((_cl), NULL, NULL)

/**
 * continue_at_nobarrier - jump to another function without barrier
 *
 * Causes @fn to be executed out of @cl, in @wq context (or called directly if
 * @wq is NULL).
 *
 * The ref the caller of continue_at_nobarrier() had on @cl is now owned by @fn,
 * thus it's not safe to touch anything protected by @cl after a
 * continue_at_nobarrier().
 */
#define continue_at_nobarrier(_cl, _fn, _wq)				\
do {									\
	set_closure_fn(_cl, _fn, _wq);					\
	closure_queue(_cl);						\
} while (0)

/**
 * closure_return_with_destructor - finish execution of a closure,
 *				    with destructor
 *
 * Works like closure_return(), except @destructor will be called when all
 * outstanding refs on @cl have been dropped; @destructor may be used to safely
 * free the memory occupied by @cl, and it is called with the ref on the parent
 * closure still held - so @destructor could safely return an item to a
 * freelist protected by @cl's parent.
 */
#define closure_return_with_destructor(_cl, _destructor)		\
do {									\
	set_closure_fn(_cl, _destructor, NULL);				\
	closure_sub(_cl, CLOSURE_RUNNING - CLOSURE_DESTRUCTOR + 1);	\
} while (0)
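
/*
 * A sketch - struct my_op, my_op_free() and my_op_done() are illustrative
 * names, not part of this API. For a heap-allocated op embedding its
 * closure, the final step can free it safely:
 *
 *	static void my_op_free(struct closure *cl)
 *	{
 *		kfree(container_of(cl, struct my_op, cl));
 *	}
 *
 *	static void my_op_done(struct closure *cl)
 *	{
 *		closure_return_with_destructor(cl, my_op_free);
 *	}
 */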

/**
 * closure_call - execute @fn out of a new, uninitialized closure
 *
 * Typically used when running out of one closure, and we want to run @fn
 * asynchronously out of a new closure - @parent will then wait for @cl to
 * finish.
 */
static inline void closure_call(struct closure *cl, closure_fn fn,
				struct workqueue_struct *wq,
				struct closure *parent)
{
	closure_init(cl, parent);
	continue_at_nobarrier(cl, fn, wq);
}
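
/*
 * A sketch of closure_call() use - struct my_op and my_op_start() are
 * illustrative names, not part of this API:
 *
 *	closure_call(&op->cl, my_op_start, op->wq, &parent->cl);
 *
 * my_op_start() runs out of op->wq; when op->cl finishes (e.g. via
 * closure_return()), its ref on &parent->cl is dropped, letting the parent
 * proceed.
 */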

#endif /* _LINUX_CLOSURE_H */