================================================
Completions - "wait for completion" barrier APIs
================================================

Introduction:
-------------

If you have one or more threads that must wait for some kernel activity
to have reached a point or a specific state, completions can provide a
race-free solution to this problem. Semantically they are somewhat like a
pthread barrier and have similar use-cases.

Completions are a code synchronization mechanism which is preferable to any
misuse of locks/semaphores and busy-loops. Any time you think of using
yield() or some quirky msleep(1) loop to allow something else to proceed,
you probably want to look into using one of the wait_for_completion*()
calls and complete() instead.
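
For illustration, a minimal, hypothetical sketch (the foo_* names and the
'ready' flag are made up) contrasting a polling loop with a completion based
wait::

  /* Polling: burns timer wakeups and needs a hand-rolled 'ready' flag. */
  static void foo_wait_polling(struct foo_device *foo)
  {
          while (!READ_ONCE(foo->ready))
                  msleep(1);
  }

  /* Completion: the waiter simply sleeps until complete(&foo->ready_done). */
  static void foo_wait_completion(struct foo_device *foo)
  {
          wait_for_completion(&foo->ready_done);
  }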

The advantage of using completions is that they have a well defined, focused
purpose, which makes it very easy to see the intent of the code. They also
result in more efficient code, as all threads can continue execution until
the result is actually needed, and both the waiting and the signalling are
highly efficient, using low level scheduler sleep/wakeup facilities.

Completions are built on top of the waitqueue and wakeup infrastructure of
the Linux scheduler. The event the threads on the waitqueue are waiting for
is reduced to a simple flag in 'struct completion', appropriately called "done".

As completions are scheduling related, the code can be found in
kernel/sched/completion.c.


Usage:
------

There are three main parts to using completions:

 - the initialization of the 'struct completion' synchronization object,
 - the waiting part through a call to one of the variants of wait_for_completion(),
 - the signaling side through a call to complete() or complete_all().

There are also some helper functions for checking the state of completions.
Note that while initialization must happen first, the waiting and signaling
parts can happen in any order, i.e. it's entirely normal for a thread
to have marked a completion as 'done' before another thread checks whether
it has to wait for it.

To use completions you need to #include <linux/completion.h> and
create a static or dynamic variable of type 'struct completion',
which has only two fields::

  struct completion {
          unsigned int done;
          wait_queue_head_t wait;
  };

This provides the ->wait waitqueue to place tasks on for waiting (if any), and
the ->done completion flag for indicating whether it's completed or not.

Completions should be named to refer to the event that is being synchronized on.
A good example is::

  wait_for_completion(&early_console_added);

  complete(&early_console_added);

Good, intuitive naming (as always) helps code readability. Naming a completion
'complete' is not helpful unless the purpose is super obvious...


Initializing completions:
-------------------------

Dynamically allocated completion objects should preferably be embedded in data
structures that are assured to be alive for the life-time of the function/driver,
to prevent races with asynchronous complete() calls from occurring.

Particular care should be taken when using the _timeout() or _killable()/_interruptible()
variants of wait_for_completion(), as it must be assured that memory de-allocation
does not happen until all related activities (complete() or reinit_completion())
have taken place, even if these wait functions return prematurely due to a timeout
or a signal triggering.

Initialization of dynamically allocated completion objects is done via a call to
init_completion()::

  init_completion(&dynamic_object->done);

In this call we initialize the waitqueue and set ->done to 0, i.e. "not completed"
or "not done".
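
As an illustration of the advice above, a minimal sketch (the foo_* driver
names are hypothetical) of a completion embedded in a long-lived, dynamically
allocated object and initialized at probe time::

  struct foo_device {
          struct completion calibration_done;
          /* ... other long-lived driver state ... */
  };

  static int foo_probe(struct platform_device *pdev)
  {
          struct foo_device *foo;

          /* devm_kzalloc() ties the allocation's life time to the device. */
          foo = devm_kzalloc(&pdev->dev, sizeof(*foo), GFP_KERNEL);
          if (!foo)
                  return -ENOMEM;

          init_completion(&foo->calibration_done);
          platform_set_drvdata(pdev, foo);

          return 0;
  }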

The re-initialization function, reinit_completion(), simply resets the
->done field to 0 ("not done"), without touching the waitqueue.
Callers of this function must make sure that there are no racy
wait_for_completion() calls going on in parallel.

Calling init_completion() on the same completion object twice is
most likely a bug as it re-initializes the queue to an empty queue and
enqueued tasks could get "lost" - use reinit_completion() in that case,
but be aware of other races.
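
A minimal sketch of reusing such an embedded completion for repeated,
serialized requests (foo_hw_start_calibration() is a hypothetical helper
whose interrupt handler ends up calling complete())::

  static void foo_recalibrate(struct foo_device *foo)
  {
          /* Callers are serialized, so no waiter can be pending here. */
          reinit_completion(&foo->calibration_done);

          foo_hw_start_calibration(foo);

          wait_for_completion(&foo->calibration_done);
  }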

For static declaration and initialization, macros are available.

For static (or global) declarations in file scope you can use
DECLARE_COMPLETION()::

  static DECLARE_COMPLETION(setup_done);
  DECLARE_COMPLETION(setup_done);

Note that in this case the completion is boot time (or module load time)
initialized to 'not done' and doesn't require an init_completion() call.

When a completion is declared as a local variable within a function,
then the initialization should always use DECLARE_COMPLETION_ONSTACK()
explicitly, not just to make lockdep happy, but also to make it clear
that limited scope has been considered and is intentional::

  DECLARE_COMPLETION_ONSTACK(setup_done);

Note that when using completion objects as local variables you must be
acutely aware of the short life time of the function stack: the function
must not return to a calling context until all activities (such as waiting
threads) have ceased and the completion object is completely unused.

To emphasise this again: in particular when using some of the waiting API variants
with more complex outcomes, such as the timeout or signalling (_timeout(),
_killable() and _interruptible()) variants, the wait might complete
prematurely while the object might still be in use by another thread - and a return
from the wait_for_completion*() caller function will deallocate the function
stack and cause subtle data corruption if a complete() is done in some
other thread. Simple testing might not trigger these kinds of races.
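
For illustration, a sketch of a correct on-stack usage, where the function is
guaranteed not to return before the completion has been signalled (everything
apart from the completion API is hypothetical)::

  static void foo_reset_and_wait(struct foo_device *foo)
  {
          DECLARE_COMPLETION_ONSTACK(reset_done);

          /* The reset IRQ handler will call complete(foo->reset_done). */
          foo->reset_done = &reset_done;
          foo_hw_trigger_reset(foo);

          /*
           * A plain wait_for_completion() cannot return before complete()
           * is finished with the object, so the on-stack variable is safe
           * here. A _timeout()/_interruptible() variant returning early
           * would leave a dangling pointer to freed stack memory behind.
           */
          wait_for_completion(&reset_done);
          foo->reset_done = NULL;
  }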

If unsure, use dynamically allocated completion objects, preferably embedded
in some other long lived object that has a boringly long life time which
exceeds the life time of any helper threads using the completion object,
or has a lock or other synchronization mechanism to make sure complete()
is not called on a freed object.

A naive DECLARE_COMPLETION() on the stack triggers a lockdep warning.

Waiting for completions:
------------------------

For a thread to wait for some concurrent activity to finish, it
calls wait_for_completion() on the initialized completion structure::

  void wait_for_completion(struct completion *done)

A typical usage scenario is::

  CPU#1                                  CPU#2

  struct completion setup_done;

  init_completion(&setup_done);
  initialize_work(..., &setup_done, ...);

  /* run non-dependent code */           /* do setup */

  wait_for_completion(&setup_done);      complete(setup_done);

This does not imply any particular order between wait_for_completion() and
the call to complete() - if the call to complete() happened before the call
to wait_for_completion() then the waiting side simply will continue
immediately as all dependencies are satisfied; if not, it will block until
completion is signaled by complete().
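
A concrete, hypothetical version of the scenario above, with a work item
playing the role of CPU#2 (the setup_work names are made up)::

  struct setup_work {
          struct work_struct work;
          struct completion setup_done;
  };

  static void setup_work_fn(struct work_struct *work)
  {
          struct setup_work *sw = container_of(work, struct setup_work, work);

          /* do setup */

          complete(&sw->setup_done);
  }

  static void run_setup(struct setup_work *sw)
  {
          init_completion(&sw->setup_done);
          INIT_WORK(&sw->work, setup_work_fn);
          schedule_work(&sw->work);

          /* run non-dependent code */

          wait_for_completion(&sw->setup_done);
  }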

Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(),
so it can only be called safely when you know that interrupts are enabled.
Calling it from IRQs-off atomic contexts will result in hard-to-detect
spurious enabling of interrupts.

The default behavior is to wait without a timeout and to mark the task as
uninterruptible. wait_for_completion() and its variants are only safe
in process context (as they can sleep) but not in atomic context, interrupt
context, with IRQs disabled, or with preemption disabled - see also
try_wait_for_completion() below for handling completion in atomic/interrupt
context.

All variants of wait_for_completion() can (obviously) block for a long
time depending on the nature of the activity they are waiting for, so in
most cases you probably don't want to call this with mutexes held.


wait_for_completion*() variants available:
------------------------------------------

The below variants all return a status and this status should be checked in
most (if not all) cases - in cases where the status is deliberately not checked
you probably want to make a note explaining this (e.g. see
arch/arm/kernel/smp.c:__cpu_up()).

A common problem is assigning the return value to a variable of the wrong
type, so take care to check the signatures below and assign return values
to variables of the proper type.

Checking the return value for its specific meaning is also frequently done
incorrectly, e.g. constructs like::

  if (!wait_for_completion_interruptible_timeout(...))

... would execute the same code path for successful completion and for the
interrupted case - which is probably not what you want::

  int wait_for_completion_interruptible(struct completion *done)

This function marks the task TASK_INTERRUPTIBLE while it is waiting.
If a signal was received while waiting it will return -ERESTARTSYS; 0 otherwise::

  unsigned long wait_for_completion_timeout(struct completion *done, unsigned long timeout)

The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
jiffies. If a timeout occurs it returns 0, else the remaining time in
jiffies (but at least 1).

Timeouts are preferably calculated with msecs_to_jiffies() or usecs_to_jiffies(),
to make the code largely HZ-invariant.
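
For illustration, a minimal sketch (the foo_* names are hypothetical) of a
_timeout() wait with an HZ-invariant timeout and a checked return value::

  static int foo_wait_ready(struct foo_device *foo)
  {
          unsigned long left;

          left = wait_for_completion_timeout(&foo->ready_done,
                                             msecs_to_jiffies(50));
          if (!left)
                  return -ETIMEDOUT;      /* 0 means the wait timed out */

          /* 'left' is the remaining time in jiffies (at least 1). */
          return 0;
  }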

If the returned timeout value is deliberately ignored a comment should probably explain
why (e.g. see drivers/mfd/wm8350-core.c wm8350_read_auxadc())::

  long wait_for_completion_interruptible_timeout(struct completion *done, unsigned long timeout)

This function passes a timeout in jiffies and marks the task as
TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
otherwise it returns 0 if the completion timed out, or the remaining time in
jiffies if completion occurred.
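
Putting the earlier warning about return values into practice, a minimal,
hypothetical sketch distinguishing all three possible outcomes::

  static int foo_wait_for_event(struct foo_device *foo)
  {
          long ret;

          ret = wait_for_completion_interruptible_timeout(&foo->event_done,
                                                          msecs_to_jiffies(100));
          if (ret < 0)            /* interrupted by a signal */
                  return ret;     /* -ERESTARTSYS */
          if (ret == 0)           /* the wait timed out */
                  return -ETIMEDOUT;

          return 0;               /* completed; 'ret' is the time left in jiffies */
  }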

Further variants include _killable which uses TASK_KILLABLE as the
designated task state and will return -ERESTARTSYS if it is interrupted,
or 0 if completion was achieved. There is a _timeout variant as well::

  long wait_for_completion_killable(struct completion *done)
  long wait_for_completion_killable_timeout(struct completion *done, unsigned long timeout)

The _io variants, wait_for_completion_io() and wait_for_completion_io_timeout(),
behave the same as the non-_io variants, except that the waiting time is
accounted as 'waiting on IO', which has an impact on how the task is accounted
in scheduling/IO stats::

  void wait_for_completion_io(struct completion *done)
  unsigned long wait_for_completion_io_timeout(struct completion *done, unsigned long timeout)


Signaling completions:
----------------------

A thread that wants to signal that the conditions for continuation have been
achieved calls complete() to signal exactly one of the waiters that it can
continue::

  void complete(struct completion *done)

... or calls complete_all() to signal all current and future waiters::

  void complete_all(struct completion *done)

The signaling will work as expected even if completions are signaled before
a thread starts waiting. This is achieved by the waiter "consuming"
(decrementing) the done field of 'struct completion'. Waiting threads are
woken up in the same order in which they were enqueued (FIFO order).

If complete() is called multiple times then this will allow for that number
of waiters to continue - each call to complete() will simply increment the
done field. Calling complete_all() multiple times is a bug though. Both
complete() and complete_all() can be called in IRQ/atomic context safely.

Only one thread may call complete() or complete_all() on a particular
'struct completion' at any time - the calls are serialized through the wait
queue spinlock, but any such concurrent calls to complete() or complete_all()
are probably a design bug.

Signaling completion from IRQ context is fine as it will appropriately
lock with spin_lock_irqsave()/spin_unlock_irqrestore() and it will never
sleep.
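
For illustration, a hypothetical sketch (the foo_* names are made up) of
signaling a completion from a hard-IRQ handler::

  static irqreturn_t foo_irq_handler(int irq, void *data)
  {
          struct foo_device *foo = data;

          if (!foo_irq_is_ours(foo))
                  return IRQ_NONE;

          foo_ack_irq(foo);

          /* complete() never sleeps and is safe in hard-IRQ context. */
          complete(&foo->cmd_done);

          return IRQ_HANDLED;
  }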


try_wait_for_completion()/completion_done():
--------------------------------------------

The try_wait_for_completion() function will not put the thread on the wait
queue but rather returns false if it would need to enqueue (block) the thread,
else it consumes one posted completion and returns true::

  bool try_wait_for_completion(struct completion *done)

Finally, to check the state of a completion without changing it in any way,
call completion_done(), which returns false if there are no posted
completions that have not yet been consumed by waiters (i.e. a wait would
have to block), and true otherwise::

  bool completion_done(struct completion *done)

Both try_wait_for_completion() and completion_done() are safe to be called in
IRQ or atomic context.
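
For illustration, a hypothetical sketch of using try_wait_for_completion()
from a context that must not sleep, here a timer callback (the foo_* names
are made up)::

  static void foo_poll_timer_fn(struct timer_list *t)
  {
          struct foo_device *foo = from_timer(foo, t, poll_timer);

          if (try_wait_for_completion(&foo->hw_ready_done)) {
                  /* One posted completion consumed; start the next step. */
                  foo_start_transfer(foo);
          } else {
                  /* Would have to block - check again later instead. */
                  mod_timer(&foo->poll_timer, jiffies + msecs_to_jiffies(10));
          }
  }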