.. _rcu_barrier:

RCU and Unloadable Modules
==========================

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.

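For example, a reader traversing an RCU-protected list might look as
follows. This is only a sketch: the mylist list head and the
do_something_with() function are hypothetical, but rcu_read_lock(),
rcu_read_unlock(), and list_for_each_entry_rcu() are the real
primitives::

	rcu_read_lock();
	list_for_each_entry_rcu(p, &mylist, list)
		do_something_with(p); /* Must not block in this critical section. */
	rcu_read_unlock();
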
This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course::

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows::

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows::

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}

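In more recent kernels, the common case in which the callback does
nothing but free the enclosing structure can use the kfree_rcu()
convenience macro instead, which takes the pointer and the name of the
rcu_head field, making a separate p_callback() unnecessary::

	list_del_rcu(p);
	kfree_rcu(p, rcu); /* Queues a kfree() of p after a grace period. */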

Unloading Modules That Use call_rcu()
-------------------------------------

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when those callbacks are later invoked, as fancifully
depicted at http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()
-------------

We instead need the rcu_barrier() primitive. Rather than waiting for
a grace period to elapse, rcu_barrier() waits for all outstanding RCU
callbacks to complete. Please note that rcu_barrier() does **not** imply
synchronize_rcu(); in particular, if there are no RCU callbacks queued
anywhere, rcu_barrier() is within its rights to return immediately,
without waiting for a grace period to elapse.

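If code needs both guarantees, one simple approach (a sketch, not a
dedicated kernel API) is to invoke both primitives in sequence::

	rcu_barrier();     /* Wait for all outstanding callbacks. */
	synchronize_rcu(); /* And also for a full grace period. */
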
Pseudo-code using rcu_barrier() is as follows (a sketch of a module-exit
function implementing these steps appears after the list):

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.

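The following sketch shows how these steps might look in a module-exit
function. The mymod_* names are hypothetical; step 1 stands in for
whatever mechanism a given module uses to stop posting new callbacks::

	static void __exit mymod_exit(void)
	{
		/* 1. Prevent any new RCU callbacks from being posted. */
		mymod_stop_posting_callbacks();

		/* 2. Wait for all previously posted callbacks to finish. */
		rcu_barrier();

		/* 3. Return, allowing the module to be unloaded. */
	}
	module_exit(mymod_exit);
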
There is also an srcu_barrier() function for SRCU, and you of course
must match the flavor of rcu_barrier() with that of call_rcu(). If your
module uses multiple flavors of call_rcu(), then it must also use multiple
flavors of rcu_barrier() when unloading that module. For example, if
it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on
srcu_struct_2, then the following three lines of code will be required
when unloading::

	rcu_barrier();
	srcu_barrier(&srcu_struct_1);
	srcu_barrier(&srcu_struct_2);

The rcutorture module makes use of rcu_barrier() in its exit function
as follows::

	 1 static void
	 2 rcu_torture_cleanup(void)
	 3 {
	 4   int i;
	 5
	 6   fullstop = 1;
	 7   if (shuffler_task != NULL) {
	 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
	 9     kthread_stop(shuffler_task);
	10   }
	11   shuffler_task = NULL;
	12
	13   if (writer_task != NULL) {
	14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
	15     kthread_stop(writer_task);
	16   }
	17   writer_task = NULL;
	18
	19   if (reader_tasks != NULL) {
	20     for (i = 0; i < nrealreaders; i++) {
	21       if (reader_tasks[i] != NULL) {
	22         VERBOSE_PRINTK_STRING(
	23           "Stopping rcu_torture_reader task");
	24         kthread_stop(reader_tasks[i]);
	25       }
	26       reader_tasks[i] = NULL;
	27     }
	28     kfree(reader_tasks);
	29     reader_tasks = NULL;
	30   }
	31   rcu_torture_current = NULL;
	32
	33   if (fakewriter_tasks != NULL) {
	34     for (i = 0; i < nfakewriters; i++) {
	35     	if (fakewriter_tasks[i] != NULL) {
	36         VERBOSE_PRINTK_STRING(
	37           "Stopping rcu_torture_fakewriter task");
	38         kthread_stop(fakewriter_tasks[i]);
	39       }
	40       fakewriter_tasks[i] = NULL;
	41     }
	42     kfree(fakewriter_tasks);
	43     fakewriter_tasks = NULL;
	44   }
	45
	46   if (stats_task != NULL) {
	47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
	48     kthread_stop(stats_task);
	49   }
	50   stats_task = NULL;
	51
	52   /* Wait for all RCU callbacks to fire. */
	53   rcu_barrier();
	54
	55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
	56
	57   if (cur_ops->cleanup != NULL)
	58     cur_ops->cleanup();
	59   if (atomic_read(&n_rcu_torture_error))
	60     rcu_torture_print_module_parms("End of test: FAILURE");
	61   else
	62     rcu_torture_print_module_parms("End of test: SUCCESS");
	63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Lines 55-62 then print status and do operation-specific cleanup before
returning, permitting the module-unload operation to complete.

.. _rcubarrier_quiz_1:

Quick Quiz #1:
	Is there any other situation where rcu_barrier() might
	be required?

:ref:`Answer to Quick Quiz #1 <answer_rcubarrier_quiz_1>`

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.

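For example, if a hypothetical mymod_timer posts RCU callbacks from its
handler, the shutdown path might look like the following sketch, in which
del_timer_sync() guarantees that the handler is no longer running (and
cannot restart) before the barrier is invoked::

	static struct timer_list mymod_timer; /* Hypothetical timer. */

	static void mymod_shutdown(void)
	{
		/* After this returns, the handler can post no new callbacks. */
		del_timer_sync(&mymod_timer);

		/* Wait for callbacks that the handler already posted. */
		rcu_barrier();
	}
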
Of course, if your module uses call_rcu(), you will need to invoke
rcu_barrier() before unloading. Similarly, if your module uses
call_srcu(), you will need to invoke srcu_barrier() before unloading,
and on the same srcu_struct structure. If your module uses call_rcu()
**and** call_srcu(), then you will need to invoke rcu_barrier() **and**
srcu_barrier().


Implementing rcu_barrier()
--------------------------

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows::

	 1 void rcu_barrier(void)
	 2 {
	 3   BUG_ON(in_interrupt());
	 4   /* Take cpucontrol mutex to protect against CPU hotplug */
	 5   mutex_lock(&rcu_barrier_mutex);
	 6   init_completion(&rcu_barrier_completion);
	 7   atomic_set(&rcu_barrier_cpu_count, 0);
	 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
	 9   wait_for_completion(&rcu_barrier_completion);
	10   mutex_unlock(&rcu_barrier_mutex);
	11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() at a time is
using the global completion and counter, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 and several times thereafter, but this
still gives the general idea.

The rcu_barrier_func() function runs on each CPU, where it invokes
call_rcu() to post an RCU callback, as follows::

	 1 static void rcu_barrier_func(void *notused)
	 2 {
	 3   int cpu = smp_processor_id();
	 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
	 5   struct rcu_head *head;
	 6
	 7   head = &rdp->barrier;
	 8   atomic_inc(&rcu_barrier_cpu_count);
	 9   call_rcu(head, rcu_barrier_callback);
	10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows::

	static void rcu_barrier_callback(struct rcu_head *notused)
	{
		if (atomic_dec_and_test(&rcu_barrier_cpu_count))
			complete(&rcu_barrier_completion);
	}

.. _rcubarrier_quiz_2:

Quick Quiz #2:
	What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

:ref:`Answer to Quick Quiz #2 <answer_rcubarrier_quiz_2>`

The current rcu_barrier() implementation is more complex, due to the need
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
However, the code above illustrates the concepts.


rcu_barrier() Summary
---------------------

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes
------------------------

.. _answer_rcubarrier_quiz_1:

Quick Quiz #1:
	Is there any other situation where rcu_barrier() might
	be required?

Answer: Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

:ref:`Back to Quick Quiz #1 <rcubarrier_quiz_1>`

.. _answer_rcubarrier_quiz_2:

Quick Quiz #2:
	What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPT kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPT kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). This prevents the local CPU from context
	switching, again preventing grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	Currently, -rt implementations of RCU keep but a single global
	queue for RCU callbacks, and thus do not suffer from this
	problem. However, when the -rt RCU eventually does have per-CPU
	callback queues, things will have to change. One simple change
	is to add an rcu_read_lock() before line 8 of rcu_barrier()
	and an rcu_read_unlock() after line 8 of this same function. If
	you can think of a better change, please let me know!

:ref:`Back to Quick Quiz #2 <rcubarrier_quiz_2>`