=================================================
A Tour Through TREE_RCU's Expedited Grace Periods
=================================================

Introduction
============

This document describes RCU's expedited grace periods.
Unlike RCU's normal grace periods, which accept long latencies to attain
high efficiency and minimal disturbance, expedited grace periods accept
lower efficiency and significant disturbance to attain shorter latencies.

There are two flavors of RCU (RCU-preempt and RCU-sched), with an earlier
third RCU-bh flavor having been implemented in terms of the other two.
Each of the two implementations is covered in its own section.
Expedited Grace Period Design
=============================

The expedited RCU grace periods cannot be accused of being subtle,
given that they for all intents and purposes hammer every CPU that
has not yet provided a quiescent state for the current expedited
grace period.
The one saving grace is that the hammer has grown a bit smaller
over time: The old call to ``try_stop_cpus()`` has been
replaced with a set of calls to ``smp_call_function_single()``,
each of which results in an IPI to the target CPU.
The corresponding handler function checks the CPU's state, motivating
a faster quiescent state where possible, and triggering a report
of that quiescent state.
As always for RCU, once everything has spent some time in a quiescent
state, the expedited grace period has completed.

The details of the ``smp_call_function_single()`` handler's
operation depend on the RCU flavor, as described in the following
sections.

RCU-preempt Expedited Grace Periods
===================================

``CONFIG_PREEMPT=y`` kernels implement RCU-preempt.
The overall flow of the handling of a given CPU by an RCU-preempt
expedited grace period is shown in the following diagram:

.. kernel-figure:: ExpRCUFlow.svg

The solid arrows denote direct action, for example, a function call.
The dotted arrows denote indirect action, for example, an IPI
or a state that is reached after some time.

If a given CPU is offline or idle, ``synchronize_rcu_expedited()``
will ignore it because idle and offline CPUs are already residing
in quiescent states.
Otherwise, the expedited grace period will use
``smp_call_function_single()`` to send the CPU an IPI, which
is handled by ``rcu_exp_handler()``.

However, because this is preemptible RCU, ``rcu_exp_handler()``
can check to see if the CPU is currently running in an RCU read-side
critical section.
If not, the handler can immediately report a quiescent state.
Otherwise, it sets flags so that the outermost ``rcu_read_unlock()``
invocation will provide the needed quiescent-state report.
This flag-setting avoids the previous forced preemption of all
CPUs that might have RCU read-side critical sections.
In addition, this flag-setting is done so as to avoid increasing
the overhead of the common-case fastpath through the scheduler.
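
The report-now-or-defer decision can be illustrated with a simplified
user-space sketch. The ``struct task_sketch`` fields and ``sketch_*``
function names are hypothetical stand-ins for the kernel's per-task state
(the kernel uses ``->rcu_read_lock_nesting`` and related fields, and the
real handler must also cope with interrupts and memory ordering):

```c
#include <stdbool.h>

/* Hypothetical per-task state modeling the deferred QS report. */
struct task_sketch {
	int nesting;      /* read-side critical-section depth */
	bool exp_need_qs; /* set by the IPI handler when in a critical section */
	bool qs_reported; /* quiescent state reported for this task */
};

static void sketch_read_lock(struct task_sketch *t)
{
	t->nesting++;
}

static void sketch_read_unlock(struct task_sketch *t)
{
	/* Only the outermost unlock reports a deferred quiescent state. */
	if (--t->nesting == 0 && t->exp_need_qs) {
		t->exp_need_qs = false;
		t->qs_reported = true;
	}
}

/* Sketch of the IPI handler's decision: report immediately when the
 * task is not in a critical section, otherwise defer the report to
 * the outermost unlock. */
static void sketch_exp_handler(struct task_sketch *t)
{
	if (t->nesting == 0)
		t->qs_reported = true;
	else
		t->exp_need_qs = true;
}
```

Note that the flag costs nothing on the unlock fastpath until the IPI
actually sets it, which is the point made above about the scheduler's
common-case fastpath.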

Again because this is preemptible RCU, an RCU read-side critical section
can be preempted.
When that happens, RCU will enqueue the task, which will then continue to
block the current expedited grace period until it resumes and finds its
outermost ``rcu_read_unlock()``.
The CPU will report a quiescent state just after enqueuing the task because
the CPU is no longer blocking the grace period.
It is instead the preempted task doing the blocking.
The list of blocked tasks is managed by ``rcu_preempt_ctxt_queue()``,
which is called from ``rcu_preempt_note_context_switch()``, which
in turn is called from ``rcu_note_context_switch()``, which in
turn is called from the scheduler.


+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why not just have the expedited grace period check the state of all   |
| the CPUs? After all, that would avoid all those real-time-unfriendly  |
| IPIs.                                                                 |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because we want the RCU read-side critical sections to run fast,      |
| which means no memory barriers. Therefore, it is not possible to      |
| safely check the state from some other CPU. And even if it were       |
| possible to safely check the state, it would still be necessary to    |
| IPI the CPU to safely interact with the upcoming                      |
| ``rcu_read_unlock()`` invocation, which means that the remote state   |
| testing would not help the worst-case latency that real-time          |
| applications care about.                                              |
|                                                                       |
| One way to prevent your real-time application from getting hit with   |
| these IPIs is to build your kernel with ``CONFIG_NO_HZ_FULL=y``. RCU  |
| would then perceive the CPU running your application as being idle,   |
| and it would be able to safely detect that state without needing to   |
| IPI the CPU.                                                          |
+-----------------------------------------------------------------------+

Please note that this is just the overall flow: Additional complications
can arise due to races with CPUs going idle or offline, among other
things.

RCU-sched Expedited Grace Periods
=================================

``CONFIG_PREEMPT=n`` kernels implement RCU-sched. The overall flow of
the handling of a given CPU by an RCU-sched expedited grace period is
shown in the following diagram:

.. kernel-figure:: ExpSchedFlow.svg

As with RCU-preempt, RCU-sched's ``synchronize_rcu_expedited()`` ignores
offline and idle CPUs, again because they are in remotely detectable
quiescent states. However, because ``rcu_read_lock_sched()`` and
``rcu_read_unlock_sched()`` leave no trace of their invocation, in
general it is not possible to tell whether or not the current CPU is in
an RCU read-side critical section. The best that RCU-sched's
``rcu_exp_handler()`` can do is to check for idle, on the off-chance
that the CPU went idle while the IPI was in flight. If the CPU is idle,
then ``rcu_exp_handler()`` reports the quiescent state.

Otherwise, the handler forces a future context switch by setting the
NEED_RESCHED flag in the current task's thread-info flags and
incrementing the CPU's preempt counter. At the time of the context
switch, the CPU reports the quiescent state. Should the CPU go offline
first, it will report the quiescent state at that time.

Expedited Grace Period and CPU Hotplug
======================================

The expedited nature of expedited grace periods requires a much tighter
interaction with CPU hotplug operations than is required for normal
grace periods. In addition, attempting to IPI offline CPUs will result
in splats, but failing to IPI online CPUs can result in too-short grace
periods. Neither option is acceptable in production kernels.

The interaction between expedited grace periods and CPU hotplug
operations is carried out at several levels:

#. The number of CPUs that have ever been online is tracked by the
   ``rcu_state`` structure's ``->ncpus`` field. The ``rcu_state``
   structure's ``->ncpus_snap`` field tracks the number of CPUs that
   have ever been online at the beginning of an RCU expedited grace
   period. Note that this number never decreases, at least in the
   absence of a time machine.
#. The identities of the CPUs that have ever been online are tracked by
   the ``rcu_node`` structure's ``->expmaskinitnext`` field. The
   ``rcu_node`` structure's ``->expmaskinit`` field tracks the
   identities of the CPUs that were online at least once at the
   beginning of the most recent RCU expedited grace period. The
   ``rcu_state`` structure's ``->ncpus`` and ``->ncpus_snap`` fields are
   used to detect when new CPUs have come online for the first time,
   that is, when the ``rcu_node`` structure's ``->expmaskinitnext``
   field has changed since the beginning of the last RCU expedited grace
   period, which triggers an update of each ``rcu_node`` structure's
   ``->expmaskinit`` field from its ``->expmaskinitnext`` field.
#. Each ``rcu_node`` structure's ``->expmaskinit`` field is used to
   initialize that structure's ``->expmask`` at the beginning of each
   RCU expedited grace period. This means that only those CPUs that have
   been online at least once will be considered for a given grace
   period.
#. Any CPU that goes offline will clear its bit in its leaf ``rcu_node``
   structure's ``->qsmaskinitnext`` field, so any CPU with that bit
   clear can safely be ignored. However, it is possible for a CPU coming
   online or going offline to have this bit set for some time while
   ``cpu_online()`` returns ``false``.
#. For each non-idle CPU that RCU believes is currently online, the
   grace period invokes ``smp_call_function_single()``. If this
   succeeds, the CPU was fully online. Failure indicates that the CPU is
   in the process of coming online or going offline, in which case it is
   necessary to wait for a short time period and try again. The purpose
   of this wait (or series of waits, as the case may be) is to permit a
   concurrent CPU-hotplug operation to complete.
#. In the case of RCU-sched, one of the last acts of an outgoing CPU is
   to invoke ``rcu_report_dead()``, which reports a quiescent state for
   that CPU. However, this is likely paranoia-induced redundancy.
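
The interplay of ``->ncpus``, ``->ncpus_snap``, ``->expmaskinit``, and
``->expmaskinitnext`` in steps 1-3 can be sketched in miniature. This is
a simplified single-node model with hypothetical ``sketch_*`` names, not
the kernel's code, which performs this update for each ``rcu_node``
structure under the appropriate locks:

```c
/* Simplified single-node model of the fields described above. */
struct exp_snapshot {
	int ncpus;                     /* CPUs that have ever been online */
	int ncpus_snap;                /* ->ncpus as of the last expedited GP */
	unsigned long expmaskinit;     /* seeds ->expmask at GP start */
	unsigned long expmaskinitnext; /* updated as CPUs first come online */
};

/* Called when a CPU comes online for the first time ever. */
static void sketch_cpu_first_online(struct exp_snapshot *s, int cpu)
{
	s->expmaskinitnext |= 1UL << cpu;
	s->ncpus++;
}

/* At the start of an expedited grace period: refresh ->expmaskinit
 * only if some CPU has come online since the previous grace period. */
static void sketch_exp_gp_init(struct exp_snapshot *s)
{
	if (s->ncpus != s->ncpus_snap) {
		s->ncpus_snap = s->ncpus;
		s->expmaskinit = s->expmaskinitnext;
	}
}
```

Because ``->ncpus`` never decreases, a simple inequality test suffices to
detect first-time-online CPUs, with no per-CPU scanning in the common
case where nothing has changed.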

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| Why all the dancing around with multiple counters and masks tracking  |
| CPUs that were once online? Why not just have a single set of masks   |
| tracking the currently online CPUs and be done with it?               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Maintaining a single set of masks tracking the online CPUs *sounds*   |
| easier, at least until you try working out all the race conditions    |
| between grace-period initialization and CPU-hotplug operations. For   |
| example, suppose initialization is progressing down the tree while a  |
| CPU-offline operation is progressing up the tree. This situation can  |
| result in bits set at the top of the tree that have no counterparts   |
| at the bottom of the tree. Those bits will never be cleared, which    |
| will result in grace-period hangs. In short, that way lies madness,   |
| to say nothing of a great many bugs, hangs, and deadlocks.            |
| In contrast, the current multi-mask multi-counter scheme ensures that |
| grace-period initialization will always see consistent masks up and   |
| down the tree, which brings significant simplifications over the      |
| single-mask method.                                                   |
|                                                                       |
| This is an instance of `deferring work in order to avoid              |
| synchronization <http://www.cs.columbia.edu/~library/TR-repository/re |
| ports/reports-1992/cucs-039-92.ps.gz>`__.                             |
| Lazily recording CPU-hotplug events at the beginning of the next      |
| grace period greatly simplifies maintenance of the CPU-tracking       |
| bitmasks in the ``rcu_node`` tree.                                    |
+-----------------------------------------------------------------------+

Expedited Grace Period Refinements
==================================

Idle-CPU Checks
~~~~~~~~~~~~~~~

Each expedited grace period checks for idle CPUs when initially forming
the mask of CPUs to be IPIed and again just before IPIing a CPU (both
checks are carried out by ``sync_rcu_exp_select_cpus()``). If the CPU is
idle at any time between those two times, the CPU will not be IPIed.
Instead, the task pushing the grace period forward will include the idle
CPUs in the mask passed to ``rcu_report_exp_cpu_mult()``.

For RCU-sched, there is an additional check: If the IPI has interrupted
the idle loop, then ``rcu_exp_handler()`` invokes
``rcu_report_exp_rdp()`` to report the corresponding quiescent state.

For RCU-preempt, there is no specific check for idle in the IPI handler
(``rcu_exp_handler()``), but because RCU read-side critical sections are
not permitted within the idle loop, if ``rcu_exp_handler()`` sees that
the CPU is within an RCU read-side critical section, the CPU cannot
possibly be idle. Otherwise, ``rcu_exp_handler()`` invokes
``rcu_report_exp_rdp()`` to report the corresponding quiescent state,
regardless of whether or not that quiescent state was due to the CPU
being idle.

In summary, RCU expedited grace periods check for idle when building the
bitmask of CPUs that must be IPIed, just before sending each IPI, and
(either explicitly or implicitly) within the IPI handler.

Batching via Sequence Counter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If each grace-period request were carried out separately, expedited grace
periods would have abysmal scalability and problematic high-load
characteristics. Because each grace-period operation can serve an
unlimited number of updates, it is important to *batch* requests, so
that a single expedited grace-period operation will cover all requests
in the corresponding batch.

This batching is controlled by a sequence counter named
``->expedited_sequence`` in the ``rcu_state`` structure. This counter
has an odd value when there is an expedited grace period in progress and
an even value otherwise, so that dividing the counter value by two gives
the number of completed grace periods. During any given update request,
the counter must transition from even to odd and then back to even, thus
indicating that a grace period has elapsed. Therefore, if the initial
value of the counter is ``s``, the updater must wait until the counter
reaches at least the value ``(s+3)&~0x1``. This counter is managed by
the following access functions:

#. ``rcu_exp_gp_seq_start()``, which marks the start of an expedited
   grace period.
#. ``rcu_exp_gp_seq_end()``, which marks the end of an expedited grace
   period.
#. ``rcu_exp_gp_seq_snap()``, which obtains a snapshot of the counter.
#. ``rcu_exp_gp_seq_done()``, which returns ``true`` if a full expedited
   grace period has elapsed since the corresponding call to
   ``rcu_exp_gp_seq_snap()``.

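The ``(s+3)&~0x1`` arithmetic can be checked with a short sketch. These
helpers are simplified stand-ins with hypothetical names for the logic
behind ``rcu_exp_gp_seq_snap()`` and ``rcu_exp_gp_seq_done()``, ignoring
the kernel's handling of counter wrap:

```c
#include <stdbool.h>

/* Snapshot: the smallest counter value that guarantees a full
 * expedited grace period has elapsed after this moment. If the
 * counter s is even (no GP in progress), that value is s+2; if s is
 * odd (a GP is in progress that may not cover this request), it is
 * s+3. Both cases reduce to (s+3)&~0x1. */
static unsigned long sketch_exp_seq_snap(unsigned long s)
{
	return (s + 3) & ~0x1UL;
}

/* Has the counter reached the snapshot value? */
static bool sketch_exp_seq_done(unsigned long seq, unsigned long snap)
{
	return seq >= snap;
}
```

For example, a snapshot taken at an even value of 0 is satisfied at
counter value 2 (one full start/end transition), whereas a snapshot
taken at an odd value of 1 must wait for counter value 4, because the
in-progress grace period cannot be trusted to cover the new request.
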
Again, only one request in a given batch need actually carry out a
grace-period operation, which means there must be an efficient way to
identify which of many concurrent requests will initiate the grace
period, and an efficient way for the remaining requests to wait for
that grace period to complete. However, that is the topic of the next
section.

Funnel Locking and Wait/Wakeup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The natural way to sort out which of a batch of updaters will initiate
the expedited grace period is to use the ``rcu_node`` combining tree, as
implemented by the ``exp_funnel_lock()`` function. The first updater
corresponding to a given grace period arriving at a given ``rcu_node``
structure records its desired grace-period sequence number in the
``->exp_seq_rq`` field and moves up to the next level in the tree.
Otherwise, if the ``->exp_seq_rq`` field already contains the sequence
number for the desired grace period or some later one, the updater
blocks on one of four wait queues in the ``->exp_wq[]`` array, using the
second-from-bottom and third-from-bottom bits as an index. An
``->exp_lock`` field in the ``rcu_node`` structure synchronizes access
to these fields.

An empty ``rcu_node`` tree is shown in the following diagram, with the
white cells representing the ``->exp_seq_rq`` field and the red cells
representing the elements of the ``->exp_wq[]`` array.

.. kernel-figure:: Funnel0.svg

The next diagram shows the situation after the arrival of Task A and
Task B at the leftmost and rightmost leaf ``rcu_node`` structures,
respectively. The current value of the ``rcu_state`` structure's
``->expedited_sequence`` field is zero, so adding three and clearing the
bottom bit results in the value two, which both tasks record in the
``->exp_seq_rq`` field of their respective ``rcu_node`` structures:

.. kernel-figure:: Funnel1.svg

Each of Tasks A and B will move up to the root ``rcu_node`` structure.
Suppose that Task A wins, recording its desired grace-period sequence
number and resulting in the state shown below:

.. kernel-figure:: Funnel2.svg

Task A now advances to initiate a new grace period, while Task B moves
up to the root ``rcu_node`` structure, and, seeing that its desired
sequence number is already recorded, blocks on ``->exp_wq[1]``.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 326) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 327) | **Quick Quiz**: |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 328) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 329) | Why ``->exp_wq[1]``? Given that the value of these tasks' desired |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 330) | sequence number is two, so shouldn't they instead block on |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 331) | ``->exp_wq[2]``? |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 332) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 333) | **Answer**: |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 334) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 335) | No. |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 336) | Recall that the bottom bit of the desired sequence number indicates |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 337) | whether or not a grace period is currently in progress. It is |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 338) | therefore necessary to shift the sequence number right one bit |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 339) | position to obtain the number of the grace period. This results in |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 340) | ``->exp_wq[1]``. |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 341) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 342)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 343) If Tasks C and D also arrive at this point, they will compute the same
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 344) desired grace-period sequence number, and see that both leaf
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 345) ``rcu_node`` structures already have that value recorded. They will
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 346) therefore block on their respective ``rcu_node`` structures'
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 347) ``->exp_wq[1]`` fields, as shown below:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 348)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 349) .. kernel-figure:: Funnel3.svg
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 350)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 351) Task A now acquires the ``rcu_state`` structure's ``->exp_mutex`` and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 352) initiates the grace period, which increments ``->expedited_sequence``.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 353) Therefore, if Tasks E and F arrive, they will compute a desired sequence
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 354) number of 4 and will record this value as shown below:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 355)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 356) .. kernel-figure:: Funnel4.svg
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 357)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 358) Tasks E and F will propagate up the ``rcu_node`` combining tree, with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 359) Task F blocking on the root ``rcu_node`` structure and Task E wait for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 360) Task A to finish so that it can start the next grace period. The
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 361) resulting state is as shown below:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 362)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 363) .. kernel-figure:: Funnel5.svg
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 364)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 365) Once the grace period completes, Task A starts waking up the tasks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 366) waiting for this grace period to complete, increments the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 367) ``->expedited_sequence``, acquires the ``->exp_wake_mutex`` and then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 368) releases the ``->exp_mutex``. This results in the following state:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 369)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 370) .. kernel-figure:: Funnel6.svg
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 371)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 372) Task E can then acquire ``->exp_mutex`` and increment
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 373) ``->expedited_sequence`` to the value three. If new tasks G and H arrive
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 374) and moves up the combining tree at the same time, the state will be as
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 375) follows:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 376)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 377) .. kernel-figure:: Funnel7.svg
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 378)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 379) Note that three of the root ``rcu_node`` structure's waitqueues are now
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 380) occupied. However, at some point, Task A will wake up the tasks blocked
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 381) on the ``->exp_wq`` waitqueues, resulting in the following state:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 382)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 383) .. kernel-figure:: Funnel8.svg
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 384)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 385) Execution will continue with Tasks E and H completing their grace
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 386) periods and carrying out their wakeups.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 387)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 388) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 389) | **Quick Quiz**: |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 390) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 391) | What happens if Task A takes so long to do its wakeups that Task E's |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 392) | grace period completes? |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 393) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 394) | **Answer**: |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 395) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 396) | Then Task E will block on the ``->exp_wake_mutex``, which will also |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 397) | prevent it from releasing ``->exp_mutex``, which in turn will prevent |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 398) | the next grace period from starting. This last is important in |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 399) | preventing overflow of the ``->exp_wq[]`` array. |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 400) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 401)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 402) Use of Workqueues
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 403) ~~~~~~~~~~~~~~~~~
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 404)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 405) In earlier implementations, the task requesting the expedited grace
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 406) period also drove it to completion. This straightforward approach had
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 407) the disadvantage of needing to account for POSIX signals sent to user
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 408) tasks, so more recent implemementations use the Linux kernel's
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 409) `workqueues <https://www.kernel.org/doc/Documentation/core-api/workqueue.rst>`__.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 410)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 411) The requesting task still does counter snapshotting and funnel-lock
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 412) processing, but the task reaching the top of the funnel lock does a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 413) ``schedule_work()`` (from ``_synchronize_rcu_expedited()`` so that a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 414) workqueue kthread does the actual grace-period processing. Because
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 415) workqueue kthreads do not accept POSIX signals, grace-period-wait
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 416) processing need not allow for POSIX signals. In addition, this approach
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 417) allows wakeups for the previous expedited grace period to be overlapped
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 418) with processing for the next expedited grace period. Because there are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 419) only four sets of waitqueues, it is necessary to ensure that the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 420) previous grace period's wakeups complete before the next grace period's
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 421) wakeups start. This is handled by having the ``->exp_mutex`` guard
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 422) expedited grace-period processing and the ``->exp_wake_mutex`` guard
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 423) wakeups. The key point is that the ``->exp_mutex`` is not released until
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 424) the first wakeup is complete, which means that the ``->exp_wake_mutex``
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 425) has already been acquired at that point. This approach ensures that the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 426) previous grace period's wakeups can be carried out while the current
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 427) grace period is in process, but that these wakeups will complete before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 428) the next grace period starts. This means that only three waitqueues are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 429) required, guaranteeing that the four that are provided are sufficient.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 430)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 431) Stall Warnings
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 432) ~~~~~~~~~~~~~~
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 433)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 434) Expediting grace periods does nothing to speed things up when RCU
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 435) readers take too long, and therefore expedited grace periods check for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 436) stalls just as normal grace periods do.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 437)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 438) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 439) | **Quick Quiz**: |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 440) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 441) | But why not just let the normal grace-period machinery detect the |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 442) | stalls, given that a given reader must block both normal and |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 443) | expedited grace periods? |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 444) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 445) | **Answer**: |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 446) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 447) | Because it is quite possible that at a given time there is no normal |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 448) | grace period in progress, in which case the normal grace period |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 449) | cannot emit a stall warning. |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 450) +-----------------------------------------------------------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 451)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 452) The ``synchronize_sched_expedited_wait()`` function loops waiting for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 453) the expedited grace period to end, but with a timeout set to the current
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 454) RCU CPU stall-warning time. If this time is exceeded, any CPUs or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 455) ``rcu_node`` structures blocking the current grace period are printed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 456) Each stall warning results in another pass through the loop, but the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 457) second and subsequent passes use longer stall times.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 458)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 459) Mid-boot operation
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 460) ~~~~~~~~~~~~~~~~~~
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 461)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 462) The use of workqueues has the advantage that the expedited grace-period
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 463) code need not worry about POSIX signals. Unfortunately, it has the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 464) corresponding disadvantage that workqueues cannot be used until they are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 465) initialized, which does not happen until some time after the scheduler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 466) spawns the first task. Given that there are parts of the kernel that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 467) really do want to execute grace periods during this mid-boot “dead
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 468) zone”, expedited grace periods must do something else during thie time.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 469)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 470) What they do is to fall back to the old practice of requiring that the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 471) requesting task drive the expedited grace period, as was the case before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 472) the use of workqueues. However, the requesting task is only required to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 473) drive the grace period during the mid-boot dead zone. Before mid-boot, a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 474) synchronous grace period is a no-op. Some time after mid-boot,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 475) workqueues are used.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 476)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 477) Non-expedited non-SRCU synchronous grace periods must also operate
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 478) normally during mid-boot. This is handled by causing non-expedited grace
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 479) periods to take the expedited code path during mid-boot.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 480)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 481) The current code assumes that there are no POSIX signals during the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 482) mid-boot dead zone. However, if an overwhelming need for POSIX signals
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 483) somehow arises, appropriate adjustments can be made to the expedited
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 484) stall-warning code. One such adjustment would reinstate the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 485) pre-workqueue stall-warning checks, but only during the mid-boot dead
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 486) zone.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 487)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 488) With this refinement, synchronous grace periods can now be used from
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 489) task context pretty much any time during the life of the kernel. That
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 490) is, aside from some points in the suspend, hibernate, or shutdown code
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 491) path.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 492)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 493) Summary
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 494) ~~~~~~~
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 495)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 496) Expedited grace periods use a sequence-number approach to promote
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 497) batching, so that a single grace-period operation can serve numerous
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 498) requests. A funnel lock is used to efficiently identify the one task out
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 499) of a concurrent group that will request the grace period. All members of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 500) the group will block on waitqueues provided in the ``rcu_node``
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 501) structure. The actual grace-period processing is carried out by a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 502) workqueue.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 503)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 504) CPU-hotplug operations are noted lazily in order to prevent the need for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 505) tight synchronization between expedited grace periods and CPU-hotplug
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 506) operations. The dyntick-idle counters are used to avoid sending IPIs to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 507) idle CPUs, at least in the common case. RCU-preempt and RCU-sched use
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 508) different IPI handlers and different code to respond to the state
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 509) changes carried out by those handlers, but otherwise use common code.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 510)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 511) Quiescent states are tracked using the ``rcu_node`` tree, and once all
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 512) necessary quiescent states have been reported, all tasks waiting on this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 513) expedited grace period are awakened. A pair of mutexes are used to allow
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 514) one grace period's wakeups to proceed concurrently with the next grace
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 515) period's processing.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 516)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 517) This combination of mechanisms allows expedited grace periods to run
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 518) reasonably efficiently. However, for non-time-critical tasks, normal
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 519) grace periods should be used instead because their longer duration
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 520) permits much higher degrees of batching, and thus much lower per-request
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 521) overheads.