This document provides options for those wishing to keep their
memory-ordering lives simple, as is necessary for those whose domain
is complex.  After all, there are bugs other than memory-ordering bugs,
and the time spent gaining memory-ordering knowledge is not available
for gaining domain knowledge.  Furthermore, the Linux-kernel memory
model (LKMM) is quite complex, with subtle differences in code often
having dramatic effects on correctness.

The options near the beginning of this list are quite simple.  The idea
is not that kernel hackers don't already know about them, but rather
that they might need the occasional reminder.

Please note that this is a generic guide, and that specific subsystems
will often have special requirements or idioms.  For example, developers
of MMIO-based device drivers will often need to use mb(), rmb(), and
wmb(), and therefore might find smp_mb(), smp_rmb(), and smp_wmb()
to be more natural than smp_load_acquire() and smp_store_release().
On the other hand, those coming in from other environments will likely
be more familiar with these last two.


Single-threaded code
====================

In single-threaded code, there is no reordering, at least assuming
that your toolchain and hardware are working correctly.  In addition,
it is generally a mistake to assume your code will only run in a
single-threaded context as the kernel can enter the same code path on
multiple CPUs at the same time.  One important exception is a function
that makes no external data references.

In the general case, you will need to take explicit steps to ensure that
your code really is executed within a single thread that does not access
shared variables.  A simple way to achieve this is to define a global lock
that you acquire at the beginning of your code and release at the end,
taking care to ensure that all references to your code's shared data are
also carried out under that same lock.  Because only one thread can hold
this lock at a given time, your code will be executed single-threaded.
This approach is called "code locking".

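A minimal sketch of code locking, with the lock, counter, and function
names invented purely for illustration:

	static DEFINE_SPINLOCK(frob_lock);	/* hypothetical global lock */
	static int frob_count;			/* shared data, touched only under frob_lock */

	void frob_something(void)
	{
		spin_lock(&frob_lock);
		/* Only one context at a time executes this region. */
		frob_count++;
		spin_unlock(&frob_lock);
	}
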
Code locking can severely limit both performance and scalability, so it
should be used with caution, and only on code paths that execute rarely.
After all, a huge amount of effort was required to remove the Linux
kernel's old "Big Kernel Lock", so let's please be very careful about
adding new "little kernel locks".

One of the advantages of locking is that, in happy contrast with the
year 1981, almost all kernel developers are very familiar with locking.
The Linux kernel's lockdep (CONFIG_PROVE_LOCKING=y) is very helpful with
the formerly feared deadlock scenarios.

Please use the standard locking primitives provided by the kernel rather
than rolling your own.  For one thing, the standard primitives interact
properly with lockdep.  For another thing, these primitives have been
tuned to deal better with high contention.  And for one final thing, it is
surprisingly hard to correctly code production-quality lock acquisition
and release functions.  After all, even simple non-production-quality
locking functions must carefully prevent both the CPU and the compiler
from moving code in either direction across the locking function.

Despite the scalability limitations of single-threaded code, RCU
takes this approach for much of its grace-period processing and also
for early-boot operation.  The reason RCU is able to scale despite
single-threaded grace-period processing is use of batching, where all
updates that accumulated during one grace period are handled by the
next one.  In other words, slowing down grace-period processing makes
it more efficient.  Nor is RCU unique:  Similar batching optimizations
are used in many I/O operations.


Packaged code
=============

Even if performance and scalability concerns prevent your code from
being completely single-threaded, it is often possible to use library
functions that handle the concurrency nearly or entirely on their own.
This approach delegates any LKMM worries to the library maintainer.

In the kernel, what is the "library"?  Quite a bit.  It includes the
contents of the lib/ directory, much of the include/linux/ directory
along with a lot of other heavily used APIs.  Particularly heavily used
examples include the list macros (for example, include/linux/{,rcu}list.h),
workqueues, smp_call_function(), and the various hash tables and
search trees.
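
For example, here is a sketch of handing concurrency off to the lockless
list implementation in include/linux/llist.h; the item structure and
function names are invented for illustration, and the library takes care
of the cross-CPU coordination:

	struct frob_item {			/* hypothetical item type */
		struct llist_node llnode;
		int value;
	};

	static LLIST_HEAD(frob_list);

	/* Producers may run concurrently on any CPU. */
	void frob_queue(struct frob_item *item)
	{
		llist_add(&item->llnode, &frob_list);
	}

	/* A consumer grabs everything queued so far in one go. */
	void frob_drain(void)
	{
		struct llist_node *first = llist_del_all(&frob_list);
		struct frob_item *item, *tmp;

		llist_for_each_entry_safe(item, tmp, first, llnode)
			kfree(item);
	}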


Data locking
============

With code locking, we use single-threaded code execution to guarantee
serialized access to the data that the code is accessing.  However,
we can also achieve this by instead associating the lock with specific
instances of the data structures.  This creates a "critical section"
in the code execution that will execute as though it is single-threaded.
By placing all the accesses and modifications to a shared data structure
inside a critical section, we ensure that the execution context that
holds the lock has exclusive access to the shared data.

The poster boy for this approach is the hash table, where placing a lock
in each hash bucket allows operations on different buckets to proceed
concurrently.  This works because the buckets do not overlap with each
other, so that an operation on one bucket does not interfere with any
other bucket.

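A minimal sketch of such per-bucket data locking, with the structures and
table size invented for illustration:

	struct frob_elem {			/* hypothetical element type */
		struct hlist_node node;
		unsigned long key;
	};

	struct frob_bucket {			/* hypothetical bucket type */
		spinlock_t lock;
		struct hlist_head chain;
	};

	static struct frob_bucket frob_table[1024];

	void frob_add(struct frob_elem *elem)
	{
		struct frob_bucket *b = &frob_table[hash_long(elem->key, 10)];

		/* Serializes only this bucket; other buckets proceed concurrently. */
		spin_lock(&b->lock);
		hlist_add_head(&elem->node, &b->chain);
		spin_unlock(&b->lock);
	}
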
As the number of buckets increases, data locking scales naturally.
In particular, if the amount of data increases with the number of CPUs,
increasing the number of buckets as the number of CPUs increases results
in a naturally scalable data structure.


Per-CPU processing
==================

Partitioning processing and data over CPUs allows each CPU to take
a single-threaded approach while providing excellent performance and
scalability.  Of course, there is no free lunch:  The dark side of this
excellence is substantially increased memory footprint.

In addition, it is sometimes necessary to update some global view of
this processing and data, in which case something like locking must be
used to protect this global view.  This is the approach taken by the
percpu_counter infrastructure.  In many cases, there are already
generic/library variants of commonly used per-CPU constructs available.
Please use them rather than rolling your own.

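A sketch of the per-CPU half of this pattern, with the counter name
invented for illustration; the batched, lock-protected global view is
what the percpu_counter infrastructure packages up for you:

	static DEFINE_PER_CPU(unsigned long, frob_events);	/* hypothetical counter */

	void frob_note_event(void)
	{
		this_cpu_inc(frob_events);	/* touches only this CPU's counter */
	}

	unsigned long frob_events_total(void)	/* rare, approximate global view */
	{
		unsigned long sum = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			sum += READ_ONCE(per_cpu(frob_events, cpu));
		return sum;
	}
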
RCU uses DEFINE_PER_CPU*() declarations to create a number of per-CPU
data sets.  For example, each CPU does private quiescent-state processing
within its instance of the per-CPU rcu_data structure, and then uses data
locking to report quiescent states up the grace-period combining tree.


Packaged primitives: Sequence locking
=====================================

Lockless programming is considered by many to be more difficult than
lock-based programming, but there are a few lockless design patterns that
have been built out into an API.  One of these APIs is sequence locking.
Although this API can be used in extremely complex ways, there are simple
and effective ways of using it that avoid the need to pay attention to
memory ordering.

The basic keep-things-simple rule for sequence locking is "do not write
in read-side code".  Yes, you can do writes from within sequence-locking
readers, but it won't be so simple.  For example, such writes will be
lockless and should be idempotent.

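A sketch of that simple usage, with the seqlock and data names invented
for illustration; the reader retries rather than writing, and writers are
serialized by the seqlock itself:

	static DEFINE_SEQLOCK(frob_seqlock);	/* hypothetical */
	static u64 frob_a, frob_b;		/* shared data guarded by frob_seqlock */

	u64 frob_read_sum(void)			/* read side: no writes to shared data */
	{
		unsigned int seq;
		u64 sum;

		do {
			seq = read_seqbegin(&frob_seqlock);
			sum = frob_a + frob_b;
		} while (read_seqretry(&frob_seqlock, seq));
		return sum;
	}

	void frob_update(u64 a, u64 b)		/* write side, serialized by the seqlock */
	{
		write_seqlock(&frob_seqlock);
		frob_a = a;
		frob_b = b;
		write_sequnlock(&frob_seqlock);
	}
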
For more sophisticated use cases, LKMM can guide you, including use
cases involving combining sequence locking with other synchronization
primitives.  (LKMM does not yet know about sequence locking, so it is
currently necessary to open-code it in your litmus tests.)

Additional information may be found in include/linux/seqlock.h.

Packaged primitives: RCU
========================

Another lockless design pattern that has been baked into an API
is RCU.  The Linux kernel makes sophisticated use of RCU, but the
keep-things-simple rules for RCU are "do not write in read-side code",
"do not update anything that is visible to and accessed by readers",
and "protect updates with locking".

These rules are illustrated by the functions foo_update_a() and
foo_get_a() shown in Documentation/RCU/whatisRCU.rst.  Additional
RCU usage patterns may be found in Documentation/RCU and in the
source code.
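
For reference, here is a condensed sketch in the spirit of those two
functions; the struct, lock name, and exact calls are illustrative, and
whatisRCU.rst has the authoritative version:

	struct foo {
		int a;
	};
	static struct foo __rcu *gbl_foo;
	static DEFINE_SPINLOCK(foo_lock);	/* protects updates to gbl_foo */

	int foo_get_a(void)			/* read side: no writes, no locks */
	{
		int retval;

		rcu_read_lock();
		retval = rcu_dereference(gbl_foo)->a;
		rcu_read_unlock();
		return retval;
	}

	void foo_update_a(int new_a)		/* update side: copy, publish, reclaim */
	{
		struct foo *new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		struct foo *old_fp;

		/* Allocation-failure handling omitted for brevity. */
		spin_lock(&foo_lock);
		old_fp = rcu_dereference_protected(gbl_foo,
						   lockdep_is_held(&foo_lock));
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_lock);
		synchronize_rcu();
		kfree(old_fp);
	}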


Packaged primitives: Atomic operations
======================================

Back in the day, the Linux kernel had three types of atomic operations:

1.	Initialization and read-out, such as atomic_set() and atomic_read().

2.	Operations that did not return a value and provided no ordering,
	such as atomic_inc() and atomic_dec().

3.	Operations that returned a value and provided full ordering, such as
	atomic_add_return() and atomic_dec_and_test().  Note that some
	value-returning operations provide full ordering only conditionally.
	For example, cmpxchg() provides ordering only upon success.

More recent kernels have operations that return a value but do not
provide full ordering.  These are flagged with either a _relaxed()
suffix (providing no ordering), or an _acquire() or _release() suffix
(providing limited ordering).
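
To make the distinctions concrete, here is a small sketch; the counter
and the message are invented for illustration:

	static atomic_t frob_refs = ATOMIC_INIT(1);	/* hypothetical counter */

	void frob_atomic_examples(void)
	{
		int old;

		atomic_set(&frob_refs, 1);		/* type 1: initialization, no ordering */
		old = atomic_read(&frob_refs);		/* type 1: read-out, no ordering */

		atomic_inc(&frob_refs);			/* type 2: no return value, no ordering */

		if (atomic_dec_and_test(&frob_refs))	/* type 3: returns a value, fully ordered */
			pr_info("dropped last reference\n");

		old = atomic_cmpxchg(&frob_refs, 0, 1);	/* ordered only if the exchange succeeds */
		old = atomic_fetch_add_relaxed(1, &frob_refs);	/* returns a value, but unordered */
	}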

Additional information may be found in these files:

Documentation/atomic_t.txt
Documentation/atomic_bitops.txt
Documentation/core-api/atomic_ops.rst
Documentation/core-api/refcount-vs-atomic.rst

Reading code using these primitives is often also quite helpful.


Lockless, fully ordered
=======================

When using locking, there often comes a time when it is necessary
to access some variable or another without holding the data lock
that serializes access to that variable.

If you want to keep things simple, use the initialization and read-out
operations from the previous section only when there are no racing
accesses.  Otherwise, use only fully ordered operations when accessing
or modifying the variable.  This approach guarantees that code prior
to a given access to that variable will be seen by all CPUs as having
happened before any code following any later access to that same variable.

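A sketch of the idea, assuming a single publisher and with all names
invented for illustration; because both sides use fully ordered
value-returning operations, a consumer that sees the flag also sees
everything the publisher did beforehand:

	static atomic_t frob_ready = ATOMIC_INIT(0);	/* hypothetical flag */
	static int frob_data;				/* payload handed off via frob_ready */

	void frob_publish(int value)
	{
		frob_data = value;		/* code prior to the ordered access... */
		atomic_xchg(&frob_ready, 1);	/* fully ordered value-returning update */
	}

	void frob_poll(void)
	{
		if (atomic_xchg(&frob_ready, 0))	/* fully ordered check-and-clear */
			pr_info("frob_data = %d\n", frob_data);	/* ...is visible here */
	}
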
Please note that per-CPU functions are not atomic operations and
hence they do not provide any ordering guarantees at all.

If the lockless accesses are frequently executed reads that are used
only for heuristics, or if they are frequently executed writes that
are used only for statistics, please see the next section.


Lockless statistics and heuristics
==================================

Unordered primitives such as atomic_read(), atomic_set(), READ_ONCE(), and
WRITE_ONCE() can safely be used in some cases.  These primitives provide
no ordering, but they do prevent the compiler from carrying out a number
of destructive optimizations (for which please see the next section).
One example use for these primitives is statistics, such as per-CPU
counters exemplified by the rt_cache_stat structure's routing-cache
statistics counters.  Another example use case is heuristics, such as
the jiffies_till_first_fqs and jiffies_till_next_fqs kernel parameters
controlling how often RCU scans for idle CPUs.

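A sketch of the statistics case, with the counter name invented for
illustration; the lockless increment can occasionally lose an update
under contention, which is usually acceptable for statistics:

	static unsigned long frob_drops;	/* hypothetical statistic */

	void frob_note_drop(void)
	{
		/* Unordered and possibly lossy, but never torn. */
		WRITE_ONCE(frob_drops, READ_ONCE(frob_drops) + 1);
	}

	unsigned long frob_read_drops(void)
	{
		return READ_ONCE(frob_drops);	/* a snapshot, not a synchronized value */
	}
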
But be careful.  "Unordered" really does mean "unordered".  It is all
too easy to assume ordering, and this assumption must be avoided when
using these primitives.


Don't let the compiler trip you up
==================================

It can be quite tempting to use plain C-language accesses for lockless
loads from and stores to shared variables.  Although this is both
possible and quite common in the Linux kernel, it does require a
surprising amount of analysis, care, and knowledge about the compiler.
Yes, some decades ago it was not unfair to consider a C compiler to be
an assembler with added syntax and better portability, but the advent of
sophisticated optimizing compilers means that those days are long gone.
Today's optimizing compilers can profoundly rewrite your code during the
translation process, and have long been ready, willing, and able to do so.

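As one classic illustration (the flag and functions below are invented),
consider a plain-access busy-wait loop: the compiler is entitled to load
the flag once, conclude that it never changes, and spin forever, whereas
READ_ONCE() forces a fresh load on every iteration:

	static int frob_done;			/* hypothetical flag set by another CPU */

	void frob_wait_buggy(void)
	{
		while (!frob_done)		/* compiler may hoist this plain load... */
			cpu_relax();		/* ...turning this into an infinite loop */
	}

	void frob_wait(void)
	{
		while (!READ_ONCE(frob_done))	/* forces a fresh load each time around */
			cpu_relax();
	}
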
Therefore, if you really need to use C-language assignments instead of
READ_ONCE(), WRITE_ONCE(), and so on, you will need to have a very good
understanding of both the C standard and your compiler.  Here are some
introductory references and some tooling to start you on this noble quest:

Who's afraid of a big bad optimizing compiler?
	https://lwn.net/Articles/793253/
Calibrating your fear of big bad optimizing compilers
	https://lwn.net/Articles/799218/
Concurrency bugs should fear the big bad data-race detector (part 1)
	https://lwn.net/Articles/816850/
Concurrency bugs should fear the big bad data-race detector (part 2)
	https://lwn.net/Articles/816854/


More complex use cases
======================

If the alternatives above do not do what you need, please look at the
recipes-pairs.txt file to peel off the next layer of the memory-ordering
onion.