Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards


On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic types provide an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()

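A minimal sketch of the acquire/release pair (the 'data' and 'ready'
variables are hypothetical): the writer publishes a payload with a RELEASE
store, the reader consumes it with an ACQUIRE load, so observing ready == 1
guarantees observing the store to data:

  static int data;
  static atomic_t ready = ATOMIC_INIT(0);

  void producer(void)
  {
    data = 42;
    atomic_set_release(&ready, 1);    /* orders the 'data' store before 'ready' */
  }

  void consumer(void)
  {
    if (atomic_read_acquire(&ready))  /* pairs with the release store above */
      BUG_ON(data != 42);
  }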

RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()

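For instance, with a hypothetical event counter (new and old are plain ints):

  static atomic_t nr_events = ATOMIC_INIT(0);

  atomic_inc(&nr_events);               /* no return value */
  new = atomic_inc_return(&nr_events);  /* returns the new (incremented) value */
  old = atomic_fetch_inc(&nr_events);   /* returns the value before the increment */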

Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()

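For example, a test-and-clear of a flag bit (MY_FLAG and flags are
hypothetical):

  atomic_or(MY_FLAG, &flags);                  /* set the bit */
  old = atomic_fetch_andnot(MY_FLAG, &flags);  /* clear it, return the old bits */
  if (old & MY_FLAG) {
    /* the flag was still set when we cleared it */
  }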

Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()

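atomic_try_cmpxchg() writes the observed value back into 'old' on failure,
which gives the canonical loop shape. A sketch of a saturating increment
(the saturation policy is made up for illustration; v is an atomic_t *):

  int old = atomic_read(v);

  do {
    if (old == INT_MAX)
      break;              /* saturate instead of wrapping around */
  } while (!atomic_try_cmpxchg(v, &old, old + 1));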

Reference count (but please see refcount_t):

  atomic_add_unless(), atomic_inc_not_zero()
  atomic_sub_and_test(), atomic_dec_and_test()

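A get/put sketch ('struct obj' and its 'ref' member are hypothetical; new
code should use refcount_t, which checks for over/underflow):

  bool obj_get(struct obj *o)
  {
    return atomic_inc_not_zero(&o->ref);  /* fails once ref has hit 0 */
  }

  void obj_put(struct obj *o)
  {
    if (atomic_dec_and_test(&o->ref))     /* true only for the final put */
      kfree(o);
  }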

Misc:

  atomic_inc_and_test(), atomic_add_negative()
  atomic_dec_unless_positive(), atomic_inc_unless_negative()


Barriers:

  smp_mb__{before,after}_atomic()


TYPES (signed vs unsigned)
-----

While atomic_t, atomic_long_t and atomic64_t use int, long and s64
respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
(which implies -fwrapv) and defines signed overflow to behave like
2s-complement.

Therefore, an explicitly unsigned variant of the atomic ops is strictly
unnecessary and we can simply cast; there is no UB.
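
A sketch of what that looks like in practice, using atomic_t to hold a
wrapping unsigned sequence number (the helper is hypothetical):

  static atomic_t seq = ATOMIC_INIT(0);

  u32 next_seq(void)
  {
    /* wrapping past INT_MAX is well-defined under -fno-strict-overflow;
     * the cast recovers the unsigned interpretation */
    return (u32)atomic_inc_return(&seq);
  }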

There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
signed types.

With this we also conform to the C/C++ _Atomic behaviour and things like
P1236R1.


SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively. Therefore, if you find yourself only using
the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
and are doing it wrong.
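
That canonical form looks roughly like this (a sketch of the generic
versions; architectures may provide their own):

  static inline int atomic_read(const atomic_t *v)
  {
    return READ_ONCE(v->counter);
  }

  static inline void atomic_set(atomic_t *v, int i)
  {
    WRITE_ONCE(v->counter, i);
  }

  static inline int atomic_read_acquire(const atomic_t *v)
  {
    return smp_load_acquire(&v->counter);
  }

  static inline void atomic_set_release(atomic_t *v, int i)
  {
    smp_store_release(&v->counter, i);
  }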

A note for the implementation of atomic_set{}() is that it must not break the
atomicity of the RMW ops. That is:

  C Atomic-RMW-ops-are-atomic-WRT-atomic_set

  {
    atomic_t v = ATOMIC_INIT(1);
  }

  P0(atomic_t *v)
  {
    (void)atomic_add_unless(v, 1, 0);
  }

  P1(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from CPU1 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_ in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate a LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0						CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
						atomic_set(v, 0);
    if (ret != u)				  WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

the typical solution is to then implement atomic_set{}() with atomic_xchg().
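
A sketch of that solution: route the store through the same lock-based
serialization as the other RMW ops:

  static inline void atomic_set(atomic_t *v, int i)
  {
    (void)atomic_xchg(v, i);
  }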


RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}() (see the
   sketch after this list)

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented.
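
To see why reversibility matters (i, mask and v are hypothetical):

  new = atomic_add_return(i, v);   /* the old value is recoverable: new - i */
  old = atomic_fetch_or(mask, v);  /* bitwise ops destroy the old bits, so
                                    * only the fetch_ form can return them */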

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.


ORDERING  (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when an operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set)  is a  RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.
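
Annotated with these rules (a sketch; v, new, old and val are hypothetical):

  atomic_inc(&v);                      /* RMW, no return value: unordered */
  new = atomic_inc_return(&v);         /* RMW with return value: fully ordered */
  new = atomic_inc_return_relaxed(&v); /* explicit _relaxed: unordered */
  old = atomic_cmpxchg(&v, 1, 2);      /* fully ordered on success,
                                        * unordered on failure */
  val = atomic_read_acquire(&v);       /* the load is an ACQUIRE */
  atomic_set_release(&v, 0);           /* the store is a RELEASE */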

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.


The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW atomic ops and can be used to augment/upgrade the
ordering inherent to the op. These barriers act almost like a full smp_mb():
smp_mb__before_atomic() orders all earlier accesses against the RMW op
itself and all accesses following it, and smp_mb__after_atomic() orders all
later accesses against the RMW op and all accesses preceding it. However,
accesses between the smp_mb__{before,after}_atomic() and the RMW op are not
ordered, so it is advisable to place the barrier right next to the RMW atomic
op whenever possible.
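
Sketched (x, y and v are hypothetical), the pitfall looks like:

  WRITE_ONCE(*x, 1);         /* ordered against the atomic_inc() below */
  smp_mb__before_atomic();
  WRITE_ONCE(*y, 1);         /* between the barrier and the RMW op:
                              * NOT ordered against the atomic_inc() */
  atomic_inc(v);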

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

NOTE: when the atomic RMW ops are fully ordered, they should also imply a
compiler barrier.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However the atomic_fetch_add() might be implemented more efficiently.

Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE because it orders preceding instructions against both the read
and write parts of the atomic_dec(), and against all following instructions
as well. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

  {
  }

  P0(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE.  Thus:

  P0			P1

			t = LL.acq *y (0)
			t++;
			*x = 1;
  r0 = *x (1)
  RMB
  r1 = *y (0)
			SC *y, t;

is allowed.