.. _kernel_hacking_lock:

===========================
Unreliable Guide To Locking
===========================

:Author: Rusty Russell

Introduction
============

Welcome to Rusty's Remarkably Unreliable Guide to Kernel Locking
issues. This document describes the locking systems in the Linux Kernel
in 2.6.

With the wide availability of HyperThreading, and preemption in the
Linux Kernel, everyone hacking on the kernel needs to know the
fundamentals of concurrency and locking for SMP.

The Problem With Concurrency
============================

(Skip this if you know what a Race Condition is).

In a normal program, you can increment a counter like so:

::

        very_important_count++;

This is what you would expect to happen:

.. table:: Expected Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (6)      |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (7)                          |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (7)     |
  +------------------------------------+------------------------------------+

This is what might happen:

.. table:: Possible Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (5)      |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (6)                          |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (6)     |
  +------------------------------------+------------------------------------+

Race Conditions and Critical Regions
------------------------------------

This overlap, where the result depends on the relative timing of
multiple tasks, is called a race condition. The piece of code containing
the concurrency issue is called a critical region. Especially since
Linux started running on SMP machines, race conditions became one of the
major issues in kernel design and implementation.

Preemption can have the same effect, even if there is only one CPU: by
preempting one task during the critical region, we have exactly the same
race condition. In this case the thread which preempts might run the
critical region itself.

The solution is to recognize when these simultaneous accesses occur, and
use locks to make sure that only one instance can enter the critical
region at any time. There are many friendly primitives in the Linux
kernel to help you do this. And then there are the unfriendly
primitives, but I'll pretend they don't exist.
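
Applied to the counter above, a minimal sketch of the fix looks like
this (the lock name and wrapper function are invented for illustration,
and this assumes a context where a plain spin_lock() is appropriate):

::

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(very_important_lock);
    static int very_important_count;

    static void increment_counter(void)
    {
            spin_lock(&very_important_lock);
            /* The read-modify-write is now atomic with respect to
             * anyone else taking the same lock. */
            very_important_count++;
            spin_unlock(&very_important_lock);
    }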

Locking in the Linux Kernel
===========================

If I could give you one piece of advice: never sleep with anyone crazier
than yourself. But if I had to give you advice on locking: **keep it
simple**.

Be reluctant to introduce new locks.

Strangely enough, this last one is the exact reverse of my advice when
you **have** slept with someone crazier than yourself. And you should
think about getting a big dog.

Two Main Types of Kernel Locks: Spinlocks and Mutexes
-----------------------------------------------------

There are two main types of kernel locks. The fundamental type is the
spinlock (``include/asm/spinlock.h``), which is a very simple
single-holder lock: if you can't get the spinlock, you keep trying
(spinning) until you can. Spinlocks are very small and fast, and can be
used anywhere.

The second type is a mutex (``include/linux/mutex.h``): it is like a
spinlock, but you may block holding a mutex. If you can't lock a mutex,
your task will suspend itself, and be woken up when the mutex is
released. This means the CPU can do something else while you are
waiting. There are many cases when you simply can't sleep (see
`What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__),
and so have to use a spinlock instead.

Neither type of lock is recursive: see
`Deadlock: Simple and Advanced <#deadlock>`__.

Locks and Uniprocessor Kernels
------------------------------

For kernels compiled without ``CONFIG_SMP`` and without
``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
design decision: when no-one else can run at the same time, there is no
reason to have a lock.

If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
is set, then spinlocks simply disable preemption, which is sufficient to
prevent any races. For most purposes, we can think of preemption as
equivalent to SMP, and not worry about it separately.

You should always test your locking code with ``CONFIG_SMP`` and
``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
because it will still catch some kinds of locking bugs.

Mutexes still exist, because they are required for synchronization
between user contexts, as we will see below.

Locking Only In User Context
----------------------------

If you have a data structure which is only ever accessed from user
context, then you can use a simple mutex (``include/linux/mutex.h``) to
protect it. This is the most trivial case: you initialize the mutex.
Then you can call mutex_lock_interruptible() to grab the
mutex, and mutex_unlock() to release it. There is also a
mutex_lock(), which should be avoided, because it will
not return if a signal is received.
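
A minimal sketch of this pattern (the mutex and function names are
made up for illustration):

::

    #include <linux/errno.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(data_mutex);

    int frob_data(void)
    {
            /* Returns -EINTR if a signal arrives while we wait. */
            if (mutex_lock_interruptible(&data_mutex))
                    return -EINTR;
            /* ... access the protected data structure ... */
            mutex_unlock(&data_mutex);
            return 0;
    }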

Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
setsockopt() and getsockopt() calls, with
nf_register_sockopt(). Registration and de-registration
are only done on module load and unload (and boot time, where there is
no concurrency), and the list of registrations is only consulted for an
unknown setsockopt() or getsockopt() system
call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
since the setsockopt and getsockopt calls may well sleep.

Locking Between User Context and Softirqs
-----------------------------------------

If a softirq shares data with user context, you have two problems.
Firstly, the current user context can be interrupted by a softirq, and
secondly, the critical region could be entered from another CPU. This is
where spin_lock_bh() (``include/linux/spinlock.h``) is
used. It disables softirqs on that CPU, then grabs the lock.
spin_unlock_bh() does the reverse. (The '_bh' suffix is
a historical reference to "Bottom Halves", the old name for software
interrupts. It should really be called spin_lock_softirq() in a
perfect world).
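
For instance, a sketch of user-context code protecting a counter it
shares with a softirq (the lock and counter names are hypothetical):

::

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long packet_count;

    /* Called from user context (e.g. a syscall). */
    void reset_stats(void)
    {
            spin_lock_bh(&stats_lock);  /* blocks softirqs on this CPU */
            packet_count = 0;
            spin_unlock_bh(&stats_lock);
    }

The softirq side can take plain spin_lock() on the same lock, since
user context can never interrupt a softirq.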

Note that you can also use spin_lock_irq() or
spin_lock_irqsave() here, which stop hardware interrupts
as well: see `Hard IRQ Context <#hard-irq-context>`__.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_bh_disable()
(``include/linux/interrupt.h``), which protects you from the softirq
being run.

Locking Between User Context and Tasklets
-----------------------------------------

This is exactly the same as above, because tasklets are actually run
from a softirq.

Locking Between User Context and Timers
---------------------------------------

This, too, is exactly the same as above, because timers are actually run
from a softirq. From a locking point of view, tasklets and timers are
identical.

Locking Between Tasklets/Timers
-------------------------------

Sometimes a tasklet or timer might want to share data with another
tasklet or timer.

The Same Tasklet/Timer
~~~~~~~~~~~~~~~~~~~~~~

Since a tasklet is never run on two CPUs at once, you don't need to
worry about your tasklet being reentrant (running twice at once), even
on SMP.

Different Tasklets/Timers
~~~~~~~~~~~~~~~~~~~~~~~~~

If another tasklet/timer wants to share data with your tasklet or
timer, you will both need to use spin_lock() and
spin_unlock() calls. spin_lock_bh() is
unnecessary here, as you are already in a tasklet, and none will be run
on the same CPU.
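
A sketch of what each handler does with the shared data (lock and data
names invented for illustration):

::

    static DEFINE_SPINLOCK(shared_lock);
    static int shared_state;

    /* Body of either tasklet's (or timer's) handler. */
    static void handler_body(void)
    {
            spin_lock(&shared_lock);  /* only guards against other CPUs */
            shared_state++;
            spin_unlock(&shared_lock);
    }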

Locking Between Softirqs
------------------------

Often a softirq might want to share data with itself or a tasklet/timer.

The Same Softirq
~~~~~~~~~~~~~~~~

The same softirq can run on the other CPUs: you can use a per-CPU array
(see `Per-CPU Data <#per-cpu-data>`__) for better performance. If you're
going so far as to use a softirq, you probably care about scalable
performance enough to justify the extra complexity.

You'll need to use spin_lock() and
spin_unlock() for shared data.

Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use spin_lock() and
spin_unlock() for shared data, whether it be a timer,
tasklet, different softirq or the same or another softirq: any of them
could be running on a different CPU.

Hard IRQ Context
================

Hardware interrupts usually communicate with a tasklet or softirq.
Frequently this involves putting work in a queue, which the softirq will
take out.

Locking Between Hard IRQ and Softirqs/Tasklets
----------------------------------------------

If a hardware irq handler shares data with a softirq, you have two
concerns. Firstly, the softirq processing can be interrupted by a
hardware interrupt, and secondly, the critical region could be entered
by a hardware interrupt on another CPU. This is where
spin_lock_irq() is used. It is defined to disable
interrupts on that CPU, then grab the lock.
spin_unlock_irq() does the reverse.

The irq handler does not need to use spin_lock_irq(), because
the softirq cannot run while the irq handler is running: it can use
spin_lock(), which is slightly faster. The only exception
would be if a different hardware irq handler uses the same lock:
spin_lock_irq() will stop that from interrupting us.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_irq_disable()
(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
being run.

spin_lock_irqsave() (``include/linux/spinlock.h``) is a
variant which saves whether interrupts were on or off in a flags word,
which is passed to spin_unlock_irqrestore(). This means
that the same code can be used inside a hard irq handler (where
interrupts are already off) and in softirqs (where the irq disabling is
required).

Note that softirqs (and hence tasklets and timers) are run on return
from hardware interrupts, so spin_lock_irq() also stops
these. In that sense, spin_lock_irqsave() is the most
general and powerful locking function.
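
The usage pattern looks like this (again, sketch code with an invented
lock name):

::

    static DEFINE_SPINLOCK(queue_lock);

    /* Safe from any context: saves and restores the interrupt state. */
    void touch_queue(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&queue_lock, flags);
            /* ... manipulate the data shared with the irq handler ... */
            spin_unlock_irqrestore(&queue_lock, flags);
    }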

Locking Between Two Hard IRQ Handlers
-------------------------------------

It is rare to have to share data between two IRQ handlers, but if you
do, spin_lock_irqsave() should be used: it is
architecture-specific whether all interrupts are disabled inside irq
handlers themselves.

Cheat Sheet For Locking
=======================

Pete Zaitcev gives the following summary:

- If you are in a process context (any syscall) and want to lock other
  processes out, use a mutex. You can take a mutex and sleep
  (``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).

- Otherwise (== data can be touched in an interrupt), use
  spin_lock_irqsave() and
  spin_unlock_irqrestore().

- Avoid holding a spinlock for more than 5 lines of code and across any
  function call (except accessors like readb()).

Table of Minimum Requirements
-----------------------------

The following table lists the **minimum** locking requirements between
various contexts. In some cases, the same context can only be running on
one CPU at a time, so no locking is required for that context (eg. a
particular thread can only run on one CPU at a time, but if it needs to
share data with another thread, locking is required).

Remember the advice above: you can always use
spin_lock_irqsave(), which is a superset of all other
spinlock primitives.

============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
.              IRQ Handler A IRQ Handler B Softirq A Softirq B Tasklet A Tasklet B Timer A Timer B User Context A User Context B
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
IRQ Handler A  None
IRQ Handler B  SLIS          None
Softirq A      SLI           SLI           SL
Softirq B      SLI           SLI           SL        SL
Tasklet A      SLI           SLI           SL        SL        None
Tasklet B      SLI           SLI           SL        SL        SL        None
Timer A        SLI           SLI           SL        SL        SL        SL        None
Timer B        SLI           SLI           SL        SL        SL        SL        SL      None
User Context A SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    None
User Context B SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    MLI            None
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============

Table: Table of Locking Requirements

+--------+----------------------------+
| SLIS   | spin_lock_irqsave          |
+--------+----------------------------+
| SLI    | spin_lock_irq              |
+--------+----------------------------+
| SL     | spin_lock                  |
+--------+----------------------------+
| SLBH   | spin_lock_bh               |
+--------+----------------------------+
| MLI    | mutex_lock_interruptible   |
+--------+----------------------------+

Table: Legend for Locking Requirements Table

The trylock Functions
=====================

There are functions that try to acquire a lock only once and immediately
return a value telling you whether they succeeded. They can be used if
you need no access to the data protected by the lock while some other
thread is holding it; if you later need that access, you have to acquire
the lock then.

spin_trylock() does not spin but returns non-zero if it
acquires the spinlock on the first try or 0 if not. This function can be
used in all contexts like spin_lock(): you must have
disabled the contexts that might interrupt you and acquire the spinlock
themselves.

mutex_trylock() does not suspend your task but returns
non-zero if it could lock the mutex on the first try or 0 if not. This
function cannot be safely used in hardware or software interrupt
contexts despite not sleeping.
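
For example, a sketch of an opportunistic fast path (the mutex and
function names are invented):

::

    static DEFINE_MUTEX(stats_mutex);

    /* Update statistics, but skip this round if someone is busy with them. */
    void maybe_update_stats(void)
    {
            if (!mutex_trylock(&stats_mutex))
                    return;         /* contended: we'll retry later */
            /* ... update the protected statistics ... */
            mutex_unlock(&stats_mutex);
    }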

Common Examples
===============

Let's step through a simple example: a cache of number to name mappings.
The cache keeps a count of how often each of the objects is used, and
when it gets full, throws out the least used one.

All In User Context
-------------------

For our first example, we assume that all operations are in user context
(ie. from system calls), so we can sleep. This means we can use a mutex
to protect the cache and all the objects within it. Here's the code::

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/mutex.h>
    #include <asm/errno.h>

    struct object
    {
            struct list_head list;
            int id;
            char name[32];
            int popularity;
    };

    /* Protects the cache, cache_num, and the objects within it */
    static DEFINE_MUTEX(cache_lock);
    static LIST_HEAD(cache);
    static unsigned int cache_num = 0;
    #define MAX_CACHE_SIZE 10

    /* Must be holding cache_lock */
    static struct object *__cache_find(int id)
    {
            struct object *i;

            list_for_each_entry(i, &cache, list)
                    if (i->id == id) {
                            i->popularity++;
                            return i;
                    }
            return NULL;
    }

    /* Must be holding cache_lock */
    static void __cache_delete(struct object *obj)
    {
            BUG_ON(!obj);
            list_del(&obj->list);
            kfree(obj);
            cache_num--;
    }

    /* Must be holding cache_lock */
    static void __cache_add(struct object *obj)
    {
            list_add(&obj->list, &cache);
            if (++cache_num > MAX_CACHE_SIZE) {
                    struct object *i, *outcast = NULL;
                    list_for_each_entry(i, &cache, list) {
                            if (!outcast || i->popularity < outcast->popularity)
                                    outcast = i;
                    }
                    __cache_delete(outcast);
            }
    }

    int cache_add(int id, const char *name)
    {
            struct object *obj;

            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            mutex_lock(&cache_lock);
            __cache_add(obj);
            mutex_unlock(&cache_lock);
            return 0;
    }

    void cache_delete(int id)
    {
            mutex_lock(&cache_lock);
            __cache_delete(__cache_find(id));
            mutex_unlock(&cache_lock);
    }

    int cache_find(int id, char *name)
    {
            struct object *obj;
            int ret = -ENOENT;

            mutex_lock(&cache_lock);
            obj = __cache_find(id);
            if (obj) {
                    ret = 0;
                    strcpy(name, obj->name);
            }
            mutex_unlock(&cache_lock);
            return ret;
    }

Note that we always make sure we have the cache_lock when we add,
delete, or look up the cache: both the cache infrastructure itself and
the contents of the objects are protected by the lock. In this case it's
easy, since we copy the data for the user, and never let them access the
objects directly.

There is a slight (and common) optimization here: in
cache_add() we set up the fields of the object before
grabbing the lock. This is safe, as no-one else can access it until we
put it in cache.

Accessing From Interrupt Context
--------------------------------

Now consider the case where cache_find() can be called
from interrupt context: either a hardware interrupt or a softirq. An
example would be a timer which deletes objects from the cache.

The change is shown below, in standard patch format: the ``-`` are lines
which are taken away, and the ``+`` are lines which are added.

::

    --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
    +++ cache.c.interrupt 2003-12-09 14:07:49.000000000 +1100
    @@ -12,7 +12,7 @@
             int popularity;
     };

    -static DEFINE_MUTEX(cache_lock);
    +static DEFINE_SPINLOCK(cache_lock);
     static LIST_HEAD(cache);
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10
    @@ -55,6 +55,7 @@
     int cache_add(int id, const char *name)
     {
             struct object *obj;
    +        unsigned long flags;

             if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                     return -ENOMEM;
    @@ -63,30 +64,33 @@
             obj->id = id;
             obj->popularity = 0;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return 0;
     }

     void cache_delete(int id)
     {
    -        mutex_lock(&cache_lock);
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_delete(__cache_find(id));
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
     }

     int cache_find(int id, char *name)
     {
             struct object *obj;
             int ret = -ENOENT;
    +        unsigned long flags;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
             if (obj) {
                     ret = 0;
                     strcpy(name, obj->name);
             }
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return ret;
     }

Note that spin_lock_irqsave() will turn off interrupts
if they are on; otherwise (if we are already in an interrupt handler)
it does nothing. Hence these functions are safe to call from any
context.

Unfortunately, cache_add() calls kmalloc()
with the ``GFP_KERNEL`` flag, which is only legal in user context. I
have assumed that cache_add() is still only called in
user context, otherwise this should become a parameter to
cache_add().
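
If callers might be in atomic context, a sketch of that change (passing
the allocation flags in as a hypothetical ``gfp`` parameter):

::

    int cache_add(int id, const char *name, gfp_t gfp)
    {
            struct object *obj;
            unsigned long flags;

            /* The caller picks GFP_KERNEL or GFP_ATOMIC as appropriate. */
            if ((obj = kmalloc(sizeof(*obj), gfp)) == NULL)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            spin_lock_irqsave(&cache_lock, flags);
            __cache_add(obj);
            spin_unlock_irqrestore(&cache_lock, flags);
            return 0;
    }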

Exposing Objects Outside This File
----------------------------------

If our objects contained more information, it might not be sufficient to
copy the information in and out: other parts of the code might want to
keep pointers to these objects, for example, rather than looking up the
id every time. This produces two problems.

The first problem is that we use the ``cache_lock`` to protect objects:
we'd need to make this non-static so the rest of the code can use it.
This makes locking trickier, as it is no longer all in one place.

The second problem is the lifetime problem: if another structure keeps a
pointer to an object, it presumably expects that pointer to remain
valid. Unfortunately, this is only guaranteed while you hold the lock,
otherwise someone might call cache_delete() and even
worse, add another object, re-using the same address.

As there is only one lock, you can't hold it forever: no-one else would
get any work done.

The solution to this problem is to use a reference count: everyone who
has a pointer to the object increases it when they first get the object,
and drops the reference count when they're finished with it. Whoever
drops it to zero knows it is unused, and can actually delete it.

Here is the code::

    --- cache.c.interrupt 2003-12-09 14:25:43.000000000 +1100
    +++ cache.c.refcnt 2003-12-09 14:33:05.000000000 +1100
    @@ -7,6 +7,7 @@
     struct object
     {
             struct list_head list;
    +        unsigned int refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -17,6 +18,35 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    +static void __object_put(struct object *obj)
    +{
    +        if (--obj->refcnt == 0)
    +                kfree(obj);
    +}
    +
    +static void __object_get(struct object *obj)
    +{
    +        obj->refcnt++;
    +}
    +
    +void object_put(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_put(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
    +void object_get(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_get(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
     /* Must be holding cache_lock */
     static struct object *__cache_find(int id)
     {
    @@ -35,6 +65,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    +        __object_put(obj);
             cache_num--;
     }

    @@ -63,6 +94,7 @@
             strscpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    +        obj->refcnt = 1; /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    @@ -79,18 +111,15 @@
             spin_unlock_irqrestore(&cache_lock, flags);
     }

    -int cache_find(int id, char *name)
    +struct object *cache_find(int id)
     {
             struct object *obj;
    -        int ret = -ENOENT;
             unsigned long flags;

             spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
    -        if (obj) {
    -                ret = 0;
    -                strcpy(name, obj->name);
    -        }
    +        if (obj)
    +                __object_get(obj);
             spin_unlock_irqrestore(&cache_lock, flags);
    -        return ret;
    +        return obj;
     }

We encapsulate the reference counting in the standard 'get' and 'put'
functions. Now we can return the object itself from
cache_find(), which has the advantage that the user can
now sleep holding the object (eg. to copy_to_user() the
name to userspace).

The other point to note is that I said a reference should be held for
every pointer to the object: thus the reference count is 1 when first
inserted into the cache. In some versions the framework does not hold a
reference count, but they are more complicated.
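
A hypothetical caller would then look something like this (``id``,
``buf`` and ``err`` are assumed from the surrounding code):

::

    struct object *obj = cache_find(id);

    if (obj) {
            /* We hold a reference, so the object can't vanish under
             * us, even if we sleep inside copy_to_user(). */
            if (copy_to_user(buf, obj->name, strlen(obj->name) + 1))
                    err = -EFAULT;
            object_put(obj);
    }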

Using Atomic Operations For The Reference Count
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In practice, :c:type:`atomic_t` would usually be used for refcnt. There are a
number of atomic operations defined in ``include/asm/atomic.h``: these
are guaranteed to be seen atomically from all CPUs in the system, so no
lock is required. In this case, it is simpler than using spinlocks,
although for anything non-trivial using spinlocks is clearer.
atomic_inc() and atomic_dec_and_test()
are used instead of the standard increment and decrement operators, and
the lock is no longer used to protect the reference count itself.

::

    --- cache.c.refcnt 2003-12-09 15:00:35.000000000 +1100
    +++ cache.c.refcnt-atomic 2003-12-11 15:49:42.000000000 +1100
    @@ -7,7 +7,7 @@
     struct object
     {
             struct list_head list;
    -        unsigned int refcnt;
    +        atomic_t refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -18,33 +18,15 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    -static void __object_put(struct object *obj)
    -{
    -        if (--obj->refcnt == 0)
    -                kfree(obj);
    -}
    -
    -static void __object_get(struct object *obj)
    -{
    -        obj->refcnt++;
    -}
    -
     void object_put(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_put(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        if (atomic_dec_and_test(&obj->refcnt))
    +                kfree(obj);
     }

     void object_get(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        atomic_inc(&obj->refcnt);
     }

     /* Must be holding cache_lock */
    @@ -65,7 +47,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    -        __object_put(obj);
    +        object_put(obj);
             cache_num--;
     }

    @@ -94,7 +76,7 @@
             strscpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    -        obj->refcnt = 1; /* The cache holds a reference */
    +        atomic_set(&obj->refcnt, 1); /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 785) @@ -119,7 +101,7 @@
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 786) spin_lock_irqsave(&cache_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 787) obj = __cache_find(id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 788) if (obj)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 789) - __object_get(obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 790) + object_get(obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 791) spin_unlock_irqrestore(&cache_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 792) return obj;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 793) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 794)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 795) Protecting The Objects Themselves
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 796) ---------------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 797)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 798) In these examples, we assumed that the objects (except the reference
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 799) counts) never changed once they are created. If we wanted to allow the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 800) name to change, there are three possibilities:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 801)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 802) - You can make ``cache_lock`` non-static, and tell people to grab that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 803) lock before changing the name in any object.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 804)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 805) - You can provide a cache_obj_rename() which grabs this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 806) lock and changes the name for the caller, and tell everyone to use
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 807) that function.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 808)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 809) - You can make the ``cache_lock`` protect only the cache itself, and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 810) use another lock to protect the name.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 811)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 812) Theoretically, you can make the locks as fine-grained as one lock for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 813) every field, for every object. In practice, the most common variants
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 814) are:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 815)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 816) - One lock which protects the infrastructure (the ``cache`` list in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 817) this example) and all the objects. This is what we have done so far.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 818)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 819) - One lock which protects the infrastructure (including the list
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 820) pointers inside the objects), and one lock inside the object which
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 821) protects the rest of that object.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 822)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 823) - Multiple locks to protect the infrastructure (eg. one lock per hash
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 824) chain), possibly with a separate per-object lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 825)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 826) Here is the "lock-per-object" implementation:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 827)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 828) ::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 829)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 830) --- cache.c.refcnt-atomic 2003-12-11 15:50:54.000000000 +1100
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 831) +++ cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 832) @@ -6,11 +6,17 @@
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 833)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 834) struct object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 835) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 836) + /* These two protected by cache_lock. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 837) struct list_head list;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 838) + int popularity;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 839) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 840) atomic_t refcnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 841) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 842) + /* Doesn't change once created. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 843) int id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 844) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 845) + spinlock_t lock; /* Protects the name */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 846) char name[32];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 847) - int popularity;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 848) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 849)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 850) static DEFINE_SPINLOCK(cache_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 851) @@ -77,6 +84,7 @@
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 852) obj->id = id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 853) obj->popularity = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 854) atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 855) + spin_lock_init(&obj->lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 856)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 857) spin_lock_irqsave(&cache_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 858) __cache_add(obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 859)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 860) Note that I decide that the popularity count should be protected by the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 861) ``cache_lock`` rather than the per-object lock: this is because it (like
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 862) the :c:type:`struct list_head <list_head>` inside the object)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 863) is logically part of the infrastructure. This way, I don't need to grab
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 864) the lock of every object in __cache_add() when seeking
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 865) the least popular.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 866)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 867) I also decided that the id member is unchangeable, so I don't need to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 868) grab each object lock in __cache_find() to examine the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 869) id: the object lock is only used by a caller who wants to read or write
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 870) the name field.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 871)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 872) Note also that I added a comment describing what data was protected by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 873) which locks. This is extremely important, as it describes the runtime
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 874) behavior of the code, and can be hard to gain from just reading. And as
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 875) Alan Cox says, “Lock data, not code”.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 876)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 877) Common Problems
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 878) ===============
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 879)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 880) Deadlock: Simple and Advanced
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 881) -----------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 882)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 883) There is a coding bug where a piece of code tries to grab a spinlock
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 884) twice: it will spin forever, waiting for the lock to be released
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 885) (spinlocks, rwlocks and mutexes are not recursive in Linux). This is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 886) trivial to diagnose: not a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 887) stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 888)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 889) For a slightly more complex case, imagine you have a region shared by a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 890) softirq and user context. If you use a spin_lock() call
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 891) to protect it, it is possible that the user context will be interrupted
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 892) by the softirq while it holds the lock, and the softirq will then spin
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 893) forever trying to get the same lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 894)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 895) Both of these are called deadlock, and as shown above, it can occur even
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 896) with a single CPU (although not on UP compiles, since spinlocks vanish
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 897) on kernel compiles with ``CONFIG_SMP``\ =n. You'll still get data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 898) corruption in the second example).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 899)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 900) This complete lockup is easy to diagnose: on SMP boxes the watchdog
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 901) timer or compiling with ``DEBUG_SPINLOCK`` set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 902) (``include/linux/spinlock.h``) will show this up immediately when it
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 903) happens.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 904)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 905) A more complex problem is the so-called 'deadly embrace', involving two
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 906) or more locks. Say you have a hash table: each entry in the table is a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 907) spinlock, and a chain of hashed objects. Inside a softirq handler, you
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 908) sometimes want to alter an object from one place in the hash to another:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 909) you grab the spinlock of the old hash chain and the spinlock of the new
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 910) hash chain, and delete the object from the old one, and insert it in the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 911) new one.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 912)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 913) There are two problems here. First, if your code ever tries to move the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 914) object to the same chain, it will deadlock with itself as it tries to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 915) lock it twice. Secondly, if the same softirq on another CPU is trying to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 916) move another object in the reverse direction, the following could
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 917) happen:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 918)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 919) +-----------------------+-----------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 920) | CPU 1 | CPU 2 |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 921) +=======================+=======================+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 922) | Grab lock A -> OK | Grab lock B -> OK |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 923) +-----------------------+-----------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 924) | Grab lock B -> spin | Grab lock A -> spin |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 925) +-----------------------+-----------------------+
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 926)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 927) Table: Consequences
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 928)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 929) The two CPUs will spin forever, waiting for the other to give up their
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 930) lock. It will look, smell, and feel like a crash.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 931)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 932) Preventing Deadlock
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 933) -------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 934)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 935) Textbooks will tell you that if you always lock in the same order, you
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 936) will never get this kind of deadlock. Practice will tell you that this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 937) approach doesn't scale: when I create a new lock, I don't understand
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 938) enough of the kernel to figure out where in the 5000 lock hierarchy it
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 939) will fit.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 940)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 941) The best locks are encapsulated: they never get exposed in headers, and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 942) are never held around calls to non-trivial functions outside the same
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 943) file. You can read through this code and see that it will never
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 944) deadlock, because it never tries to grab another lock while it has that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 945) one. People using your code don't even need to know you are using a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 946) lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 947)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 948) A classic problem here is when you provide callbacks or hooks: if you
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 949) call these with the lock held, you risk simple deadlock, or a deadly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 950) embrace (who knows what the callback will do?). Remember, the other
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 951) programmers are out to get you, so don't do this.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 952)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 953) Overzealous Prevention Of Deadlocks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 954) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 955)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 956) Deadlocks are problematic, but not as bad as data corruption. Code which
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 957) grabs a read lock, searches a list, fails to find what it wants, drops
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 958) the read lock, grabs a write lock and inserts the object has a race
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 959) condition.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 960)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 961) If you don't see why, please stay the fuck away from my code.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 962)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 963) Racing Timers: A Kernel Pastime
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 964) -------------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 965)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 966) Timers can produce their own special problems with races. Consider a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 967) collection of objects (list, hash, etc) where each object has a timer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 968) which is due to destroy it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 969)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 970) If you want to destroy the entire collection (say on module removal),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 971) you might do the following::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 972)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 973) /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 974) HUNGARIAN NOTATION */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 975) spin_lock_bh(&list_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 976)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 977) while (list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 978) struct foo *next = list->next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 979) del_timer(&list->timer);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 980) kfree(list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 981) list = next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 982) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 983)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 984) spin_unlock_bh(&list_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 985)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 986)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 987) Sooner or later, this will crash on SMP, because a timer can have just
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 988) gone off before the spin_lock_bh(), and it will only get
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 989) the lock after we spin_unlock_bh(), and then try to free
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 990) the element (which has already been freed!).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 991)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 992) This can be avoided by checking the result of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 993) del_timer(): if it returns 1, the timer has been deleted.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 994) If 0, it means (in this case) that it is currently running, so we can
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 995) do::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 996)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 997) retry:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 998) spin_lock_bh(&list_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 999)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1000) while (list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1001) struct foo *next = list->next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1002) if (!del_timer(&list->timer)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1003) /* Give timer a chance to delete this */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1004) spin_unlock_bh(&list_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1005) goto retry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1006) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1007) kfree(list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1008) list = next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1009) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1010)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1011) spin_unlock_bh(&list_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1012)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1013)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1014) Another common problem is deleting timers which restart themselves (by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1015) calling add_timer() at the end of their timer function).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1016) Because this is a fairly common case which is prone to races, you should
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1017) use del_timer_sync() (``include/linux/timer.h``) to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1018) handle this case. It returns the number of times the timer had to be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1019) deleted before we finally stopped it from adding itself back in.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1020)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1021) Locking Speed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1022) =============
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1023)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1024) There are three main things to worry about when considering speed of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1025) some code which does locking. First is concurrency: how many things are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1026) going to be waiting while someone else is holding a lock. Second is the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1027) time taken to actually acquire and release an uncontended lock. Third is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1028) using fewer, or smarter locks. I'm assuming that the lock is used fairly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1029) often: otherwise, you wouldn't be concerned about efficiency.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1030)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1031) Concurrency depends on how long the lock is usually held: you should
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1032) hold the lock for as long as needed, but no longer. In the cache
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1033) example, we always create the object without the lock held, and then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1034) grab the lock only when we are ready to insert it in the list.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1035)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1036) Acquisition times depend on how much damage the lock operations do to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1037) the pipeline (pipeline stalls) and how likely it is that this CPU was
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1038) the last one to grab the lock (ie. is the lock cache-hot for this CPU):
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1039) on a machine with more CPUs, this likelihood drops fast. Consider a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1040) 700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1041) increment takes about 58ns, a lock which is cache-hot on this CPU takes
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1042) 160ns, and a cacheline transfer from another CPU takes an additional 170
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1043) to 360ns. (These figures from Paul McKenney's `Linux Journal RCU
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1044) article <http://www.linuxjournal.com/article.php?sid=6993>`__).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1045)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1046) These two aims conflict: holding a lock for a short time might be done
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1047) by splitting locks into parts (such as in our final per-object-lock
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1048) example), but this increases the number of lock acquisitions, and the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1049) results are often slower than having a single lock. This is another
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1050) reason to advocate locking simplicity.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1051)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1052) The third concern is addressed below: there are some methods to reduce
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1053) the amount of locking which needs to be done.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1054)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1055) Read/Write Lock Variants
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1056) ------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1057)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1058) Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1059) :c:type:`struct rw_semaphore <rw_semaphore>`. These divide
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1060) users into two classes: the readers and the writers. If you are only
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1061) reading the data, you can get a read lock, but to write to the data you
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1062) need the write lock. Many people can hold a read lock, but a writer must
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1063) be sole holder.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1064)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1065) If your code divides neatly along reader/writer lines (as our cache code
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1066) does), and the lock is held by readers for significant lengths of time,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1067) using these locks can help. They are slightly slower than the normal
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1068) locks though, so in practice ``rwlock_t`` is not usually worthwhile.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1069)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1070) Avoiding Locks: Read Copy Update
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1071) --------------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1072)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1073) There is a special method of read/write locking called Read Copy Update.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1074) Using RCU, the readers can avoid taking a lock altogether: as we expect
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1075) our cache to be read more often than updated (otherwise the cache is a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1076) waste of time), it is a candidate for this optimization.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1077)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1078) How do we get rid of read locks? Getting rid of read locks means that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1079) writers may be changing the list underneath the readers. That is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1080) actually quite simple: we can read a linked list while an element is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1081) being added if the writer adds the element very carefully. For example,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1082) adding ``new`` to a single linked list called ``list``::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1083)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1084) new->next = list->next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1085) wmb();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1086) list->next = new;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1087)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1088)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1089) The wmb() is a write memory barrier. It ensures that the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1090) first operation (setting the new element's ``next`` pointer) is complete
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1091) and will be seen by all CPUs, before the second operation is (putting
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1092) the new element into the list). This is important, since modern
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1093) compilers and modern CPUs can both reorder instructions unless told
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1094) otherwise: we want a reader to either not see the new element at all, or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1095) see the new element with the ``next`` pointer correctly pointing at the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1096) rest of the list.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1097)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1098) Fortunately, there is a function to do this for standard
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1099) :c:type:`struct list_head <list_head>` lists:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1100) list_add_rcu() (``include/linux/list.h``).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1101)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1102) Removing an element from the list is even simpler: we replace the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1103) pointer to the old element with a pointer to its successor, and readers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1104) will either see it, or skip over it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1105)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1106) ::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1107)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1108) list->next = old->next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1109)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1110)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1111) There is list_del_rcu() (``include/linux/list.h``) which
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1112) does this (the normal version poisons the old object, which we don't
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1113) want).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1114)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1115) The reader must also be careful: some CPUs can look through the ``next``
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1116) pointer to start reading the contents of the next element early, but
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1117) don't realize that the pre-fetched contents is wrong when the ``next``
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1118) pointer changes underneath them. Once again, there is a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1119) list_for_each_entry_rcu() (``include/linux/list.h``)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1120) to help you. Of course, writers can just use
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1121) list_for_each_entry(), since there cannot be two
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1122) simultaneous writers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1123)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1124) Our final dilemma is this: when can we actually destroy the removed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1125) element? Remember, a reader might be stepping through this element in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1126) the list right now: if we free this element and the ``next`` pointer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1127) changes, the reader will jump off into garbage and crash. We need to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1128) wait until we know that all the readers who were traversing the list
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1129) when we deleted the element are finished. We use
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1130) call_rcu() to register a callback which will actually
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1131) destroy the object once all pre-existing readers are finished.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1132) Alternatively, synchronize_rcu() may be used to block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1133) until all pre-existing are finished.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1134)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1135) But how does Read Copy Update know when the readers are finished? The
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1136) method is this: firstly, the readers always traverse the list inside
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1137) rcu_read_lock()/rcu_read_unlock() pairs:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1138) these simply disable preemption so the reader won't go to sleep while
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1139) reading the list.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1140)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1141) RCU then waits until every other CPU has slept at least once: since
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1142) readers cannot sleep, we know that any readers which were traversing the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1143) list during the deletion are finished, and the callback is triggered.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1144) The real Read Copy Update code is a little more optimized than this, but
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1145) this is the fundamental idea.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1146)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1147) ::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1148)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1149) --- cache.c.perobjectlock 2003-12-11 17:15:03.000000000 +1100
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1150) +++ cache.c.rcupdate 2003-12-11 17:55:14.000000000 +1100
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1151) @@ -1,15 +1,18 @@
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1152) #include <linux/list.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1153) #include <linux/slab.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1154) #include <linux/string.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1155) +#include <linux/rcupdate.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1156) #include <linux/mutex.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1157) #include <asm/errno.h>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1158)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1159) struct object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1160) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1161) - /* These two protected by cache_lock. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1162) + /* This is protected by RCU */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1163) struct list_head list;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1164) int popularity;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1165)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1166) + struct rcu_head rcu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1167) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1168) atomic_t refcnt;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1169)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1170) /* Doesn't change once created. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1171) @@ -40,7 +43,7 @@
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1172) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1173) struct object *i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1174)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1175) - list_for_each_entry(i, &cache, list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1176) + list_for_each_entry_rcu(i, &cache, list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1177) if (i->id == id) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1178) i->popularity++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1179) return i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1180) @@ -49,19 +52,25 @@
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1181) return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1182) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1183)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1184) +/* Final discard done once we know no readers are looking. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1185) +static void cache_delete_rcu(void *arg)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1186) +{
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1187) + object_put(arg);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1188) +}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1189) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1190) /* Must be holding cache_lock */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1191) static void __cache_delete(struct object *obj)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1192) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1193) BUG_ON(!obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1194) - list_del(&obj->list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1195) - object_put(obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1196) + list_del_rcu(&obj->list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1197) cache_num--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1198) + call_rcu(&obj->rcu, cache_delete_rcu);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1199) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1200)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1201) /* Must be holding cache_lock */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1202) static void __cache_add(struct object *obj)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1203) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1204) - list_add(&obj->list, &cache);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1205) + list_add_rcu(&obj->list, &cache);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1206) if (++cache_num > MAX_CACHE_SIZE) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1207) struct object *i, *outcast = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1208) list_for_each_entry(i, &cache, list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1209) @@ -104,12 +114,11 @@
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1210) struct object *cache_find(int id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1211) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1212) struct object *obj;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1213) - unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1214)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1215) - spin_lock_irqsave(&cache_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1216) + rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1217) obj = __cache_find(id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1218) if (obj)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1219) object_get(obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1220) - spin_unlock_irqrestore(&cache_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1221) + rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1222) return obj;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1223) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1224)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1225) Note that the reader will alter the popularity member in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1226) __cache_find(), and now it doesn't hold a lock. One
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1227) solution would be to make it an ``atomic_t``, but for this usage, we
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1228) don't really care about races: an approximate result is good enough, so
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1229) I didn't change it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1230)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1231) The result is that cache_find() requires no
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1232) synchronization with any other functions, so is almost as fast on SMP as
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1233) it would be on UP.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1234)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1235) There is a further optimization possible here: remember our original
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1236) cache code, where there were no reference counts and the caller simply
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1237) held the lock whenever using the object? This is still possible: if you
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1238) hold the lock, no one can delete the object, so you don't need to get
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1239) and put the reference count.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1240)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1241) Now, because the 'read lock' in RCU is simply disabling preemption, a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1242) caller which always has preemption disabled between calling
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1243) cache_find() and object_put() does not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1244) need to actually get and put the reference count: we could expose
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1245) __cache_find() by making it non-static, and such
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1246) callers could simply call that.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1247)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1248) The benefit here is that the reference count is not written to: the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1249) object is not altered in any way, which is much faster on SMP machines
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1250) due to caching.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1251)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1252) Per-CPU Data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1253) ------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1254)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1255) Another technique for avoiding locking which is used fairly widely is to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1256) duplicate information for each CPU. For example, if you wanted to keep a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1257) count of a common condition, you could use a spin lock and a single
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1258) counter. Nice and simple.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1259)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1260) If that was too slow (it's usually not, but if you've got a really big
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1261) machine to test on and can show that it is), you could instead use a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1262) counter for each CPU, then none of them need an exclusive lock. See
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1263) DEFINE_PER_CPU(), get_cpu_var() and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1264) put_cpu_var() (``include/linux/percpu.h``).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1265)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1266) Of particular use for simple per-cpu counters is the ``local_t`` type,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1267) and the cpu_local_inc() and related functions, which are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1268) more efficient than simple code on some architectures
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1269) (``include/asm/local.h``).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1270)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1271) Note that there is no simple, reliable way of getting an exact value of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1272) such a counter, without introducing more locks. This is not a problem
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1273) for some uses.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1274)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1275) Data Which Mostly Used By An IRQ Handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1276) ----------------------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1277)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1278) If data is always accessed from within the same IRQ handler, you don't
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1279) need a lock at all: the kernel already guarantees that the irq handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1280) will not run simultaneously on multiple CPUs.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1281)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1282) Manfred Spraul points out that you can still do this, even if the data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1283) is very occasionally accessed in user context or softirqs/tasklets. The
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1284) irq handler doesn't use a lock, and all other accesses are done as so::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1285)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1286) spin_lock(&lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1287) disable_irq(irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1288) ...
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1289) enable_irq(irq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1290) spin_unlock(&lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1291)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1292) The disable_irq() prevents the irq handler from running
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1293) (and waits for it to finish if it's currently running on other CPUs).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1294) The spinlock prevents any other accesses happening at the same time.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1295) Naturally, this is slower than just a spin_lock_irq()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1296) call, so it only makes sense if this type of access happens extremely
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1297) rarely.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1298)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1299) What Functions Are Safe To Call From Interrupts?
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1300) ================================================
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1301)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1302) Many functions in the kernel sleep (ie. call schedule()) directly or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1303) indirectly: you can never call them while holding a spinlock, or with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1304) preemption disabled. This also means you need to be in user context:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1305) calling them from an interrupt is illegal.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1306)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1307) Some Functions Which Sleep
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1308) --------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1309)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1310) The most common ones are listed below, but you usually have to read the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1311) code to find out if other calls are safe. If everyone else who calls it
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1312) can sleep, you probably need to be able to sleep, too. In particular,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1313) registration and deregistration functions usually expect to be called
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1314) from user context, and can sleep.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1315)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1316) - Accesses to userspace:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1317)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1318) - copy_from_user()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1319)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1320) - copy_to_user()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1321)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1322) - get_user()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1323)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1324) - put_user()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1325)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1326) - kmalloc(GP_KERNEL) <kmalloc>`
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1327)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1328) - mutex_lock_interruptible() and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1329) mutex_lock()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1330)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1331) There is a mutex_trylock() which does not sleep.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1332) Still, it must not be used inside interrupt context since its
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1333) implementation is not safe for that. mutex_unlock()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1334) will also never sleep. It cannot be used in interrupt context either
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1335) since a mutex must be released by the same task that acquired it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1336)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1337) Some Functions Which Don't Sleep
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1338) --------------------------------
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1339)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1340) Some functions are safe to call from any context, or holding almost any
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1341) lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1342)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1343) - printk()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1344)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1345) - kfree()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1346)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1347) - add_timer() and del_timer()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1348)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1349) Mutex API reference
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1350) ===================
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1351)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1352) .. kernel-doc:: include/linux/mutex.h
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1353) :internal:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1354)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1355) .. kernel-doc:: kernel/locking/mutex.c
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1356) :export:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1357)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1358) Futex API reference
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1359) ===================
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1360)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1361) .. kernel-doc:: kernel/futex.c
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1362) :internal:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1363)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1364) Further reading
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1365) ===============
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1366)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1367) - ``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1368) tutorial in the kernel sources.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1369)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1370) - Unix Systems for Modern Architectures: Symmetric Multiprocessing and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1371) Caching for Kernel Programmers:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1372)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1373) Curt Schimmel's very good introduction to kernel level locking (not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1374) written for Linux, but nearly everything applies). The book is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1375) expensive, but really worth every penny to understand SMP locking.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1376) [ISBN: 0201633388]
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1377)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1378) Thanks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1379) ======
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1380)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1381) Thanks to Telsa Gwynne for DocBooking, neatening and adding style.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1382)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1383) Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1384) Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1385) James Morris, Robert Love, Paul McKenney, John Ashby for proofreading,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1386) correcting, flaming, commenting.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1387)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1388) Thanks to the cabal for having no influence on this document.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1389)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1390) Glossary
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1391) ========
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1392)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1393) preemption
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1394) Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1395) context inside the kernel would not preempt each other (ie. you had that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1396) CPU until you gave it up, except for interrupts). With the addition of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1397) ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1398) priority tasks can "cut in": spinlocks were changed to disable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1399) preemption, even on UP.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1400)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1401) bh
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1402) Bottom Half: for historical reasons, functions with '_bh' in them often
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1403) now refer to any software interrupt, e.g. spin_lock_bh()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1404) blocks any software interrupt on the current CPU. Bottom halves are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1405) deprecated, and will eventually be replaced by tasklets. Only one bottom
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1406) half will be running at any time.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1407)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1408) Hardware Interrupt / Hardware IRQ
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1409) Hardware interrupt request. in_irq() returns true in a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1410) hardware interrupt handler.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1411)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1412) Interrupt Context
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1413) Not user context: processing a hardware irq or software irq. Indicated
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1414) by the in_interrupt() macro returning true.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1415)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1416) SMP
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1417) Symmetric Multi-Processor: kernels compiled for multiple-CPU machines.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1418) (``CONFIG_SMP=y``).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1419)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1420) Software Interrupt / softirq
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1421) Software interrupt handler. in_irq() returns false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1422) in_softirq() returns true. Tasklets and softirqs both
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1423) fall into the category of 'software interrupts'.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1424)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1425) Strictly speaking a softirq is one of up to 32 enumerated software
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1426) interrupts which can run on multiple CPUs at once. Sometimes used to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1427) refer to tasklets as well (ie. all software interrupts).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1428)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1429) tasklet
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1430) A dynamically-registrable software interrupt, which is guaranteed to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1431) only run on one CPU at a time.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1432)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1433) timer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1434) A dynamically-registrable software interrupt, which is run at (or close
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1435) to) a given time. When running, it is just like a tasklet (in fact, they
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1436) are called from the ``TIMER_SOFTIRQ``).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1437)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1438) UP
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1439) Uni-Processor: Non-SMP. (``CONFIG_SMP=n``).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1440)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1441) User Context
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1442) The kernel executing on behalf of a particular process (ie. a system
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1443) call or trap) or kernel thread. You can tell which process with the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1444) ``current`` macro.) Not to be confused with userspace. Can be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1445) interrupted by software or hardware interrupts.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1446)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1447) Userspace
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1448) A process executing its own code outside the kernel.