Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

===========================
Hardware Spinlock Framework
===========================

Introduction
============

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors, which otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support is
needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on simple data structures that are shared between
the remote processors, and access to them is synchronized using the hwspinlock
module (the remote processor directly places new messages in this shared data
structure).

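Conceptually, each lock in such a module is a test-and-set cell that every
processor can reach. The userspace sketch below (an illustration only: the
``hw_trylock()``/``hw_unlock()`` names are invented here and are not part of
the kernel API) models that cell with a C11 ``atomic_flag``:

```c
#include <stdatomic.h>

/* Stand-in for one hardware lock cell; in real hardware this lives in a
 * dedicated peripheral register, not in RAM. */
static atomic_flag hw_lock_cell = ATOMIC_FLAG_INIT;

/* Returns 0 when the cell was free and is now owned by the caller, or
 * -1 when another "core" already holds it. */
static int hw_trylock(void)
{
	return atomic_flag_test_and_set(&hw_lock_cell) ? -1 : 0;
}

/* Clears the cell.  As with a real hwspinlock, nothing stops a buggy
 * caller from releasing a lock it does not hold. */
static void hw_unlock(void)
{
	atomic_flag_clear(&hw_lock_cell);
}
```

What the hardware adds over this model is that the test-and-set is performed
by the lock module itself, so heterogeneous cores can synchronize even when
they cannot perform atomic operations on each other's memory.
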
A common hwspinlock interface makes it possible to have generic,
platform-independent drivers.

User API
========

::

  struct hwspinlock *hwspin_lock_request(void);

Dynamically assign an hwspinlock and return its address, or NULL
in case an unused hwspinlock isn't available. Users of this
API will usually want to communicate the lock's id to the remote core
before it can be used to achieve synchronization.

Should be called from a process context (might sleep).

::

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);

Assign a specific hwspinlock id and return its address, or NULL
if that hwspinlock is already in use. Usually board code will
be calling this function in order to reserve specific hwspinlock
ids for predefined purposes.

Should be called from a process context (might sleep).

::

  int of_hwspin_lock_get_id(struct device_node *np, int index);

Retrieve the global lock id for an OF phandle-based specific lock.
This function provides a means for DT users of a hwspinlock module
to get the global lock id of a specific hwspinlock, so that it can
be requested using the normal hwspin_lock_request_specific() API.

The function returns a lock id number on success, -EPROBE_DEFER if
the hwspinlock device is not yet registered with the core, or other
error values.

Should be called from a process context (might sleep).

::

  int hwspin_lock_free(struct hwspinlock *hwlock);

Free a previously-assigned hwspinlock; returns 0 on success, or an
appropriate error code on failure (e.g. -EINVAL if the hwspinlock
is already free).

Should be called from a process context (might sleep).

::

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled so
the caller must not sleep, and is advised to release the hwspinlock as
soon as possible, in order to minimize remote cores polling on the
hardware interconnect.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

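The busy-wait-with-deadline behaviour described above can be modelled in a
few lines of userspace C (a sketch only: the ``model_`` names are invented,
and a poll counter stands in for the kernel's msec deadline so the model
stays deterministic):

```c
#include <stdatomic.h>

#define MODEL_ETIMEDOUT 110

/* Model of the hwspin_lock_timeout() busy-wait policy: repeatedly try
 * to grab the lock, and give up with -ETIMEDOUT once the deadline
 * passes.  A poll counter replaces the kernel's jiffies-based deadline. */
static int model_lock_timeout(atomic_flag *cell, unsigned int max_polls)
{
	for (unsigned int i = 0; i < max_polls; i++) {
		if (!atomic_flag_test_and_set(cell))
			return 0;               /* lock acquired */
		/* the real code would relax the CPU between attempts */
	}
	return -MODEL_ETIMEDOUT;                /* still busy: give up */
}
```

In the kernel the loop additionally compares against a time-based expiry
rather than counting iterations, but the success/timeout contract is the
same.
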
::

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption and the local
interrupts are disabled, so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
The function will never sleep.

::

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
				  unsigned long *flags);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.
Upon a successful return from this function, preemption is disabled,
local interrupts are disabled and their previous state is saved in the
given flags placeholder. The caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

::

  int hwspin_lock_timeout_raw(struct hwspinlock *hwlock, unsigned int timeout);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

Caution: this variant leaves preemption untouched, so callers must
serialize access to the hardware lock with a software mutex or spinlock
of their own to avoid deadlock. In return, this allows the caller to
perform time-consuming or sleepable operations while holding the
hardware lock.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

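The caution above can be sketched in userspace: a process-local mutex
serializes local contenders before the hardware cell (modelled here by an
``atomic_flag``) is touched, so two local tasks can never deadlock spinning
against each other, while sleepable work remains possible under the hardware
lock. All names are invented for this illustration:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Software lock protecting the hardware-lock acquisition path, as the
 * caution above requires for the _raw variants. */
static pthread_mutex_t local_gate = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the hardware lock cell. */
static atomic_flag hw_cell = ATOMIC_FLAG_INIT;

static int raw_lock_protected(void)
{
	pthread_mutex_lock(&local_gate);    /* serialize local contenders */
	while (atomic_flag_test_and_set(&hw_cell))
		;                           /* spin only against remote cores */
	return 0;
}

static void raw_unlock_protected(void)
{
	atomic_flag_clear(&hw_cell);        /* release the hardware lock */
	pthread_mutex_unlock(&local_gate);  /* then admit local waiters */
}
```

Because the outer software lock may sleep, this pattern only fits contexts
that are themselves allowed to sleep, which is exactly where the _raw
variants are meant to be used.
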
::

  int hwspin_lock_timeout_in_atomic(struct hwspinlock *hwlock, unsigned int to);

Lock a previously-assigned hwspinlock with a timeout limit (specified in
msecs). If the hwspinlock is already taken, the function will busy loop
waiting for it to be released, but give up when the timeout elapses.

This function shall be called only from an atomic context and the timeout
value shall not exceed a few msecs.

Returns 0 when successful and an appropriate error code otherwise (most
notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).

The function will never sleep.

::

  int hwspin_trylock(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled so the
caller must not sleep, and is advised to release the hwspinlock as soon as
possible, in order to minimize remote cores polling on the hardware
interconnect.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_irq(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption and the local
interrupts are disabled so the caller must not sleep, and is advised to
release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).

The function will never sleep.

::

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Upon a successful return from this function, preemption is disabled,
the local interrupts are disabled and their previous state is saved
in the given flags placeholder. The caller must not sleep, and is advised
to release the hwspinlock as soon as possible.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_raw(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

Caution: this variant leaves preemption untouched, so callers must
serialize access to the hardware lock with a software mutex or spinlock
of their own to avoid deadlock. In return, this allows the caller to
perform time-consuming or sleepable operations while holding the
hardware lock.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  int hwspin_trylock_in_atomic(struct hwspinlock *hwlock);

Attempt to lock a previously-assigned hwspinlock, but immediately fail if
it is already taken.

This function shall be called only from an atomic context.

Returns 0 on success and an appropriate error code otherwise (most
notably -EBUSY if the hwspinlock was already taken).
The function will never sleep.

::

  void hwspin_unlock(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock. Always succeeds, and can be called
from any context (the function never sleeps).

.. note::

  code should **never** unlock an hwspinlock which is already unlocked
  (there is no protection against this).

::

  void hwspin_unlock_irq(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock and enable local interrupts.
The caller should **never** unlock an hwspinlock which is already unlocked.

Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption and local
interrupts are enabled. This function will never sleep.

::

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
Upon a successful return from this function, preemption is reenabled,
and the state of the local interrupts is restored to the state saved at
the given flags. This function will never sleep.

::

  void hwspin_unlock_raw(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  void hwspin_unlock_in_atomic(struct hwspinlock *hwlock);

Unlock a previously-locked hwspinlock.

The caller should **never** unlock an hwspinlock which is already unlocked.
Doing so is considered a bug (there is no protection against this).
This function will never sleep.

::

  int hwspin_lock_get_id(struct hwspinlock *hwlock);

Retrieve the id number of a given hwspinlock. This is needed when an
hwspinlock is dynamically assigned: before it can be used to achieve
mutual exclusion with a remote cpu, the id number should be communicated
to the remote task with which we want to synchronize.

Returns the hwspinlock id number, or -EINVAL if hwlock is null.

Typical usage
=============

::

	#include <linux/hwspinlock.h>
	#include <linux/err.h>

	int hwspinlock_example1(void)
	{
		struct hwspinlock *hwlock;
		int id, ret;

		/* dynamically assign a hwspinlock */
		hwlock = hwspin_lock_request();
		if (!hwlock)
			...

		id = hwspin_lock_get_id(hwlock);
		/* probably need to communicate id to a remote processor now */

		/* take the lock, spin for 1 sec if it's already taken */
		ret = hwspin_lock_timeout(hwlock, 1000);
		if (ret)
			...

		/*
		 * we took the lock, do our thing now, but do NOT sleep
		 */

		/* release the lock */
		hwspin_unlock(hwlock);

		/* free the lock */
		ret = hwspin_lock_free(hwlock);
		if (ret)
			...

		return ret;
	}

	int hwspinlock_example2(void)
	{
		struct hwspinlock *hwlock;
		int ret;

		/*
		 * assign a specific hwspinlock id - this should be called early
		 * by board init code.
		 */
		hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
		if (!hwlock)
			...

		/* try to take it, but don't spin on it */
		ret = hwspin_trylock(hwlock);
		if (ret) {
			pr_info("lock is already taken\n");
			return -EBUSY;
		}

		/*
		 * we took the lock, do our thing now, but do NOT sleep
		 */

		/* release the lock */
		hwspin_unlock(hwlock);

		/* free the lock */
		ret = hwspin_lock_free(hwlock);
		if (ret)
			...

		return ret;
	}


^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 392) API for implementors
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 393) ====================
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 394) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 395) ::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 396) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 397)   int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 398) 		const struct hwspinlock_ops *ops, int base_id, int num_locks);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 399) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 400) To be called from the underlying platform-specific implementation, in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 401) order to register a new hwspinlock device (which is usually a bank of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 402) numerous locks). Should be called from a process context (this function
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 403) might sleep).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 404) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 405) Returns 0 on success, or appropriate error code on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 406) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 407) ::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 408) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 409)   int hwspin_lock_unregister(struct hwspinlock_device *bank);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 410) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 411) To be called from the underlying vendor-specific implementation, in order
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 412) to unregister an hwspinlock device (which is usually a bank of numerous
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 413) locks).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 414) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 415) Should be called from a process context (this function might sleep).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 416) 
Returns 0 on success, or an appropriate error code on failure (e.g.
-EBUSY if one of the hwspinlocks is still in use).
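
As a rough sketch, a platform driver would typically register its bank in
its probe() routine and unregister it in remove(). The names below
(my_hwlock_ops, MY_NUM_LOCKS) and the base_id of 0 are illustrative
assumptions, not part of the framework::

	/* hypothetical platform driver sketch */
	static int my_hwspinlock_probe(struct platform_device *pdev)
	{
		struct hwspinlock_device *bank;
		int ret;

		/* allocate the bank together with its trailing lock array */
		bank = devm_kzalloc(&pdev->dev,
				    struct_size(bank, lock, MY_NUM_LOCKS),
				    GFP_KERNEL);
		if (!bank)
			return -ENOMEM;

		platform_set_drvdata(pdev, bank);

		/* may sleep, so this must run in process context */
		ret = hwspin_lock_register(bank, &pdev->dev, &my_hwlock_ops,
					   0, MY_NUM_LOCKS);
		if (ret)
			dev_err(&pdev->dev, "lock registration failed: %d\n", ret);

		return ret;
	}

	static int my_hwspinlock_remove(struct platform_device *pdev)
	{
		struct hwspinlock_device *bank = platform_get_drvdata(pdev);

		/* fails (e.g. -EBUSY) if any lock in the bank is still in use */
		return hwspin_lock_unregister(bank);
	}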
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 419) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 420) Important structs
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 421) =================
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 422) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 423) struct hwspinlock_device is a device which usually contains a bank
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 424) of hardware locks. It is registered by the underlying hwspinlock
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 425) implementation using the hwspin_lock_register() API.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 426) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 427) ::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 428) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 429) 	/**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 430) 	* struct hwspinlock_device - a device which usually spans numerous hwspinlocks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 431) 	* @dev: underlying device, will be used to invoke runtime PM api
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 432) 	* @ops: platform-specific hwspinlock handlers
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 433) 	* @base_id: id index of the first lock in this device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 434) 	* @num_locks: number of locks in this device
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 435) 	* @lock: dynamically allocated array of 'struct hwspinlock'
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 436) 	*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 437) 	struct hwspinlock_device {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 438) 		struct device *dev;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 439) 		const struct hwspinlock_ops *ops;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 440) 		int base_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 441) 		int num_locks;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 442) 		struct hwspinlock lock[0];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 443) 	};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 444) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 445) struct hwspinlock_device contains an array of hwspinlock structs, each
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 446) of which represents a single hardware lock::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 447) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 448) 	/**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 449) 	* struct hwspinlock - this struct represents a single hwspinlock instance
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 450) 	* @bank: the hwspinlock_device structure which owns this lock
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 451) 	* @lock: initialized and used by hwspinlock core
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 452) 	* @priv: private data, owned by the underlying platform-specific hwspinlock drv
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 453) 	*/
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 454) 	struct hwspinlock {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 455) 		struct hwspinlock_device *bank;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 456) 		spinlock_t lock;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 457) 		void *priv;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 458) 	};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 459) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 460) When registering a bank of locks, the hwspinlock driver only needs to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 461) set the priv members of the locks. The rest of the members are set and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 462) initialized by the hwspinlock core itself.
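
For instance, a driver might only point each lock's priv member at its
per-lock state before registering the bank. The io_base pointer and the
stride of one register per lock below are assumptions about a
hypothetical controller::

	/* illustrative only: one 32-bit MMIO register per lock */
	for (i = 0; i < MY_NUM_LOCKS; i++)
		bank->lock[i].priv = io_base + i * 0x4;

	/* bank->lock[i].bank and bank->lock[i].lock are left untouched;
	 * the hwspinlock core initializes them during registration */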
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 463) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 464) Implementation callbacks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 465) ========================
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 466) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 467) There are three possible callbacks defined in 'struct hwspinlock_ops'::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 468) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 469) 	struct hwspinlock_ops {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 470) 		int (*trylock)(struct hwspinlock *lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 471) 		void (*unlock)(struct hwspinlock *lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 472) 		void (*relax)(struct hwspinlock *lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 473) 	};
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 474) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 475) The first two callbacks are mandatory:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 476) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 477) The ->trylock() callback should make a single attempt to take the lock, and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 478) return 0 on failure and 1 on success. This callback may **not** sleep.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 479) 
The ->unlock() callback releases the lock. It always succeeds, and it, too,
may **not** sleep.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 482) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 483) The ->relax() callback is optional. It is called by hwspinlock core while
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 484) spinning on a lock, and can be used by the underlying implementation to force
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 485) a delay between two successive invocations of ->trylock(). It may **not** sleep.
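
A minimal sketch of these callbacks, assuming a hypothetical controller
where reading a lock's register returns 1 if the read itself acquired the
lock, and writing 0 releases it (the register semantics here are
invented for illustration)::

	static int my_hwspinlock_trylock(struct hwspinlock *lock)
	{
		void __iomem *addr = lock->priv;

		/* hypothetical semantics: the read grabs the lock on success */
		return readl(addr) == 1;
	}

	static void my_hwspinlock_unlock(struct hwspinlock *lock)
	{
		void __iomem *addr = lock->priv;

		/* writing 0 releases the lock; cannot fail, never sleeps */
		writel(0, addr);
	}

	static void my_hwspinlock_relax(struct hwspinlock *lock)
	{
		/* short busy-wait between ->trylock() attempts; must not sleep */
		ndelay(50);
	}

	static const struct hwspinlock_ops my_hwlock_ops = {
		.trylock	= my_hwspinlock_trylock,
		.unlock		= my_hwspinlock_unlock,
		.relax		= my_hwspinlock_relax,
	};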