.. SPDX-License-Identifier: GPL-2.0

=====================================
Asynchronous Transfers/Transforms API
=====================================

.. Contents

  1. INTRODUCTION

  2. GENEALOGY

  3. USAGE
  3.1 General format of the API
  3.2 Supported operations
  3.3 Descriptor management
  3.4 When does the operation execute?
  3.5 When does the operation complete?
  3.6 Constraints
  3.7 Example

  4. DMAENGINE DRIVER DEVELOPER NOTES
  4.1 Conformance points
  4.2 "My application needs exclusive control of hardware channels"

  5. SOURCE

1. Introduction
===============

The async_tx API provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies. It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations. Code
that is written to the API can optimize for asynchronous operation and
the API will fit the chain of operations to the available offload
resources.

2. Genealogy
============

The API was initially designed to offload the memory copy and
xor-parity calculations of the md-raid5 driver using the offload engines
present in the Intel(R) XScale series of I/O processors. It also built
on the 'dmaengine' layer developed for offloading memory copies in the
network stack using Intel(R) I/OAT engines. The following design
features surfaced as a result:

1. implicit synchronous path: users of the API do not need to know if
   the platform they are running on has offload capabilities. The
   operation will be offloaded when an engine is available and carried
   out in software otherwise.
2. cross-channel dependency chains: the API allows a chain of dependent
   operations to be submitted, like xor->copy->xor in the raid5 case.
   The API automatically handles cases where the transition from one
   operation to another implies a hardware channel switch.
3. dmaengine extensions to support multiple clients and operation types
   beyond 'memcpy'.

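The first design feature, the implicit synchronous path, can be sketched
in plain userspace C. The `has_offload_engine()` check below is a
hypothetical stand-in for the dmaengine channel lookup; the point is only
the control flow the caller sees: the same call either queues to hardware
or runs the software fallback inline.

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical availability check standing in for dmaengine channel lookup. */
bool has_offload_engine(void)
{
	return false; /* assume no engine in this sketch */
}

/* Software fallback: xor src_cnt source buffers into dest. */
void sw_xor(unsigned char *dest, unsigned char **srcs, int src_cnt, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		unsigned char v = 0;

		for (int s = 0; s < src_cnt; s++)
			v ^= srcs[s][i];
		dest[i] = v;
	}
}

/* Caller-visible entry point, modeling the implicit synchronous path of
 * async_xor(): offload when possible, otherwise run inline in software. */
void do_xor(unsigned char *dest, unsigned char **srcs, int src_cnt, size_t len)
{
	if (has_offload_engine()) {
		/* a real driver would queue a descriptor on a channel here */
	} else {
		sw_xor(dest, srcs, src_cnt, len);
	}
}
```

Either way the caller's code is identical; only completion timing differs.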
3. Usage
========

3.1 General format of the API
-----------------------------

::

    struct dma_async_tx_descriptor *
    async_<operation>(<op specific parameters>, struct async_submit_ctl *submit)

3.2 Supported operations
------------------------

========  ===================================================================
memcpy    memory copy between a source and a destination buffer
memset    fill a destination buffer with a byte value
xor       xor a series of source buffers and write the result to a
          destination buffer
xor_val   xor a series of source buffers and set a flag if the
          result is zero. The implementation attempts to prevent
          writes to memory
pq        generate the p+q (raid6 syndrome) from a series of source buffers
pq_val    validate that p and/or q buffers are in sync with a given series
          of sources
datap     (raid6_datap_recov) recover a raid6 data block and the p block
          from the given sources
2data     (raid6_2data_recov) recover two raid6 data blocks from the given
          sources
========  ===================================================================

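To make the 'pq' row concrete, here is a minimal userspace model of what
the p+q syndrome computes, byte by byte: p is the xor of all data blocks,
and q is the RAID-6 Reed-Solomon syndrome over GF(2^8) with the 0x11d
polynomial. This is an illustrative sketch of the math only, not the
kernel's optimized implementation (see lib/raid6/).

```c
#include <stddef.h>
#include <stdint.h>

/* Multiply by x (i.e. by 2) in GF(2^8) with the RAID-6 polynomial 0x11d. */
uint8_t gf_mul2(uint8_t v)
{
	return (uint8_t)((v << 1) ^ ((v & 0x80) ? 0x1d : 0));
}

/* Compute the P (xor) and Q (Reed-Solomon) syndrome blocks over src_cnt
 * data blocks, Horner-style from the highest-index disk down, so that
 * q = D0 + g*D1 + g^2*D2 + ... with g = 2. */
void gen_pq(uint8_t *p, uint8_t *q, uint8_t **srcs, int src_cnt, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		uint8_t pv = 0, qv = 0;

		for (int d = src_cnt - 1; d >= 0; d--) {
			pv ^= srcs[d][i];
			qv = gf_mul2(qv) ^ srcs[d][i];
		}
		p[i] = pv;
		q[i] = qv;
	}
}
```

Having two independent syndromes is what lets the 'datap' and '2data'
operations recover from two simultaneous block failures.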
3.3 Descriptor management
-------------------------

The return value is non-NULL and points to a 'descriptor' when the operation
has been queued to execute asynchronously. Descriptors are recycled
resources, under control of the offload engine driver, to be reused as
operations complete. When an application needs to submit a chain of
operations it must guarantee that the descriptor is not automatically recycled
before the dependency is submitted. This requires that all descriptors be
acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor. A descriptor can be acknowledged by one of
the following methods:

1. setting the ASYNC_TX_ACK flag if no child operations are to be submitted
2. submitting an unacknowledged descriptor as a dependency to another
   async_tx call, which implicitly sets the acknowledged state
3. calling async_tx_ack() on the descriptor

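The recycling rule above can be modeled with a toy descriptor that tracks
the two states that matter: whether the hardware completed the operation
and whether the client acknowledged it. The names (`struct desc`,
`try_recycle`) are hypothetical; only the "completed AND acked before
reuse" invariant mirrors the real API.

```c
#include <stdbool.h>

/* Toy descriptor with the two states that gate recycling. */
struct desc {
	bool completed;	/* hardware finished the operation */
	bool acked;	/* client released the descriptor */
	bool in_use;
};

/* Hypothetical driver-side cleanup: a descriptor may only be recycled
 * once the operation completed AND the client acknowledged it, so a
 * dependent operation can still be chained to an unacked descriptor. */
bool try_recycle(struct desc *d)
{
	if (d->completed && d->acked) {
		d->in_use = false;
		return true;
	}
	return false;
}

/* Client-side acknowledgement, analogous to async_tx_ack(). */
void ack(struct desc *d)
{
	d->acked = true;
}
```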
3.4 When does the operation execute?
------------------------------------

Operations do not immediately issue after return from the
async_<operation> call. Offload engine drivers batch operations to
improve performance by reducing the number of mmio cycles needed to
manage the channel. Once a driver-specific threshold is met the driver
automatically issues pending operations. An application can force this
event by calling async_tx_issue_pending_all(). This operates on all
channels since the application has no knowledge of channel to operation
mapping.

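The batching behavior can be sketched with a toy channel: submissions
accumulate until a driver-specific watermark auto-issues them, and a
flush call issues whatever is pending regardless of the watermark. The
struct and function names here are illustrative, not the dmaengine API.

```c
/* Toy channel: operations queue up, then are handed to "hardware" in
 * one batch, mirroring how drivers amortize mmio doorbell writes. */
struct channel {
	int pending;	/* queued but not yet issued */
	int issued;	/* handed to hardware */
	int threshold;	/* driver-specific auto-issue watermark */
};

void submit_op(struct channel *c)
{
	c->pending++;
	if (c->pending >= c->threshold) {	/* driver auto-issues */
		c->issued += c->pending;
		c->pending = 0;
	}
}

/* Analogous to async_tx_issue_pending_all(): flush unconditionally. */
void issue_pending(struct channel *c)
{
	c->issued += c->pending;
	c->pending = 0;
}
```

This is why a short chain can sit unexecuted until the application calls
async_tx_issue_pending_all().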
3.5 When does the operation complete?
-------------------------------------

There are two methods for an application to learn about the completion
of an operation.

1. Call dma_wait_for_async_tx(). This call causes the CPU to spin while
   it polls for the completion of the operation. It handles dependency
   chains and issuing pending operations.
2. Specify a completion callback. The callback routine runs in tasklet
   context if the offload engine driver supports interrupts, or it is
   called in application context if the operation is carried out
   synchronously in software. The callback can be set in the call to
   async_<operation>, or when the application needs to submit a chain of
   unknown length it can use the async_trigger_callback() routine to set a
   completion interrupt/callback at the end of the chain.

3.6 Constraints
---------------

1. Calls to async_<operation> are not permitted in IRQ context. Other
   contexts are permitted provided constraint #2 is not violated.
2. Completion callback routines cannot submit new operations. This
   results in recursion in the synchronous case and spin_locks being
   acquired twice in the asynchronous case.

3.7 Example
-----------

Perform a xor->copy->xor operation where each operation depends on the
result from the previous operation::

    void callback(void *param)
    {
        struct completion *cmp = param;

        complete(cmp);
    }

    void run_xor_copy_xor(struct page **xor_srcs,
                          int xor_src_cnt,
                          struct page *xor_dest,
                          size_t xor_len,
                          struct page *copy_src,
                          struct page *copy_dest,
                          size_t copy_len)
    {
        struct dma_async_tx_descriptor *tx;
        addr_conv_t addr_conv[xor_src_cnt];
        struct async_submit_ctl submit;
        struct completion cmp;

        init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL,
                          addr_conv);
        tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);

        submit.depend_tx = tx;
        tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len, &submit);

        init_completion(&cmp);
        init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST | ASYNC_TX_ACK, tx,
                          callback, &cmp, addr_conv);
        tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);

        async_tx_issue_pending_all();

        wait_for_completion(&cmp);
    }

See include/linux/async_tx.h for more information on the flags. See the
ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
implementation examples.

4. Driver Development Notes
===========================

4.1 Conformance points
----------------------

There are a few conformance points required in dmaengine drivers to
accommodate assumptions made by applications using the async_tx API:

1. Completion callbacks are expected to happen in tasklet context
2. dma_async_tx_descriptor fields are never manipulated in IRQ context
3. Use async_tx_run_dependencies() in the descriptor clean up path to
   handle submission of dependent operations

4.2 "My application needs exclusive control of hardware channels"
-----------------------------------------------------------------

Primarily this requirement arises from cases where a DMA engine driver
is being used to support device-to-memory operations. A channel that is
performing these operations cannot, for many platform-specific reasons,
be shared. For these cases the dma_request_channel() interface is
provided.

The interface is::

    struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                         dma_filter_fn filter_fn,
                                         void *filter_param);

Where dma_filter_fn is defined as::

    typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

When the optional 'filter_fn' parameter is set to NULL,
dma_request_channel() simply returns the first channel that satisfies the
capability mask. Otherwise, when the mask parameter is insufficient for
specifying the necessary channel, the filter_fn routine can be used to
select from the available channels in the system. The filter_fn routine
is called once for each free channel in the system. Upon seeing a
suitable channel, filter_fn returns 'true', which flags that channel to
be the return value from dma_request_channel(). A channel allocated via
this interface is exclusive to the caller until dma_release_channel()
is called.
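The selection logic can be modeled in userspace as a walk over the free
channels applying the caller's predicate. `struct chan`, `request_channel`
and `on_device` below are hypothetical names for illustration; only the
shape of the filter callback follows the dma_filter_fn typedef above.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy channel description; device_id stands in for whatever
 * platform-specific detail the capability mask cannot express. */
struct chan {
	int device_id;
	bool busy;
};

typedef bool (*filter_fn)(struct chan *c, void *param);

/* Model of dma_request_channel(): walk the free channels and return the
 * first one the filter accepts (or the first free one if no filter). */
struct chan *request_channel(struct chan *chans, int n,
			     filter_fn filter, void *param)
{
	for (int i = 0; i < n; i++) {
		if (chans[i].busy)
			continue;
		if (!filter || filter(&chans[i], param)) {
			chans[i].busy = true;	/* now exclusive to the caller */
			return &chans[i];
		}
	}
	return NULL;
}

/* Example filter: only accept a channel on a specific device. */
bool on_device(struct chan *c, void *param)
{
	return c->device_id == *(int *)param;
}
```

Note that once granted, the channel stays unavailable to other callers
until it is released, mirroring the exclusivity described above.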

The DMA_PRIVATE capability flag is used to tag dma devices that should
not be used by the general-purpose allocator. It can be set at
initialization time if it is known that a channel will always be
private. Alternatively, it is set when dma_request_channel() finds an
unused "public" channel.

A couple of caveats to note when implementing a driver and consumer:

1. Once a channel has been privately allocated it will no longer be
   considered by the general-purpose allocator, even after a call to
   dma_release_channel().
2. Since capabilities are specified at the device level, a dma_device
   with multiple channels will either have all channels public, or all
   channels private.

5. Source
=========

include/linux/dmaengine.h:
    core header file for DMA drivers and API users
drivers/dma/dmaengine.c:
    offload engine channel management routines
drivers/dma/:
    location for offload engine drivers
include/linux/async_tx.h:
    core header file for the async_tx API
crypto/async_tx/async_tx.c:
    async_tx interface to dmaengine and common code
crypto/async_tx/async_memcpy.c:
    copy offload
crypto/async_tx/async_xor.c:
    xor and xor zero sum offload