====================
DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel.com>

.. note:: For DMA Engine usage in async_tx please see:
          ``Documentation/crypto/async-tx-api.rst``


Below is a guide for device driver writers on how to use the Slave-DMA API of
the DMA Engine. This is applicable only for slave DMA usage.

DMA usage
=========

The slave DMA usage consists of the following steps:

- Allocate a DMA slave channel

- Set slave and controller specific parameters

- Get a descriptor for transaction

- Submit the transaction

- Issue pending requests and wait for callback notification

The details of these operations are:

1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context:
   client drivers typically need a channel from a particular DMA
   controller only, and in some cases even a specific channel is desired.
   The dma_request_chan() API is used to request a channel.

   Interface:

   .. code-block:: c

      struct dma_chan *dma_request_chan(struct device *dev, const char *name);

   This will find and return the ``name`` DMA channel associated with the
   'dev' device. The association is done via DT, ACPI or a board-file based
   dma_slave_map matching table.

   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.
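
   Putting this together, the request path in a client driver's probe
   function might look like the sketch below. The channel name "rx" is
   only an example; it must match whatever name the DT, ACPI or board
   file mapping uses:

   .. code-block:: c

      struct dma_chan *chan;

      /* "rx" is a placeholder for the name used in the DT/ACPI/board mapping */
      chan = dma_request_chan(dev, "rx");
      if (IS_ERR(chan))
              return PTR_ERR(chan);   /* may be -EPROBE_DEFER */

      /* ... use the channel ... */

      dma_release_channel(chan);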

2. Set slave and controller specific parameters

   The next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   for the peripheral.

   If some DMA controllers have more parameters to be sent, they
   should try to embed struct dma_slave_config in their controller
   specific structure. That gives flexibility to clients to pass more
   parameters, if required.

   Interface:

   .. code-block:: c

      int dmaengine_slave_config(struct dma_chan *chan,
                                 struct dma_slave_config *config)

   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away as it duplicates the
   direction given in the prepare call.
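
   As an illustration, a client setting up a memory-to-device transfer
   might fill in the structure as follows. The FIFO address, bus width
   and burst size are placeholders; the real values are peripheral
   specific:

   .. code-block:: c

      struct dma_slave_config cfg;
      int ret;

      memset(&cfg, 0, sizeof(cfg));
      cfg.dst_addr = fifo_phys_addr;          /* peripheral FIFO address */
      cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
      cfg.dst_maxburst = 8;

      ret = dmaengine_slave_config(chan, &cfg);
      if (ret)
              return ret;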

3. Get a descriptor for transaction

   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   - slave_sg: DMA a list of scatter gather buffers from/to a peripheral

   - dma_cyclic: Perform a cyclic DMA operation from/to a peripheral till the
     operation is explicitly stopped.

   - interleaved_dma: This is common to Slave as well as M2M clients. For slave
     usage the address of the device's FIFO may already be known to the driver.
     Various types of operations can be expressed by setting the
     appropriate values in the 'dma_interleaved_template' members. Cyclic
     interleaved DMA transfers are also possible if supported by the channel,
     by setting the DMA_PREP_REPEAT transfer flag.

   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.

   Interface:

   .. code-block:: c

      struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
              struct dma_chan *chan, struct scatterlist *sgl,
              unsigned int sg_len, enum dma_transfer_direction direction,
              unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
              struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
              size_t period_len, enum dma_transfer_direction direction,
              unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
              struct dma_chan *chan, struct dma_interleaved_template *xt,
              unsigned long flags);

   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling dmaengine_prep_slave_sg(), and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device.
   If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
   called using the DMA struct device, too.
   So, normal setup should look like this:

   .. code-block:: c

      nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, direction);
      if (nr_sg == 0)
              /* error */

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);

   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted. Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission so it is important that these two operations are closely
   paired.
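
   For example, pairing the preparation with callback setup and
   submission (my_dma_complete and my_data are hypothetical client
   names):

   .. code-block:: c

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);
      if (!desc)
              goto err_unmap;                 /* preparation failed */

      desc->callback = my_dma_complete;       /* client completion routine */
      desc->callback_param = my_data;

      cookie = dmaengine_submit(desc);
      if (dma_submit_error(cookie))
              goto err_unmap;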

   .. note::

      Although the async_tx API specifies that completion callback
      routines cannot submit any new operations, this is not the
      case for slave/cyclic DMA.

      For slave DMA, the subsequent transaction may not be available
      for submission prior to the callback function being invoked, so
      slave DMA callbacks are permitted to prepare and submit a new
      transaction.

      For cyclic DMA, a callback function may wish to terminate the
      DMA via dmaengine_terminate_async().

      Therefore, it is important that DMA engine drivers drop any
      locks before calling the callback function which may cause a
      deadlock.

      Note that callbacks will always be invoked from the DMA
      engine's tasklet, never from interrupt context.

   **Optional: per descriptor metadata**

   DMAengine provides two ways for metadata support.

   DESC_METADATA_CLIENT

      The metadata buffer is allocated/provided by the client driver and it is
      attached to the descriptor.

   .. code-block:: c

      int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc,
                                         void *data, size_t len);

   DESC_METADATA_ENGINE

      The metadata buffer is allocated/managed by the DMA driver. The client
      driver can ask for the pointer, maximum size and the currently used size
      of the metadata and can directly update or read it.

      Because the DMA driver manages the memory area containing the metadata,
      clients must make sure that they do not try to access or get the pointer
      after their transfer completion callback has run for the descriptor.
      If no completion callback has been defined for the transfer, then the
      metadata must not be accessed after issue_pending.
      In other words: if the aim is to read back metadata after the transfer is
      completed, then the client must use a completion callback.

   .. code-block:: c

      void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
              size_t *payload_len, size_t *max_len);

      int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc,
              size_t payload_len);

   Client drivers can query if a given mode is supported with:

   .. code-block:: c

      bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan,
              enum dma_desc_metadata_mode mode);

   Depending on the mode used, client drivers must follow a different flow.

   DESC_METADATA_CLIENT

      - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:

        1. prepare the descriptor (dmaengine_prep_*)
           construct the metadata in the client's buffer
        2. use dmaengine_desc_attach_metadata() to attach the buffer to the
           descriptor
        3. submit the transfer

      - DMA_DEV_TO_MEM:

        1. prepare the descriptor (dmaengine_prep_*)
        2. use dmaengine_desc_attach_metadata() to attach the buffer to the
           descriptor
        3. submit the transfer
        4. when the transfer is completed, the metadata should be available in
           the attached buffer
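
   The DMA_MEM_TO_DEV steps above could be sketched as follows, where
   md_buf is a client-owned buffer already filled with metadata:

   .. code-block:: c

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, DMA_MEM_TO_DEV, flags);
      if (!desc)
              return -ENOMEM;

      /* attach the client's pre-built metadata to the descriptor */
      ret = dmaengine_desc_attach_metadata(desc, md_buf, md_len);
      if (ret)
              return ret;

      cookie = dmaengine_submit(desc);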

   DESC_METADATA_ENGINE

      - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:

        1. prepare the descriptor (dmaengine_prep_*)
        2. use dmaengine_desc_get_metadata_ptr() to get the pointer to the
           engine's metadata area
        3. update the metadata at the pointer
        4. use dmaengine_desc_set_metadata_len() to tell the DMA engine the
           amount of data the client has placed into the metadata buffer
        5. submit the transfer

      - DMA_DEV_TO_MEM:

        1. prepare the descriptor (dmaengine_prep_*)
        2. submit the transfer
        3. on transfer completion, use dmaengine_desc_get_metadata_ptr() to get
           the pointer to the engine's metadata area
        4. read out the metadata from the pointer

   .. note::

      When DESC_METADATA_ENGINE mode is used the metadata area for the
      descriptor is no longer valid after the transfer has been completed
      (valid up to the point when the completion callback returns if used).

      Mixed use of DESC_METADATA_CLIENT / DESC_METADATA_ENGINE is not allowed,
      client drivers must use either of the modes per descriptor.
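
   The DESC_METADATA_ENGINE DMA_MEM_TO_DEV flow might look like the
   sketch below, with my_metadata and my_md_len standing in for the
   client's data:

   .. code-block:: c

      size_t payload_len, max_len;
      void *md;

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, DMA_MEM_TO_DEV, flags);
      if (!desc)
              return -ENOMEM;

      md = dmaengine_desc_get_metadata_ptr(desc, &payload_len, &max_len);
      if (IS_ERR(md))
              return PTR_ERR(md);

      /* write no more than max_len bytes into the engine's area */
      memcpy(md, my_metadata, my_md_len);
      dmaengine_desc_set_metadata_len(desc, my_md_len);

      cookie = dmaengine_submit(desc);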
4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.

   Interface:

   .. code-block:: c

      dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

   This returns a cookie that can be used to check the progress of DMA engine
   activity via other DMA engine calls not covered in this document.

   dmaengine_submit() will not start the DMA operation, it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending.

   .. note::

      After calling ``dmaengine_submit()`` the submitted transfer descriptor
      (``struct dma_async_tx_descriptor``) belongs to the DMA engine.
      Consequently, the client must consider the pointer to that descriptor
      invalid.

5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction in
   the queue is started and subsequent ones are queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver
   completion callback routine for notification, if set.

   Interface:

   .. code-block:: c

      void dma_async_issue_pending(struct dma_chan *chan);
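
   A common pattern is to issue the pending request and then sleep until
   the completion callback fires, for example via a struct completion
   (simplified; real drivers usually add a timeout with
   wait_for_completion_timeout(), and my_dev is a hypothetical driver
   structure):

   .. code-block:: c

      /* completion callback set on the descriptor before submission */
      static void my_dma_complete(void *param)
      {
              struct my_dev *md = param;

              complete(&md->dma_done);
      }

      /* submission path */
      reinit_completion(&md->dma_done);
      dma_async_issue_pending(chan);
      wait_for_completion(&md->dma_done);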

Further APIs
------------

1. Terminate APIs

   .. code-block:: c

      int dmaengine_terminate_sync(struct dma_chan *chan)
      int dmaengine_terminate_async(struct dma_chan *chan)
      int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

   Two variants of this function are available.

   dmaengine_terminate_async() might not wait until the DMA has been fully
   stopped or until any running complete callbacks have finished. But it is
   possible to call dmaengine_terminate_async() from atomic context or from
   within a complete callback. dmaengine_synchronize() must be called before it
   is safe to free the memory accessed by the DMA transfer or free resources
   accessed from within the complete callback.

   dmaengine_terminate_sync() will wait for the transfer and any running
   complete callbacks to finish before it returns. But the function must not be
   called from atomic context or from within a complete callback.

   dmaengine_terminate_all() is deprecated and should not be used in new code.

2. Pause API

   .. code-block:: c

      int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. Resume API

   .. code-block:: c

      int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.

4. Check Txn complete

   .. code-block:: c

      enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
              dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from dmaengine_submit() to check for
   completion of a specific DMA transaction.
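
   For instance, after stopping the channel a client could check whether
   a previously submitted transaction had finished:

   .. code-block:: c

      dma_cookie_t last, used;
      enum dma_status status;

      status = dma_async_is_tx_complete(chan, cookie, &last, &used);
      if (status == DMA_COMPLETE)
              dev_dbg(dev, "transaction %d finished\n", cookie);

      /* the same answer can be derived from the raw cookie values */
      status = dma_async_is_complete(cookie, last, used);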

   .. note::

      Not all DMA engine drivers can return reliable information for
      a running DMA channel. It is recommended that DMA engine users
      pause or stop (via dmaengine_terminate_sync()) the channel before
      using this API.

5. Synchronize termination API

   .. code-block:: c

      void dmaengine_synchronize(struct dma_chan *chan)

   Synchronize the termination of the DMA channel to the current context.

   This function should be used after dmaengine_terminate_async() to synchronize
   the termination of the DMA channel to the current context. The function will
   wait for the transfer and any running complete callbacks to finish before it
   returns.

   If dmaengine_terminate_async() is used to stop the DMA channel this function
   must be called before it is safe to free memory accessed by previously
   submitted descriptors or to free any resources accessed within the complete
   callback of previously submitted descriptors.

   The behavior of this function is undefined if dma_async_issue_pending() has
   been called between dmaengine_terminate_async() and this function.
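
   A teardown path using the asynchronous variant can thus be sketched as
   follows (dmaengine_terminate_sync() combines both steps when calling
   from process context is possible):

   .. code-block:: c

      /* may be called from atomic context or a completion callback */
      dmaengine_terminate_async(chan);

      /* later, from process context, before freeing DMA'd memory */
      dmaengine_synchronize(chan);
      dma_unmap_sg(chan->device->dev, sgl, sg_len, direction);
      dma_release_channel(chan);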