==================================
DMAengine controller documentation
==================================

Hardware Introduction
=====================

Most of the Slave DMA controllers have the same general principles of
operation.

They have a given number of channels to use for the DMA transfers, and
a given number of request lines.

Requests and channels are pretty much orthogonal. Channels can be used
to serve any of the requests. To simplify, channels are the entities
that will be doing the copy, and requests the entities that define
which endpoints are involved.

The request lines actually correspond to physical lines going from the
DMA-eligible devices to the controller itself. Whenever the device
wants to start a transfer, it will assert a DMA request (DRQ) by
asserting that request line.

A very simple DMA controller would only take into account a single
parameter: the transfer size. At each clock cycle, it would transfer a
byte of data from one buffer to another, until the transfer size has
been reached.

That wouldn't work well in the real world, since slave devices might
require a specific number of bits to be transferred in a single
cycle. For example, we may want to transfer as much data as the
physical bus allows to maximize performance when doing a simple
memory copy operation, but our audio device could have a narrower FIFO
that requires data to be written exactly 16 or 24 bits at a time. This
is why most if not all of the DMA controllers can adjust this, using a
parameter called the transfer width.

Moreover, some DMA controllers, whenever the RAM is used as a source
or destination, can group the reads or writes in memory into a buffer,
so instead of having a lot of small memory accesses, which is not
really efficient, you'll get several bigger transfers. This is done
using a parameter called the burst size, which defines how many single
reads/writes it's allowed to do without the controller splitting the
transfer into smaller sub-transfers.

Our theoretical DMA controller would then only be able to do transfers
that involve a single contiguous block of data. However, some of the
transfers we usually have are not contiguous: we want to copy data
from non-contiguous buffers to a contiguous buffer, which is called
scatter-gather.

DMAEngine, at least for mem2dev transfers, requires support for
scatter-gather. So we're left with two cases here: either we have a
quite simple DMA controller that doesn't support it, and we'll have to
implement it in software, or we have a more advanced DMA controller
that implements scatter-gather in hardware.

The latter are usually programmed using a collection of chunks to
transfer, and whenever the transfer is started, the controller will go
over that collection, doing whatever we programmed there.

This collection is usually either a table or a linked list. You will
then push either the address of the table and its number of elements,
or the first item of the list, to one channel of the DMA controller,
and whenever a DRQ is asserted, it will go through the collection to
know where to fetch the data from.

Either way, the format of this collection is completely dependent on
your hardware. Each DMA controller will require a different structure,
but all of them will require, for every chunk, at least the source and
destination addresses, whether it should increment these addresses or
not, and the three parameters we saw earlier: the burst size, the
transfer width and the transfer size.

One last thing: slave devices usually won't assert a DRQ by default,
and you have to enable this in your slave device driver first
whenever you're willing to use DMA.

These were just the general memory-to-memory (also called mem2mem) or
memory-to-device (mem2dev) kinds of transfers. Most devices also
support other kinds of transfers or memory operations that dmaengine
supports; these will be detailed later in this document.

DMA Support in Linux
====================

Historically, DMA controller drivers have been implemented using the
async TX API, to offload operations such as memory copy, XOR,
cryptography, etc., basically any memory to memory operation.

Over time, the need for memory to device transfers arose, and
dmaengine was extended. Nowadays, the async TX API is written as a
layer on top of dmaengine, and acts as a client. Still, dmaengine
accommodates that API in some cases, and made some design choices to
ensure that it stayed compatible.

For more information on the Async TX API, please refer to the relevant
documentation file in Documentation/crypto/async-tx-api.rst.

DMAEngine APIs
==============

``struct dma_device`` Initialization
------------------------------------

Just like any other kernel framework, the whole DMAEngine registration
relies on the driver filling a structure and registering against the
framework. In our case, that structure is dma_device.

The first thing you need to do in your driver is to allocate this
structure. Any of the usual memory allocators will do, but you'll also
need to initialize a few fields in there (a minimal sketch follows the
list below):

- ``channels``: should be initialized as a list using the
  INIT_LIST_HEAD macro for example

- ``src_addr_widths``:
  should contain a bitmask of the supported source transfer widths

- ``dst_addr_widths``:
  should contain a bitmask of the supported destination transfer widths

- ``directions``:
  should contain a bitmask of the supported slave directions
  (i.e. excluding mem2mem transfers)

- ``residue_granularity``:
  granularity of the transfer residue reported to dma_set_residue.
  This can be either:

  - Descriptor:
    your device doesn't support any kind of residue
    reporting. The framework will only know that a particular
    transaction descriptor is done.

  - Segment:
    your device is able to report which chunks have been transferred

  - Burst:
    your device is able to report which bursts have been transferred

- ``dev``: should hold the pointer to the ``struct device`` associated
  with your current driver instance.

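As an illustration only, here is a minimal probe-time sketch for a
hypothetical ``foo_dma`` platform driver. The ``foo_*`` names are made
up for this example; the ``dma_device`` fields, the bus-width and
granularity constants, and ``dma_async_device_register()`` are the
real dmaengine APIs:

.. code-block:: c

    #include <linux/dmaengine.h>
    #include <linux/platform_device.h>

    struct foo_dma {
            struct dma_device ddev;
            /* controller-specific state would live here */
    };

    static int foo_dma_probe(struct platform_device *pdev)
    {
            struct foo_dma *fd;

            fd = devm_kzalloc(&pdev->dev, sizeof(*fd), GFP_KERNEL);
            if (!fd)
                    return -ENOMEM;

            INIT_LIST_HEAD(&fd->ddev.channels);
            fd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                       BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
            fd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                       BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
            fd->ddev.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
            fd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
            fd->ddev.dev = &pdev->dev;

            /* channels would be set up and added to ddev.channels here */

            return dma_async_device_register(&fd->ddev);
    }
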
Supported transaction types
---------------------------

The next thing you need is to set which transaction types your device
(and driver) supports.

Our ``dma_device`` structure has a field called cap_mask that holds the
various types of transactions supported; you need to modify this
mask using the dma_cap_set function, passing flags for the
transaction types you support as arguments.

All those capabilities are defined in the ``dma_transaction_type`` enum,
in ``include/linux/dmaengine.h``

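For example, continuing the hypothetical ``foo_dma`` driver above, a
slave-only controller might advertise its capabilities like this (the
``dma_cap_zero()``/``dma_cap_set()`` helpers are the real ones):

.. code-block:: c

    dma_cap_zero(fd->ddev.cap_mask);
    dma_cap_set(DMA_SLAVE, fd->ddev.cap_mask);
    dma_cap_set(DMA_CYCLIC, fd->ddev.cap_mask);
    dma_cap_set(DMA_PRIVATE, fd->ddev.cap_mask);
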
Currently, the types available are:

- DMA_MEMCPY

  - The device is able to do memory to memory copies

- DMA_XOR

  - The device is able to perform XOR operations on memory areas

  - Used to accelerate XOR intensive tasks, such as RAID5

- DMA_XOR_VAL

  - The device is able to perform parity check using the XOR
    algorithm against a memory buffer.

- DMA_PQ

  - The device is able to perform RAID6 P+Q computations, P being a
    simple XOR, and Q being a Reed-Solomon algorithm.

- DMA_PQ_VAL

  - The device is able to perform parity check using RAID6 P+Q
    algorithm against a memory buffer.

- DMA_INTERRUPT

  - The device is able to trigger a dummy transfer that will
    generate periodic interrupts

  - Used by the client drivers to register a callback that will be
    called on a regular basis through the DMA controller interrupt

- DMA_PRIVATE

  - The device only supports slave transfers, and as such isn't
    available for async transfers.

- DMA_ASYNC_TX

  - Must not be set by the device, and will be set by the framework
    if needed

  - TODO: What is it about?

- DMA_SLAVE

  - The device can handle device to memory transfers, including
    scatter-gather transfers.

  - While in the mem2mem case we had two distinct types to deal
    with a single chunk to copy or a collection of them, here, we
    just have a single transaction type that is supposed to handle
    both.

  - If you want to transfer a single contiguous memory buffer,
    simply build a scatter list with only one item.

- DMA_CYCLIC

  - The device can handle cyclic transfers.

  - A cyclic transfer is a transfer where the chunk collection will
    loop over itself, with the last item pointing to the first.

  - It's usually used for audio transfers, where you want to operate
    on a single ring buffer that you will fill with your audio data.

- DMA_INTERLEAVE

  - The device supports interleaved transfers.

  - These transfers can transfer data from a non-contiguous buffer
    to a non-contiguous buffer, as opposed to DMA_SLAVE, which can
    transfer data from a non-contiguous data set to a contiguous
    destination buffer.

  - It's usually used for 2D content transfers, in which case you
    want to transfer a portion of uncompressed data directly to the
    display to render it.

- DMA_COMPLETION_NO_ORDER

  - The device does not support in order completion.

  - The driver should return DMA_OUT_OF_ORDER for device_tx_status if
    the device is setting this capability.

  - All cookie tracking and checking APIs should be treated as invalid if
    the device exports this capability.

  - At this point, this is incompatible with the polling option of dmatest.

  - If this cap is set, the user is recommended to provide a unique
    identifier for each descriptor sent to the DMA device in order to
    properly track the completion.

- DMA_REPEAT

  - The device supports repeated transfers. A repeated transfer, indicated by
    the DMA_PREP_REPEAT transfer flag, is similar to a cyclic transfer in that
    it gets automatically repeated when it ends, but can additionally be
    replaced by the client.

  - This feature is limited to interleaved transfers; this flag should thus not
    be set if the DMA_INTERLEAVE flag isn't set. This limitation is based on
    the current needs of DMA clients; support for additional transfer types
    should be added in the future if and when the need arises.

- DMA_LOAD_EOT

  - The device supports replacing repeated transfers at end of transfer (EOT)
    by queuing a new transfer with the DMA_PREP_LOAD_EOT flag set.

  - Support for replacing a currently running transfer at another point (such
    as end of burst instead of end of transfer) will be added in the future
    based on DMA clients' needs, if and when the need arises.

These various types will also affect how the source and destination
addresses change over time.

Addresses pointing to RAM are typically incremented (or decremented)
after each transfer. In case of a ring buffer, they may loop
(DMA_CYCLIC). Addresses pointing to a device's register (e.g. a FIFO)
are typically fixed.

Per descriptor metadata support
-------------------------------
Some data movement architectures (DMA controller and peripherals) use
metadata associated with a transaction. The DMA controller's role is to
transfer the payload and the metadata alongside it.
The metadata itself is not used by the DMA engine, but it contains
parameters, keys, vectors, etc. for the peripheral, or from the
peripheral.

The DMAengine framework provides a generic way to facilitate metadata
for descriptors. Depending on the architecture, the DMA driver can
implement either or both of the methods and it is up to the client
driver to choose which one to use.

- DESC_METADATA_CLIENT

  The metadata buffer is allocated/provided by the client driver and it is
  attached (via the dmaengine_desc_attach_metadata() helper) to the
  descriptor.

  From the DMA driver the following is expected for this mode:

  - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM

    The data from the provided metadata buffer should be prepared for the DMA
    controller to be sent alongside of the payload data: either by copying it
    to a hardware descriptor, or into a tightly coupled packet.

  - DMA_DEV_TO_MEM

    On transfer completion the DMA driver must copy the metadata to the
    client-provided metadata buffer before notifying the client about the
    completion. After the transfer completion, DMA drivers must not touch
    the metadata buffer provided by the client.

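  A brief client-side sketch of this mode, assuming ``desc`` was returned
  by a prep call and ``mdata_buf``/``mdata_len`` describe a client-owned
  buffer (those two names are made up; the helper is real):

  .. code-block:: c

      /* attach the client-owned metadata buffer to the descriptor */
      ret = dmaengine_desc_attach_metadata(desc, mdata_buf, mdata_len);
      if (ret)
              goto err; /* e.g. the channel doesn't support client metadata */
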
- DESC_METADATA_ENGINE

  The metadata buffer is allocated/managed by the DMA driver. The client
  driver can ask for the pointer, maximum size and the currently used size
  of the metadata and can directly update or read it.
  dmaengine_desc_get_metadata_ptr() and dmaengine_desc_set_metadata_len()
  are provided as helper functions.

  From the DMA driver the following is expected for this mode:

  - get_metadata_ptr()

    Should return a pointer for the metadata buffer, the maximum size of the
    metadata buffer and the currently used / valid (if any) bytes in the
    buffer.

  - set_metadata_len()

    It is called by the client after it has placed the metadata in the buffer
    to let the DMA driver know the number of valid bytes provided.

Note: since the client will ask for the metadata pointer in the completion
callback (in the DMA_DEV_TO_MEM case) the DMA driver must ensure that the
descriptor is not freed up before the callback is called.

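A client-side sketch of the DESC_METADATA_ENGINE flow for a MEM_TO_DEV
transfer; ``my_len`` is a made-up length, while both helpers below are
the real ones from ``include/linux/dmaengine.h``:

.. code-block:: c

    size_t used, max_len;
    void *mdata;
    int ret;

    mdata = dmaengine_desc_get_metadata_ptr(desc, &used, &max_len);
    if (IS_ERR(mdata))
            return PTR_ERR(mdata);

    /* write up to max_len bytes of metadata for the peripheral ... */

    ret = dmaengine_desc_set_metadata_len(desc, my_len);
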
Device operations
-----------------

Our dma_device structure also requires a few function pointers in
order to implement the actual logic, now that we have described the
operations we can perform.

The functions that we have to fill in there, and hence have to
implement, obviously depend on the transaction types you reported as
supported.

- ``device_alloc_chan_resources``

- ``device_free_chan_resources``

  - These functions will be called whenever a driver calls
    ``dma_request_channel`` or ``dma_release_channel`` for the first/last
    time on the channel associated with that driver.

  - They are in charge of allocating/freeing all the needed
    resources in order for that channel to be useful for your driver.

  - These functions can sleep.

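  Since these functions can sleep, GFP_KERNEL-backed allocators are fine
  here. A minimal sketch, assuming hypothetical ``foo_chan``/``foo_hw_desc``
  types and a ``to_foo_chan()`` container_of helper (``dma_pool_create()``
  and ``dma_pool_destroy()`` are the real APIs from ``linux/dmapool.h``):

  .. code-block:: c

      static int foo_alloc_chan_resources(struct dma_chan *chan)
      {
              struct foo_chan *c = to_foo_chan(chan);

              /* a DMA pool for this channel's hardware descriptors */
              c->pool = dma_pool_create("foo_desc", chan->device->dev,
                                        sizeof(struct foo_hw_desc), 8, 0);
              if (!c->pool)
                      return -ENOMEM;

              return 0;
      }

      static void foo_free_chan_resources(struct dma_chan *chan)
      {
              struct foo_chan *c = to_foo_chan(chan);

              dma_pool_destroy(c->pool);
      }
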
- ``device_prep_dma_*``

  - These functions match the capabilities you registered
    previously.

  - These functions all take the buffer or the scatterlist relevant
    for the transfer being prepared, and should create a hardware
    descriptor or a list of hardware descriptors from it

  - These functions can be called from an interrupt context

  - Any allocation you might do should be using the GFP_NOWAIT
    flag, in order not to potentially sleep, but without depleting
    the emergency pool either.

  - Drivers should try to pre-allocate any memory they might need
    during the transfer setup at probe time to avoid putting too
    much pressure on the nowait allocator.

  - It should return a unique instance of the
    ``dma_async_tx_descriptor`` structure, that further represents this
    particular transfer.

  - This structure can be initialized using the function
    ``dma_async_tx_descriptor_init``.

  - You'll also need to set two fields in this structure:

    - flags:
      TODO: Can it be modified by the driver itself, or
      should it be always the flags passed in the arguments

    - tx_submit: A pointer to a function you have to implement,
      that is supposed to push the current transaction descriptor to a
      pending queue, waiting for issue_pending to be called.

  - In this structure the function pointer callback_result can be
    initialized in order for the submitter to be notified that a
    transaction has completed. In earlier code the function pointer
    callback was used; however, it does not provide any status for the
    transaction and will be deprecated. The result structure, defined as
    ``dmaengine_result``, that is passed in to callback_result has two
    fields:

    - result: This provides the transfer result defined by
      ``dmaengine_tx_result``. Either success or some error condition.

    - residue: Provides the residue bytes of the transfer for those that
      support residue.

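  As an illustration, a skeleton ``device_prep_slave_sg()`` for the
  hypothetical ``foo_dma`` driver could look like the following sketch
  (the ``foo_*`` names are made up; ``dma_async_tx_descriptor_init()``
  is real):

  .. code-block:: c

      static struct dma_async_tx_descriptor *
      foo_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
                        unsigned int sg_len, enum dma_transfer_direction dir,
                        unsigned long flags, void *context)
      {
              struct foo_desc *d;

              /* may run in interrupt context: GFP_NOWAIT, no sleeping */
              d = kzalloc(sizeof(*d), GFP_NOWAIT);
              if (!d)
                      return NULL;

              /* ... translate sgl into hardware descriptors here ... */

              dma_async_tx_descriptor_init(&d->tx, chan);
              d->tx.flags = flags;
              d->tx.tx_submit = foo_tx_submit; /* queues d on a pending list */

              return &d->tx;
      }
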
- ``device_issue_pending``

  - Takes the first transaction descriptor in the pending queue,
    and starts the transfer. Whenever that transfer is done, it
    should move to the next transaction in the list.

  - This function can be called in an interrupt context

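  A minimal sketch, with the hypothetical ``foo_start_transfer()``
  standing in for the actual hardware programming:

  .. code-block:: c

      static void foo_issue_pending(struct dma_chan *chan)
      {
              struct foo_chan *c = to_foo_chan(chan);
              unsigned long flags;

              spin_lock_irqsave(&c->lock, flags);
              if (!c->active && !list_empty(&c->pending)) {
                      c->active = list_first_entry(&c->pending,
                                                   struct foo_desc, node);
                      foo_start_transfer(c, c->active);
              }
              spin_unlock_irqrestore(&c->lock, flags);
      }
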
- ``device_tx_status``

  - Should report the number of bytes left to transfer on the given channel

  - Should only care about the transaction descriptor passed as
    argument, not the currently active one on a given channel

  - The tx_state argument might be NULL

  - Should use dma_set_residue to report it

  - In the case of a cyclic transfer, it should only take into
    account the current period.

  - Should return DMA_OUT_OF_ORDER if the device does not support in order
    completion and is completing the operation out of order.

  - This function can be called in an interrupt context.

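  A common pattern, using the real ``dma_cookie_status()`` and
  ``dma_set_residue()`` helpers; ``foo_get_residue()`` is a made-up
  driver-specific function:

  .. code-block:: c

      static enum dma_status foo_tx_status(struct dma_chan *chan,
                                           dma_cookie_t cookie,
                                           struct dma_tx_state *txstate)
      {
              enum dma_status status;

              status = dma_cookie_status(chan, cookie, txstate);
              if (status == DMA_COMPLETE || !txstate)
                      return status;

              /* bytes left for this particular descriptor */
              dma_set_residue(txstate, foo_get_residue(chan, cookie));

              return status;
      }
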
- device_config

  - Reconfigures the channel with the configuration given as argument

  - This command should NOT be applied synchronously, nor on any
    currently queued transfers, but only on subsequent ones

  - In this case, the function will receive a ``dma_slave_config``
    structure pointer as an argument, that will detail which
    configuration to use.

  - Even though that structure contains a direction field, this
    field is deprecated in favor of the direction argument given to
    the prep_* functions

  - This call is mandatory for slave operations only. This should NOT be
    set or expected to be set for memcpy operations.
    If a driver supports both, it should use this call for slave
    operations only and not for memcpy ones.

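  In the simplest case the configuration is just stashed away for use by
  subsequent ``prep_*`` calls, as in this sketch (``foo_*`` names are
  hypothetical):

  .. code-block:: c

      static int foo_device_config(struct dma_chan *chan,
                                   struct dma_slave_config *cfg)
      {
              struct foo_chan *c = to_foo_chan(chan);

              /* only affects transfers prepared after this call */
              c->cfg = *cfg;

              return 0;
      }
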
- device_pause

  - Pauses a transfer on the channel

  - This command should operate synchronously on the channel,
    pausing right away the work of the given channel

- device_resume

  - Resumes a transfer on the channel

  - This command should operate synchronously on the channel,
    resuming right away the work of the given channel

- device_terminate_all

  - Aborts all the pending and ongoing transfers on the channel

  - For aborted transfers the complete callback should not be called

  - Can be called from atomic context or from within a complete
    callback of a descriptor. Must not sleep. Drivers must be able
    to handle this correctly.

  - Termination may be asynchronous. The driver does not have to
    wait until the currently active transfer has completely stopped.
    See device_synchronize.

- device_synchronize

  - Must synchronize the termination of a channel to the current
    context.

  - Must make sure that memory for previously submitted
    descriptors is no longer accessed by the DMA controller.

  - Must make sure that all complete callbacks for previously
    submitted descriptors have finished running and none are
    scheduled to run.

  - May sleep.

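  For drivers that defer completion callbacks to a tasklet, this often
  boils down to killing that tasklet, as in this sketch (assuming the
  hypothetical ``foo_chan`` owns a ``task`` tasklet):

  .. code-block:: c

      static void foo_synchronize(struct dma_chan *chan)
      {
              struct foo_chan *c = to_foo_chan(chan);

              /* wait for any in-flight completion tasklet to finish */
              tasklet_kill(&c->task);
      }
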
Misc notes
==========

(stuff that should be documented, but we don't really know
where to put it)

``dma_run_dependencies``

- Should be called at the end of an async TX transfer, and can be
  ignored in the slave transfers case.

- Makes sure that dependent operations are run before marking it
  as complete.

dma_cookie_t

- It's a DMA transaction ID that will increment over time.

- Not really relevant any more since the introduction of ``virt-dma``
  that abstracts it away.

DMA_CTRL_ACK

- If clear, the descriptor cannot be reused by the provider until the
  client acknowledges receipt, i.e. has had a chance to establish any
  dependency chains

- This can be acked by invoking async_tx_ack()

- If set, it does not mean the descriptor can be reused

DMA_CTRL_REUSE

- If set, the descriptor can be reused after being completed. It should
  not be freed by the provider if this flag is set.

- The descriptor should be prepared for reuse by invoking
  ``dmaengine_desc_set_reuse()`` which will set DMA_CTRL_REUSE.

- ``dmaengine_desc_set_reuse()`` will succeed only when the channel
  supports reusable descriptors, as exhibited by its capabilities.

- As a consequence, if a device driver wants to skip the
  ``dma_map_sg()`` and ``dma_unmap_sg()`` in between 2 transfers,
  because the DMA'd data wasn't used, it can resubmit the transfer right after
  its completion.

- A descriptor can be freed in a few ways

  - Clearing DMA_CTRL_REUSE by invoking
    ``dmaengine_desc_clear_reuse()`` and submitting it for the last
    transaction

  - Explicitly invoking ``dmaengine_desc_free()``, which can succeed only
    when DMA_CTRL_REUSE is already set

  - Terminating the channel

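A client-side sketch of the reuse flow, with error handling elided;
``dmaengine_desc_set_reuse()``, ``dmaengine_submit()``,
``dma_async_issue_pending()`` and ``dmaengine_desc_free()`` are all
real helpers:

.. code-block:: c

    /* fails if the channel doesn't advertise descriptor reuse */
    if (dmaengine_desc_set_reuse(desc))
            goto no_reuse;

    cookie = dmaengine_submit(desc);
    dma_async_issue_pending(chan);

    /* ... wait for completion, then resubmit the same descriptor ... */
    cookie = dmaengine_submit(desc);
    dma_async_issue_pending(chan);

    /* done with it for good */
    dmaengine_desc_free(desc);
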
- DMA_PREP_CMD

  - If set, the client driver tells the DMA controller that the data passed
    via the DMA API is command data.

  - Interpretation of command data is DMA controller specific. It can be
    used for issuing commands to other peripherals, or for register reads or
    register writes for which the descriptor should be in a different format
    from normal data descriptors.

- DMA_PREP_REPEAT

  - If set, the transfer will be automatically repeated when it ends until a
    new transfer is queued on the same channel with the DMA_PREP_LOAD_EOT flag.
    If the next transfer to be queued on the channel does not have the
    DMA_PREP_LOAD_EOT flag set, the current transfer will be repeated until the
    client terminates all transfers.

  - This flag is only supported if the channel reports the DMA_REPEAT
    capability.

- DMA_PREP_LOAD_EOT

  - If set, the transfer will replace the transfer currently being executed at
    the end of that transfer.

  - This is the default behaviour for non-repeated transfers; specifying
    DMA_PREP_LOAD_EOT for non-repeated transfers will thus make no difference.

  - When using repeated transfers, DMA clients will usually need to set the
    DMA_PREP_LOAD_EOT flag on all transfers, otherwise the channel will keep
    repeating the last repeated transfer and ignore the new transfers being
    queued. Failure to set DMA_PREP_LOAD_EOT will appear as if the channel was
    stuck on the previous transfer.

  - This flag is only supported if the channel reports the DMA_LOAD_EOT
    capability.

General Design Notes
====================

Most of the DMAEngine drivers you'll see are based on a similar design
that handles the end-of-transfer interrupts in the handler, but defers
most work to a tasklet, including the start of a new transfer whenever
the previous transfer ended.

This is a rather inefficient design though, because the inter-transfer
latency will be not only the interrupt latency, but also the
scheduling latency of the tasklet. This leaves the channel idle in
between, which slows down the global transfer rate.

You should avoid this kind of practice: instead of electing a new
transfer in your tasklet, move that part to the interrupt handler in
order to have a shorter idle window (that we can't really avoid
anyway), as in the sketch below.

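A sketch of this approach; all ``foo_*`` names are hypothetical, and
bookkeeping for the completed descriptor is elided, since only the
scheduling pattern matters here:

.. code-block:: c

    static irqreturn_t foo_dma_irq(int irq, void *data)
    {
            struct foo_chan *c = data;

            spin_lock(&c->lock);

            /* elect and start the next transfer right away ... */
            c->active = list_first_entry_or_null(&c->pending,
                                                 struct foo_desc, node);
            if (c->active)
                    foo_start_transfer(c, c->active);

            spin_unlock(&c->lock);

            /* ... and defer only the completion callbacks to the tasklet */
            tasklet_schedule(&c->task);

            return IRQ_HANDLED;
    }
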
Glossary
========

- Burst: A number of consecutive read or write operations that
  can be queued to buffers before being flushed to memory.

- Chunk: A contiguous collection of bursts

- Transfer: A collection of chunks (be it contiguous or not)