Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

/* SPDX-License-Identifier: GPL-2.0 */
/******************************************************************************
 * blkif.h
 *
 * Unified block-device I/O interface for Xen guest OSes.
 *
 * Copyright (c) 2003-2004, Keir Fraser
 */

#ifndef __XEN_PUBLIC_IO_BLKIF_H__
#define __XEN_PUBLIC_IO_BLKIF_H__

#include <xen/interface/io/ring.h>
#include <xen/interface/grant_table.h>

/*
 * Front->back notifications: When enqueuing a new request, sending a
 * notification can be made conditional on req_event (i.e., the generic
 * hold-off mechanism provided by the ring macros). Backends must set
 * req_event appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()).
 *
 * Back->front notifications: When enqueuing a new response, sending a
 * notification can be made conditional on rsp_event (i.e., the generic
 * hold-off mechanism provided by the ring macros). Frontends must set
 * rsp_event appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()).
 */
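
/*
 * Illustrative only (not compiled): a minimal sketch of the frontend
 * side of the hold-off pattern above, assuming a struct blkif_front_ring
 * "ring" (generated by DEFINE_RING_TYPES() below) and an event-channel
 * IRQ bound elsewhere; needs <xen/events.h>. The helper name
 * example_push_request is hypothetical, not part of this interface.
 */
#if 0
static void example_push_request(struct blkif_front_ring *ring, int irq)
{
	int notify;

	/* Advances req_prod and checks req_event to decide on a kick. */
	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(ring, notify);
	if (notify)
		notify_remote_via_irq(irq);
}
#endif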

typedef uint16_t blkif_vdev_t;
typedef uint64_t blkif_sector_t;

/*
 * Multiple hardware queues/rings:
 * If supported, the backend will write the key "multi-queue-max-queues" to
 * the directory for that vbd, and set its value to the maximum supported
 * number of queues.
 * Frontends that are aware of this feature and wish to use it can write the
 * key "multi-queue-num-queues" with the number they wish to use, which must be
 * greater than zero, and no more than the value reported by the backend in
 * "multi-queue-max-queues".
 *
 * For frontends requesting just one queue, the usual event-channel and
 * ring-ref keys are written as before, simplifying the backend processing
 * to avoid distinguishing between a frontend that doesn't understand the
 * multi-queue feature, and one that does, but requested only one queue.
 *
 * Frontends requesting two or more queues must not write the top-level
 * event-channel and ring-ref keys, instead writing those keys under sub-keys
 * having the name "queue-N", where N is the integer ID of the queue/ring to
 * which those keys belong. Queues are indexed from zero.
 * For example, a frontend with two queues must write the following set of
 * queue-related keys:
 *
 * /local/domain/1/device/vbd/0/multi-queue-num-queues = "2"
 * /local/domain/1/device/vbd/0/queue-0 = ""
 * /local/domain/1/device/vbd/0/queue-0/ring-ref = "<ring-ref#0>"
 * /local/domain/1/device/vbd/0/queue-0/event-channel = "<evtchn#0>"
 * /local/domain/1/device/vbd/0/queue-1 = ""
 * /local/domain/1/device/vbd/0/queue-1/ring-ref = "<ring-ref#1>"
 * /local/domain/1/device/vbd/0/queue-1/event-channel = "<evtchn#1>"
 *
 * It is also possible to use multiple queues/rings together with the
 * multi-page ring buffer feature.
 * For example, a frontend requesting two queues/rings, each with a ring
 * buffer two pages in size, must write the following set of related keys:
 *
 * /local/domain/1/device/vbd/0/multi-queue-num-queues = "2"
 * /local/domain/1/device/vbd/0/ring-page-order = "1"
 * /local/domain/1/device/vbd/0/queue-0 = ""
 * /local/domain/1/device/vbd/0/queue-0/ring-ref0 = "<ring-ref#0>"
 * /local/domain/1/device/vbd/0/queue-0/ring-ref1 = "<ring-ref#1>"
 * /local/domain/1/device/vbd/0/queue-0/event-channel = "<evtchn#0>"
 * /local/domain/1/device/vbd/0/queue-1 = ""
 * /local/domain/1/device/vbd/0/queue-1/ring-ref0 = "<ring-ref#2>"
 * /local/domain/1/device/vbd/0/queue-1/ring-ref1 = "<ring-ref#3>"
 * /local/domain/1/device/vbd/0/queue-1/event-channel = "<evtchn#1>"
 */
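
/*
 * Illustrative only (not compiled): a sketch of a two-queue frontend
 * writing the keys listed above via xenbus_printf() (needs
 * <xen/xenbus.h>). All example_* names are hypothetical, not part of
 * this interface; transaction handling and cleanup are elided.
 */
#if 0
/* Hypothetical per-queue resources, set up by grant/event-channel code. */
static grant_ref_t example_ring_ref[2];
static unsigned int example_evtchn[2];

static int example_write_multiqueue_keys(struct xenbus_device *dev,
					 unsigned int num_queues)
{
	char path[32];
	unsigned int i;
	int err;

	err = xenbus_printf(XBT_NIL, dev->nodename,
			    "multi-queue-num-queues", "%u", num_queues);

	for (i = 0; !err && i < num_queues; i++) {
		snprintf(path, sizeof(path), "queue-%u/ring-ref", i);
		err = xenbus_printf(XBT_NIL, dev->nodename, path,
				    "%u", example_ring_ref[i]);
		if (err)
			break;
		snprintf(path, sizeof(path), "queue-%u/event-channel", i);
		err = xenbus_printf(XBT_NIL, dev->nodename, path,
				    "%u", example_evtchn[i]);
	}
	return err;
}
#endif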

/*
 * REQUEST CODES.
 */
#define BLKIF_OP_READ              0
#define BLKIF_OP_WRITE             1
/*
 * Recognised only if "feature-barrier" is present in backend xenbus info.
 * The "feature-barrier" node contains a boolean indicating whether barrier
 * requests are likely to succeed or fail. Either way, a barrier request
 * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by
 * the underlying block-device hardware. The boolean simply indicates whether
 * or not it is worthwhile for the frontend to attempt barrier requests.
 * If a backend does not recognise BLKIF_OP_WRITE_BARRIER, it should *not*
 * create the "feature-barrier" node!
 */
#define BLKIF_OP_WRITE_BARRIER     2
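
/*
 * Illustrative only (not compiled): one way a frontend might probe the
 * node via xenbus_read_unsigned() (needs <xen/xenbus.h>), which returns
 * the given default when the node is absent, so a missing node and a
 * "0" value are treated alike. example_barrier_worthwhile is a
 * hypothetical name.
 */
#if 0
static bool example_barrier_worthwhile(struct xenbus_device *dev)
{
	return xenbus_read_unsigned(dev->otherend, "feature-barrier", 0) != 0;
}
#endif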

/*
 * Recognised if "feature-flush-cache" is present in backend xenbus
 * info.  A flush will ask the underlying storage hardware to flush its
 * non-volatile caches as appropriate.  The "feature-flush-cache" node
 * contains a boolean indicating whether flush requests are likely to
 * succeed or fail. Either way, a flush request may fail at any time
 * with BLKIF_RSP_EOPNOTSUPP if it is unsupported by the underlying
 * block-device hardware. The boolean simply indicates whether or not it
 * is worthwhile for the frontend to attempt flushes.  If a backend does
 * not recognise BLKIF_OP_FLUSH_DISKCACHE, it should *not* create the
 * "feature-flush-cache" node!
 */
#define BLKIF_OP_FLUSH_DISKCACHE   3
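
/*
 * Illustrative only (not compiled): a sketch of how a frontend might
 * pick its cache-control operation, preferring a flush and falling back
 * to a barrier, similar in spirit to what xen-blkfront does. Returning
 * 0 as a "neither supported" sentinel is a hypothetical convention of
 * this sketch only.
 */
#if 0
static uint8_t example_pick_flush_op(struct xenbus_device *dev)
{
	if (xenbus_read_unsigned(dev->otherend, "feature-flush-cache", 0))
		return BLKIF_OP_FLUSH_DISKCACHE;
	if (xenbus_read_unsigned(dev->otherend, "feature-barrier", 0))
		return BLKIF_OP_WRITE_BARRIER;
	return 0;	/* neither flush nor barrier supported */
}
#endif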

/*
 * Recognised only if "feature-discard" is present in backend xenbus info.
 * The "feature-discard" node contains a boolean indicating whether trim
 * (ATA) or unmap (SCSI) requests - conveniently called discard requests -
 * are likely to succeed or fail. Either way, a discard request
 * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by
 * the underlying block-device hardware. The boolean simply indicates whether
 * or not it is worthwhile for the frontend to attempt discard requests.
 * If a backend does not recognise BLKIF_OP_DISCARD, it should *not*
 * create the "feature-discard" node!
 *
 * A discard operation asks the underlying block device to mark
 * extents to be erased. However, discard does not guarantee that the blocks
 * will be erased from the device - it is just a hint to the device
 * controller that these blocks are no longer in use. What the device
 * controller does with that information is left to the controller.
 * Discard operations are passed with sector_number as the
 * sector index at which to begin discarding and nr_sectors as the number of
 * sectors to be discarded. The specified sectors should be discarded if the
 * underlying block device supports trim (ATA) or unmap (SCSI) operations,
 * or BLKIF_RSP_EOPNOTSUPP should be returned.
 * More information about trim/unmap operations at:
 * http://t13.org/Documents/UploadedDocuments/docs2008/
 *     e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.doc
 * http://www.seagate.com/staticfiles/support/disc/manuals/
 *     Interface%20manuals/100293068c.pdf
 * The backend can optionally provide three extra XenBus attributes to
 * further optimize the discard functionality:
 * 'discard-alignment' - Devices that support discard functionality may
 * internally allocate space in units that are bigger than the exported
 * logical block size. The discard-alignment parameter indicates how many bytes
 * the beginning of the partition is offset from the internal allocation unit's
 * natural alignment.
 * 'discard-granularity' - Devices that support discard functionality may
 * internally allocate space using units that are bigger than the logical block
 * size. The discard-granularity parameter indicates the size of the internal
 * allocation unit in bytes if reported by the device. Otherwise the
 * discard-granularity will be set to match the device's physical block size.
 * 'discard-secure' - All copies of the discarded sectors (potentially created
 * by garbage collection) must also be erased.  To use this feature, the flag
 * BLKIF_DISCARD_SECURE must be set in the blkif_request_discard.
 */
#define BLKIF_OP_DISCARD           5
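
/*
 * Illustrative only (not compiled): a sketch of reading the optional
 * discard attributes, treating an absent node as 0 / disabled (needs
 * <xen/xenbus.h>). example_read_discard_params is a hypothetical name.
 */
#if 0
static void example_read_discard_params(struct xenbus_device *dev)
{
	unsigned int granularity, alignment;
	bool secure;

	granularity = xenbus_read_unsigned(dev->otherend,
					   "discard-granularity", 0);
	alignment = xenbus_read_unsigned(dev->otherend,
					 "discard-alignment", 0);
	secure = xenbus_read_unsigned(dev->otherend,
				      "discard-secure", 0) != 0;

	pr_info("discard: granularity=%u alignment=%u secure=%d\n",
		granularity, alignment, secure);
}
#endif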

/*
 * Recognized if "feature-max-indirect-segments" is present in the backend
 * xenbus info. The "feature-max-indirect-segments" node contains the maximum
 * number of segments allowed by the backend per request. If the node is
 * present, the frontend might use blkif_request_indirect structs in order to
 * issue requests with more than BLKIF_MAX_SEGMENTS_PER_REQUEST (11) segments.
 * The maximum number of indirect segments is fixed by the backend, but the
 * frontend can issue requests with any number of indirect segments as long as
 * it's less than the number provided by the backend. The indirect_grefs field
 * in blkif_request_indirect should be filled by the frontend with the
 * grant references of the pages that are holding the indirect segments.
 * These pages are filled with an array of blkif_request_segment that hold the
 * information about the segments. The number of indirect pages to use is
 * determined by the number of segments an indirect request contains. Every
 * indirect page can contain a maximum of
 * (PAGE_SIZE / sizeof(struct blkif_request_segment)) segments, so to
 * calculate the number of indirect pages to use we have to do
 * ceil(indirect_segments / (PAGE_SIZE / sizeof(struct blkif_request_segment))).
 *
 * If a backend does not recognize BLKIF_OP_INDIRECT, it should *not*
 * create the "feature-max-indirect-segments" node!
 */
#define BLKIF_OP_INDIRECT          6

/*
 * Maximum scatter/gather segments per request.
 * This is carefully chosen so that sizeof(struct blkif_ring) <= PAGE_SIZE.
 * NB. This could be 12 if the ring indexes weren't stored in the same page.
 */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

#define BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST 8

struct blkif_request_segment {
	grant_ref_t gref;        /* reference to I/O buffer frame        */
	/* @first_sect: first sector in frame to transfer (inclusive).   */
	/* @last_sect: last sector in frame to transfer (inclusive).     */
	uint8_t     first_sect, last_sect;
};
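
/*
 * Illustrative only (not compiled): the ceil() computation above,
 * expressed with the kernel's DIV_ROUND_UP(). Both macro names are
 * hypothetical, not part of this interface.
 */
#if 0
#define EXAMPLE_SEGS_PER_INDIRECT_FRAME \
	(PAGE_SIZE / sizeof(struct blkif_request_segment))
#define EXAMPLE_INDIRECT_PAGES(segs) \
	DIV_ROUND_UP(segs, EXAMPLE_SEGS_PER_INDIRECT_FRAME)
#endif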

struct blkif_request_rw {
	uint8_t        nr_segments;  /* number of segments                   */
	blkif_vdev_t   handle;       /* only for read/write requests         */
#ifndef CONFIG_X86_32
	uint32_t       _pad1;        /* offsetof(blkif_request,u.rw.id) == 8 */
#endif
	uint64_t       id;           /* private guest value, echoed in resp  */
	blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
	struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
} __attribute__((__packed__));

struct blkif_request_discard {
	uint8_t        flag;         /* BLKIF_DISCARD_SECURE or zero.        */
#define BLKIF_DISCARD_SECURE (1<<0)  /* ignored if discard-secure=0          */
	blkif_vdev_t   _pad1;        /* only for read/write requests         */
#ifndef CONFIG_X86_32
	uint32_t       _pad2;        /* offsetof(blkif_req..,u.discard.id)==8*/
#endif
	uint64_t       id;           /* private guest value, echoed in resp  */
	blkif_sector_t sector_number;
	uint64_t       nr_sectors;
	uint8_t        _pad3;
} __attribute__((__packed__));
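
/*
 * Illustrative only (not compiled): filling in a (secure) discard
 * request. "req" would come from RING_GET_REQUEST() on a slot of the
 * shared ring (struct blkif_request is defined below);
 * example_fill_discard is a hypothetical name.
 */
#if 0
static void example_fill_discard(struct blkif_request *req, uint64_t id,
				 blkif_sector_t start, uint64_t count,
				 bool secure)
{
	req->operation = BLKIF_OP_DISCARD;
	req->u.discard.flag = secure ? BLKIF_DISCARD_SECURE : 0;
	req->u.discard.id = id;
	req->u.discard.sector_number = start;
	req->u.discard.nr_sectors = count;
}
#endif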

struct blkif_request_other {
	uint8_t      _pad1;
	blkif_vdev_t _pad2;        /* only for read/write requests         */
#ifndef CONFIG_X86_32
	uint32_t     _pad3;        /* offsetof(blkif_req..,u.other.id)==8*/
#endif
	uint64_t     id;           /* private guest value, echoed in resp  */
} __attribute__((__packed__));

struct blkif_request_indirect {
	uint8_t        indirect_op;
	uint16_t       nr_segments;
#ifndef CONFIG_X86_32
	uint32_t       _pad1;        /* offsetof(blkif_...,u.indirect.id) == 8 */
#endif
	uint64_t       id;
	blkif_sector_t sector_number;
	blkif_vdev_t   handle;
	uint16_t       _pad2;
	grant_ref_t    indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST];
#ifndef CONFIG_X86_32
	uint32_t      _pad3;         /* make it 64 byte aligned */
#else
	uint64_t      _pad3;         /* make it 64 byte aligned */
#endif
} __attribute__((__packed__));

struct blkif_request {
	uint8_t        operation;    /* BLKIF_OP_???                         */
	union {
		struct blkif_request_rw rw;
		struct blkif_request_discard discard;
		struct blkif_request_other other;
		struct blkif_request_indirect indirect;
	} u;
} __attribute__((__packed__));
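
/*
 * Illustrative only (not compiled): a minimal sketch of queuing a
 * one-segment read on a frontend ring, assuming the granted page covers
 * 8 x 512-byte sectors (one 4 KiB frame). example_queue_read is a
 * hypothetical name; a real frontend would follow this with the
 * RING_PUSH_REQUESTS_AND_CHECK_NOTIFY() pattern shown earlier.
 */
#if 0
static void example_queue_read(struct blkif_front_ring *ring,
			       blkif_vdev_t handle, grant_ref_t gref,
			       blkif_sector_t sector, uint64_t id)
{
	struct blkif_request *req =
		RING_GET_REQUEST(ring, ring->req_prod_pvt);

	req->operation = BLKIF_OP_READ;
	req->u.rw.nr_segments = 1;
	req->u.rw.handle = handle;
	req->u.rw.id = id;
	req->u.rw.sector_number = sector;
	req->u.rw.seg[0].gref = gref;
	req->u.rw.seg[0].first_sect = 0;
	req->u.rw.seg[0].last_sect = 7;	/* whole frame, sectors 0..7 */

	ring->req_prod_pvt++;
}
#endif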

struct blkif_response {
	uint64_t        id;              /* copied from request */
	uint8_t         operation;       /* copied from request */
	int16_t         status;          /* BLKIF_RSP_???       */
};

/*
 * STATUS RETURN CODES.
 */
 /* Operation not supported (can happen on, e.g., barrier, flush or
  * discard requests). */
#define BLKIF_RSP_EOPNOTSUPP  -2
 /* Operation failed for some unspecified reason (-EIO). */
#define BLKIF_RSP_ERROR       -1
 /* Operation completed successfully. */
#define BLKIF_RSP_OKAY         0
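
/*
 * Illustrative only (not compiled): a sketch of mapping response status
 * onto errno values when draining the ring (needs <linux/errno.h>);
 * example_status_to_errno is a hypothetical name.
 */
#if 0
static int example_status_to_errno(int16_t status)
{
	switch (status) {
	case BLKIF_RSP_OKAY:
		return 0;
	case BLKIF_RSP_EOPNOTSUPP:
		return -EOPNOTSUPP;	/* stop issuing that request type */
	case BLKIF_RSP_ERROR:
	default:
		return -EIO;
	}
}
#endif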

/*
 * Generate blkif ring structures and types.
 */

DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);
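
/*
 * The line above expands to struct blkif_sring (the shared page layout)
 * plus struct blkif_front_ring and struct blkif_back_ring. Illustrative
 * only (not compiled): a sketch of single-page frontend setup, assuming
 * the page is granted to the backend elsewhere (needs <linux/gfp.h>);
 * example_ring_setup is a hypothetical name.
 */
#if 0
static int example_ring_setup(struct blkif_front_ring *front)
{
	struct blkif_sring *sring =
		(struct blkif_sring *)get_zeroed_page(GFP_NOIO);

	if (!sring)
		return -ENOMEM;
	SHARED_RING_INIT(sring);	/* reset producer/consumer indexes */
	FRONT_RING_INIT(front, sring, PAGE_SIZE);
	return 0;
}
#endif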

#define VDISK_CDROM        0x1
#define VDISK_REMOVABLE    0x2
#define VDISK_READONLY     0x4

/* Xen-defined major numbers for virtual disks; they look strangely
 * familiar. */
#define XEN_IDE0_MAJOR	3
#define XEN_IDE1_MAJOR	22
#define XEN_SCSI_DISK0_MAJOR	8
#define XEN_SCSI_DISK1_MAJOR	65
#define XEN_SCSI_DISK2_MAJOR	66
#define XEN_SCSI_DISK3_MAJOR	67
#define XEN_SCSI_DISK4_MAJOR	68
#define XEN_SCSI_DISK5_MAJOR	69
#define XEN_SCSI_DISK6_MAJOR	70
#define XEN_SCSI_DISK7_MAJOR	71
#define XEN_SCSI_DISK8_MAJOR	128
#define XEN_SCSI_DISK9_MAJOR	129
#define XEN_SCSI_DISK10_MAJOR	130
#define XEN_SCSI_DISK11_MAJOR	131
#define XEN_SCSI_DISK12_MAJOR	132
#define XEN_SCSI_DISK13_MAJOR	133
#define XEN_SCSI_DISK14_MAJOR	134
#define XEN_SCSI_DISK15_MAJOR	135

#endif /* __XEN_PUBLIC_IO_BLKIF_H__ */