Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

/*
 * Copyright 2017 Red Hat
 * Parts ported from amdgpu (fence wait code).
 * Copyright 2016 Advanced Micro Devices, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *
 */

/**
 * DOC: Overview
 *
 * DRM synchronisation objects (syncobj, see struct &drm_syncobj) provide a
 * container for a synchronization primitive which can be used by userspace
 * to explicitly synchronize GPU commands, can be shared between userspace
 * processes, and can be shared between different DRM drivers.
 * Their primary use-case is to implement Vulkan fences and semaphores.
 * The syncobj userspace API provides ioctls for several operations:
 *
 *  - Creation and destruction of syncobjs
 *  - Import and export of syncobjs to/from a syncobj file descriptor
 *  - Import and export a syncobj's underlying fence to/from a sync file
 *  - Reset a syncobj (set its fence to NULL)
 *  - Signal a syncobj (set a trivially signaled fence)
 *  - Wait for a syncobj's fence to appear and be signaled
 *
 * The syncobj userspace API also provides operations to manipulate a syncobj
 * in terms of a timeline of struct &dma_fence_chain rather than a single
 * struct &dma_fence, through the following operations:
 *
 *   - Signal a given point on the timeline
 *   - Wait for a given point to appear and/or be signaled
 *   - Import and export from/to a given point of a timeline
 *
 * At its core, a syncobj is simply a wrapper around a pointer to a struct
 * &dma_fence which may be NULL.
 * When a syncobj is first created, its pointer is either NULL or a pointer
 * to an already signaled fence depending on whether the
 * &DRM_SYNCOBJ_CREATE_SIGNALED flag is passed to
 * &DRM_IOCTL_SYNCOBJ_CREATE.
 *
 * If the syncobj is considered as a binary primitive (its state is either
 * signaled or unsignaled), then when GPU work is enqueued in a DRM driver
 * to signal the syncobj, the syncobj's fence is replaced with a fence which
 * will be signaled by the completion of that work.
 * If the syncobj is considered as a timeline primitive, then when GPU work
 * is enqueued in a DRM driver to signal a given point of the syncobj, a new
 * struct &dma_fence_chain is created, pointing to the DRM driver's fence and
 * also pointing to the previous fence that was in the syncobj. The new struct
 * &dma_fence_chain fence replaces the syncobj's fence and will be signaled by
 * completion of the DRM driver's work as well as any work associated with the
 * fence previously in the syncobj.
 *
 * When GPU work which waits on a syncobj is enqueued in a DRM driver, at the
 * time the work is enqueued, it waits on the syncobj's fence before
 * submitting the work to hardware. That fence is either:
 *
 *    - The syncobj's current fence if the syncobj is considered as a binary
 *      primitive.
 *    - The struct &dma_fence associated with a given point if the syncobj is
 *      considered as a timeline primitive.
 *
 * If the syncobj's fence is NULL or not present in the syncobj's timeline,
 * the enqueue operation is expected to fail.
 *
 * With a binary syncobj, all manipulation of the syncobj's fence happens in
 * terms of the current fence at the time the ioctl is called by userspace,
 * regardless of whether that operation is an immediate host-side operation
 * (signal or reset) or an operation which is enqueued in some driver
 * queue. &DRM_IOCTL_SYNCOBJ_RESET and &DRM_IOCTL_SYNCOBJ_SIGNAL can be used
 * to manipulate a syncobj from the host by resetting its pointer to NULL or
 * setting its pointer to a fence which is already signaled.
 *
 * With a timeline syncobj, all manipulation of the syncobj's fence happens in
 * terms of a u64 value referring to a point in the timeline. See
 * dma_fence_chain_find_seqno() to see how a given point is found in the
 * timeline.
 *
 * Note that applications should be careful to always use the timeline set of
 * ioctls when dealing with a syncobj considered as a timeline. Using the
 * binary set of ioctls with a syncobj considered as a timeline could result
 * in incorrect synchronization. The use of a binary syncobj is supported
 * through the timeline set of ioctls by using a point value of 0, which
 * reproduces the behavior of the binary set of ioctls (for example,
 * replacing the syncobj's fence when signaling).
 *
 *
 * Host-side wait on syncobjs
 * --------------------------
 *
 * &DRM_IOCTL_SYNCOBJ_WAIT takes an array of syncobj handles and does a
 * host-side wait on all of the syncobj fences simultaneously.
 * If &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL is set, the wait ioctl will wait on
 * all of the syncobj fences to be signaled before it returns.
 * Otherwise, it returns once at least one syncobj fence has been signaled
 * and the index of a signaled fence is written back to the client.
 *
 * Unlike the enqueued GPU work dependencies which fail if they see a NULL
 * fence in a syncobj, if &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT is set,
 * the host-side wait will first wait for the syncobj to receive a non-NULL
 * fence and then wait on that fence.
 * If &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT is not set and any one of the
 * syncobjs in the array has a NULL fence, -EINVAL will be returned.
 * Assuming the syncobj starts off with a NULL fence, this allows a client
 * to do a host wait in one thread (or process) which waits on GPU work
 * submitted in another thread (or process) without having to manually
 * synchronize between the two.
 * This requirement is inherited from the Vulkan fence API.
 *
 * Similarly, &DRM_IOCTL_SYNCOBJ_TIMELINE_WAIT takes an array of syncobj
 * handles as well as an array of u64 points and does a host-side wait on all
 * of the syncobj fences at the given points simultaneously.
 *
 * &DRM_IOCTL_SYNCOBJ_TIMELINE_WAIT also adds the ability to wait for a given
 * fence to materialize on the timeline without waiting for the fence to be
 * signaled by using the &DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE flag. This
 * requirement is inherited from the wait-before-signal behavior required by
 * the Vulkan timeline semaphore API.
 *
 *
 *
 * Import/export of syncobjs
 * -------------------------
 *
 * &DRM_IOCTL_SYNCOBJ_FD_TO_HANDLE and &DRM_IOCTL_SYNCOBJ_HANDLE_TO_FD
 * provide two mechanisms for import/export of syncobjs.
 *
 * The first lets the client import or export an entire syncobj to a file
 * descriptor.
 * These fds are opaque and have no use other than passing the syncobj
 * between processes.
 * All exported file descriptors and any syncobj handles created as a
 * result of importing those file descriptors own a reference to the
 * same underlying struct &drm_syncobj and the syncobj can be used
 * persistently across all the processes with which it is shared.
 * The syncobj is freed only once the last reference is dropped.
 * Unlike dma-buf, importing a syncobj creates a new handle (with its own
 * reference) for every import instead of de-duplicating.
 * The primary use-case of this persistent import/export is for shared
 * Vulkan fences and semaphores.
 *
 * The second import/export mechanism, which is indicated by
 * &DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE or
 * &DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE, lets the client
 * import/export the syncobj's current fence from/to a &sync_file.
 * When a syncobj is exported to a sync file, that sync file wraps the
 * syncobj's fence at the time of export, and any later signal or reset
 * operations on the syncobj will not affect the exported sync file.
 * When a sync file is imported into a syncobj, the syncobj's fence is set
 * to the fence wrapped by that sync file.
 * Because sync files are immutable, resetting or signaling the syncobj
 * will not affect any sync files whose fences have been imported into the
 * syncobj.
 *
 *
 * Import/export of timeline points in timeline syncobjs
 * -----------------------------------------------------
 *
 * &DRM_IOCTL_SYNCOBJ_TRANSFER provides a mechanism to transfer a struct
 * &dma_fence_chain of a syncobj at a given u64 point to another u64 point
 * in another syncobj.
 *
 * Note that if you want to transfer a struct &dma_fence_chain from a given
 * point on a timeline syncobj from/into a binary syncobj, you can use
 * point 0 to mean take/replace the fence in the syncobj.
 */

#include <linux/anon_inodes.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/sched/signal.h>
#include <linux/sync_file.h>
#include <linux/uaccess.h>

#include <drm/drm.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
#include <drm/drm_print.h>
#include <drm/drm_syncobj.h>
#include <drm/drm_utils.h>

#include "drm_internal.h"

struct syncobj_wait_entry {
	struct list_head node;
	struct task_struct *task;
	struct dma_fence *fence;
	struct dma_fence_cb fence_cb;
	u64    point;
};

static void syncobj_wait_syncobj_func(struct drm_syncobj *syncobj,
				      struct syncobj_wait_entry *wait);

/**
 * drm_syncobj_find - lookup and reference a sync object.
 * @file_private: drm file private pointer
 * @handle: sync object handle to lookup.
 *
 * Returns a reference to the syncobj pointed to by handle or NULL. The
 * reference must be released by calling drm_syncobj_put().
 */
struct drm_syncobj *drm_syncobj_find(struct drm_file *file_private,
				     u32 handle)
{
	struct drm_syncobj *syncobj;

	spin_lock(&file_private->syncobj_table_lock);

	/* Check if we currently have a reference on the object */
	syncobj = idr_find(&file_private->syncobj_idr, handle);
	if (syncobj)
		drm_syncobj_get(syncobj);

	spin_unlock(&file_private->syncobj_table_lock);

	return syncobj;
}
EXPORT_SYMBOL(drm_syncobj_find);

static void drm_syncobj_fence_add_wait(struct drm_syncobj *syncobj,
				       struct syncobj_wait_entry *wait)
{
	struct dma_fence *fence;

	if (wait->fence)
		return;

	spin_lock(&syncobj->lock);
	/* We've already tried once to get a fence and failed.  Now that we
	 * have the lock, try one more time just to be sure we don't add a
	 * callback when a fence has already been set.
	 */
	fence = dma_fence_get(rcu_dereference_protected(syncobj->fence, 1));
	if (!fence || dma_fence_chain_find_seqno(&fence, wait->point)) {
		dma_fence_put(fence);
		list_add_tail(&wait->node, &syncobj->cb_list);
	} else if (!fence) {
		/* dma_fence_chain_find_seqno() may have set the fence to
		 * NULL because the requested point is already signaled.
		 */
		wait->fence = dma_fence_get_stub();
	} else {
		wait->fence = fence;
	}
	spin_unlock(&syncobj->lock);
}

static void drm_syncobj_remove_wait(struct drm_syncobj *syncobj,
				    struct syncobj_wait_entry *wait)
{
	if (!wait->node.next)
		return;

	spin_lock(&syncobj->lock);
	list_del_init(&wait->node);
	spin_unlock(&syncobj->lock);
}

/**
 * drm_syncobj_add_point - add new timeline point to the syncobj
 * @syncobj: sync object to add timeline point to
 * @chain: chain node to use to add the point
 * @fence: fence to encapsulate in the chain node
 * @point: sequence number to use for the point
 *
 * Add the chain node as new timeline point to the syncobj.
 */
void drm_syncobj_add_point(struct drm_syncobj *syncobj,
			   struct dma_fence_chain *chain,
			   struct dma_fence *fence,
			   uint64_t point)
{
	struct syncobj_wait_entry *cur, *tmp;
	struct dma_fence *prev;

	dma_fence_get(fence);

	spin_lock(&syncobj->lock);

	prev = drm_syncobj_fence_get(syncobj);
	/* Adding an out-of-order point to the timeline could cause the
	 * payload returned from query_ioctl to be 0!
	 */
	if (prev && prev->seqno >= point)
		DRM_DEBUG("You are adding an unordered point to the timeline!\n");
	dma_fence_chain_init(chain, prev, fence, point);
	rcu_assign_pointer(syncobj->fence, &chain->base);

	list_for_each_entry_safe(cur, tmp, &syncobj->cb_list, node)
		syncobj_wait_syncobj_func(syncobj, cur);
	spin_unlock(&syncobj->lock);

	/* Walk the chain once to trigger garbage collection */
	dma_fence_chain_for_each(fence, prev);
	dma_fence_put(prev);
}
EXPORT_SYMBOL(drm_syncobj_add_point);

/**
 * drm_syncobj_replace_fence - replace fence in a sync object.
 * @syncobj: Sync object to replace fence in
 * @fence: fence to install in the sync object.
 *
 * This replaces the fence on a sync object.
 */
void drm_syncobj_replace_fence(struct drm_syncobj *syncobj,
			       struct dma_fence *fence)
{
	struct dma_fence *old_fence;
	struct syncobj_wait_entry *cur, *tmp;

	if (fence)
		dma_fence_get(fence);

	spin_lock(&syncobj->lock);

	old_fence = rcu_dereference_protected(syncobj->fence,
					      lockdep_is_held(&syncobj->lock));
	rcu_assign_pointer(syncobj->fence, fence);

	if (fence != old_fence) {
		list_for_each_entry_safe(cur, tmp, &syncobj->cb_list, node)
			syncobj_wait_syncobj_func(syncobj, cur);
	}

	spin_unlock(&syncobj->lock);

	dma_fence_put(old_fence);
}
EXPORT_SYMBOL(drm_syncobj_replace_fence);

/**
 * drm_syncobj_assign_null_handle - assign a stub fence to the sync object
 * @syncobj: sync object to assign the fence on
 *
 * Assign an already signaled stub fence to the sync object.
 */
static void drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
{
	struct dma_fence *fence = dma_fence_get_stub();

	drm_syncobj_replace_fence(syncobj, fence);
	dma_fence_put(fence);
}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  360) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  361) /* 5s default for wait submission */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  362) #define DRM_SYNCOBJ_WAIT_FOR_SUBMIT_TIMEOUT 5000000000ULL
/**
 * drm_syncobj_find_fence - lookup and reference the fence in a sync object
 * @file_private: drm file private pointer
 * @handle: sync object handle to lookup.
 * @point: timeline point
 * @flags: DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT or not
 * @fence: out parameter for the fence
 *
 * This is just a convenience function that combines drm_syncobj_find() and
 * drm_syncobj_fence_get().
 *
 * Returns 0 on success or a negative error value on failure. On success @fence
 * contains a reference to the fence, which must be released by calling
 * dma_fence_put().
 */
int drm_syncobj_find_fence(struct drm_file *file_private,
			   u32 handle, u64 point, u64 flags,
			   struct dma_fence **fence)
{
	struct drm_syncobj *syncobj = drm_syncobj_find(file_private, handle);
	struct syncobj_wait_entry wait;
	u64 timeout = nsecs_to_jiffies64(DRM_SYNCOBJ_WAIT_FOR_SUBMIT_TIMEOUT);
	int ret;

	if (!syncobj)
		return -ENOENT;

	*fence = drm_syncobj_fence_get(syncobj);

	if (*fence) {
		ret = dma_fence_chain_find_seqno(fence, point);
		if (!ret) {
			/* If the requested seqno is already signaled
			 * drm_syncobj_find_fence may return a NULL
			 * fence. To make sure the recipient gets
			 * signalled, use a new fence instead.
			 */
			if (!*fence)
				*fence = dma_fence_get_stub();

			goto out;
		}
		dma_fence_put(*fence);
	} else {
		ret = -EINVAL;
	}

	if (!(flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
		goto out;

	memset(&wait, 0, sizeof(wait));
	wait.task = current;
	wait.point = point;
	drm_syncobj_fence_add_wait(syncobj, &wait);

	do {
		set_current_state(TASK_INTERRUPTIBLE);
		if (wait.fence) {
			ret = 0;
			break;
		}
		if (timeout == 0) {
			ret = -ETIME;
			break;
		}

		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}

		timeout = schedule_timeout(timeout);
	} while (1);

	__set_current_state(TASK_RUNNING);
	*fence = wait.fence;

	if (wait.node.next)
		drm_syncobj_remove_wait(syncobj, &wait);

out:
	drm_syncobj_put(syncobj);

	return ret;
}
EXPORT_SYMBOL(drm_syncobj_find_fence);
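A hypothetical driver-side sketch of how drm_syncobj_find_fence() is typically consumed: resolve a handle submitted by userspace into a dma_fence and wait on it before scheduling work. The function name and handle/point values are illustrative, not part of this file.

```c
/* Hypothetical caller; not part of drm_syncobj.c. */
static int my_driver_wait_syncobj(struct drm_file *file_priv,
				  u32 handle, u64 point)
{
	struct dma_fence *fence;
	int ret;

	/* WAIT_FOR_SUBMIT blocks (up to the 5 s default above) until a
	 * fence has actually been attached to the syncobj. */
	ret = drm_syncobj_find_fence(file_priv, handle, point,
				     DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT,
				     &fence);
	if (ret)
		return ret;

	/* Interruptible wait, no timeout. */
	ret = dma_fence_wait(fence, true);
	dma_fence_put(fence);	/* drop the reference find_fence took */
	return ret;
}
```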

/**
 * drm_syncobj_free - free a sync object.
 * @kref: kref to free.
 *
 * Only to be called from kref_put in drm_syncobj_put.
 */
void drm_syncobj_free(struct kref *kref)
{
	struct drm_syncobj *syncobj = container_of(kref,
						   struct drm_syncobj,
						   refcount);
	drm_syncobj_replace_fence(syncobj, NULL);
	kfree(syncobj);
}
EXPORT_SYMBOL(drm_syncobj_free);

/**
 * drm_syncobj_create - create a new syncobj
 * @out_syncobj: returned syncobj
 * @flags: DRM_SYNCOBJ_* flags
 * @fence: if non-NULL, the syncobj will represent this fence
 *
 * This is the first function to create a sync object. After creating, drivers
 * probably want to make it available to userspace, either through
 * drm_syncobj_get_handle() or drm_syncobj_get_fd().
 *
 * Returns 0 on success or a negative error value on failure.
 */
int drm_syncobj_create(struct drm_syncobj **out_syncobj, uint32_t flags,
		       struct dma_fence *fence)
{
	struct drm_syncobj *syncobj;

	syncobj = kzalloc(sizeof(struct drm_syncobj), GFP_KERNEL);
	if (!syncobj)
		return -ENOMEM;

	kref_init(&syncobj->refcount);
	INIT_LIST_HEAD(&syncobj->cb_list);
	spin_lock_init(&syncobj->lock);

	if (flags & DRM_SYNCOBJ_CREATE_SIGNALED)
		drm_syncobj_assign_null_handle(syncobj);

	if (fence)
		drm_syncobj_replace_fence(syncobj, fence);

	*out_syncobj = syncobj;
	return 0;
}
EXPORT_SYMBOL(drm_syncobj_create);
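A minimal driver-side sketch of the create-then-publish pattern the comment above describes (it mirrors the static drm_syncobj_create_as_handle() helper below). The function name is hypothetical; error handling is abbreviated.

```c
/* Hypothetical caller; not part of drm_syncobj.c. */
static int my_driver_make_signaled_handle(struct drm_file *file_priv,
					  u32 *out_handle)
{
	struct drm_syncobj *syncobj;
	int ret;

	/* Start life already signaled, with no fence attached. */
	ret = drm_syncobj_create(&syncobj, DRM_SYNCOBJ_CREATE_SIGNALED, NULL);
	if (ret)
		return ret;

	ret = drm_syncobj_get_handle(file_priv, syncobj, out_handle);
	drm_syncobj_put(syncobj);	/* the idr entry holds its own ref */
	return ret;
}
```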

/**
 * drm_syncobj_get_handle - get a handle from a syncobj
 * @file_private: drm file private pointer
 * @syncobj: Sync object to export
 * @handle: out parameter with the new handle
 *
 * Exports a sync object created with drm_syncobj_create() as a handle on
 * @file_private to userspace.
 *
 * Returns 0 on success or a negative error value on failure.
 */
int drm_syncobj_get_handle(struct drm_file *file_private,
			   struct drm_syncobj *syncobj, u32 *handle)
{
	int ret;

	/* take a reference to put in the idr */
	drm_syncobj_get(syncobj);

	idr_preload(GFP_KERNEL);
	spin_lock(&file_private->syncobj_table_lock);
	ret = idr_alloc(&file_private->syncobj_idr, syncobj, 1, 0, GFP_NOWAIT);
	spin_unlock(&file_private->syncobj_table_lock);

	idr_preload_end();

	if (ret < 0) {
		drm_syncobj_put(syncobj);
		return ret;
	}

	*handle = ret;
	return 0;
}
EXPORT_SYMBOL(drm_syncobj_get_handle);

static int drm_syncobj_create_as_handle(struct drm_file *file_private,
					u32 *handle, uint32_t flags)
{
	int ret;
	struct drm_syncobj *syncobj;

	ret = drm_syncobj_create(&syncobj, flags, NULL);
	if (ret)
		return ret;

	ret = drm_syncobj_get_handle(file_private, syncobj, handle);
	drm_syncobj_put(syncobj);
	return ret;
}

static int drm_syncobj_destroy(struct drm_file *file_private,
			       u32 handle)
{
	struct drm_syncobj *syncobj;

	spin_lock(&file_private->syncobj_table_lock);
	syncobj = idr_remove(&file_private->syncobj_idr, handle);
	spin_unlock(&file_private->syncobj_table_lock);

	if (!syncobj)
		return -EINVAL;

	drm_syncobj_put(syncobj);
	return 0;
}

static int drm_syncobj_file_release(struct inode *inode, struct file *file)
{
	struct drm_syncobj *syncobj = file->private_data;

	drm_syncobj_put(syncobj);
	return 0;
}

static const struct file_operations drm_syncobj_file_fops = {
	.release = drm_syncobj_file_release,
};

/**
 * drm_syncobj_get_fd - get a file descriptor from a syncobj
 * @syncobj: Sync object to export
 * @p_fd: out parameter with the new file descriptor
 *
 * Exports a sync object created with drm_syncobj_create() as a file descriptor.
 *
 * Returns 0 on success or a negative error value on failure.
 */
int drm_syncobj_get_fd(struct drm_syncobj *syncobj, int *p_fd)
{
	struct file *file;
	int fd;

	fd = get_unused_fd_flags(O_CLOEXEC);
	if (fd < 0)
		return fd;

	file = anon_inode_getfile("syncobj_file",
				  &drm_syncobj_file_fops,
				  syncobj, 0);
	if (IS_ERR(file)) {
		put_unused_fd(fd);
		return PTR_ERR(file);
	}

	drm_syncobj_get(syncobj);
	fd_install(fd, file);

	*p_fd = fd;
	return 0;
}
EXPORT_SYMBOL(drm_syncobj_get_fd);

static int drm_syncobj_handle_to_fd(struct drm_file *file_private,
				    u32 handle, int *p_fd)
{
	struct drm_syncobj *syncobj = drm_syncobj_find(file_private, handle);
	int ret;

	if (!syncobj)
		return -EINVAL;

	ret = drm_syncobj_get_fd(syncobj, p_fd);
	drm_syncobj_put(syncobj);
	return ret;
}

static int drm_syncobj_fd_to_handle(struct drm_file *file_private,
				    int fd, u32 *handle)
{
	struct drm_syncobj *syncobj;
	struct fd f = fdget(fd);
	int ret;

	if (!f.file)
		return -EINVAL;

	if (f.file->f_op != &drm_syncobj_file_fops) {
		fdput(f);
		return -EINVAL;
	}

	/* take a reference to put in the idr */
	syncobj = f.file->private_data;
	drm_syncobj_get(syncobj);

	idr_preload(GFP_KERNEL);
	spin_lock(&file_private->syncobj_table_lock);
	ret = idr_alloc(&file_private->syncobj_idr, syncobj, 1, 0, GFP_NOWAIT);
	spin_unlock(&file_private->syncobj_table_lock);
	idr_preload_end();

	if (ret > 0) {
		*handle = ret;
		ret = 0;
	} else {
		drm_syncobj_put(syncobj);
	}

	fdput(f);
	return ret;
}

static int drm_syncobj_import_sync_file_fence(struct drm_file *file_private,
					      int fd, int handle)
{
	struct dma_fence *fence = sync_file_get_fence(fd);
	struct drm_syncobj *syncobj;

	if (!fence)
		return -EINVAL;

	syncobj = drm_syncobj_find(file_private, handle);
	if (!syncobj) {
		dma_fence_put(fence);
		return -ENOENT;
	}

	drm_syncobj_replace_fence(syncobj, fence);
	dma_fence_put(fence);
	drm_syncobj_put(syncobj);
	return 0;
}

static int drm_syncobj_export_sync_file(struct drm_file *file_private,
					int handle, int *p_fd)
{
	int ret;
	struct dma_fence *fence;
	struct sync_file *sync_file;
	int fd = get_unused_fd_flags(O_CLOEXEC);

	if (fd < 0)
		return fd;

	ret = drm_syncobj_find_fence(file_private, handle, 0, 0, &fence);
	if (ret)
		goto err_put_fd;

	sync_file = sync_file_create(fence);

	dma_fence_put(fence);

	if (!sync_file) {
		ret = -EINVAL;
		goto err_put_fd;
	}

	fd_install(fd, sync_file->file);

	*p_fd = fd;
	return 0;
err_put_fd:
	put_unused_fd(fd);
	return ret;
}

/**
 * drm_syncobj_open - initializes syncobj file-private structures at devnode open time
 * @file_private: drm file-private structure to set up
 *
 * Called at device open time, sets up the structure for handling refcounting
 * of sync objects.
 */
void
drm_syncobj_open(struct drm_file *file_private)
{
	idr_init_base(&file_private->syncobj_idr, 1);
	spin_lock_init(&file_private->syncobj_table_lock);
}

static int
drm_syncobj_release_handle(int id, void *ptr, void *data)
{
	struct drm_syncobj *syncobj = ptr;

	drm_syncobj_put(syncobj);
	return 0;
}

/**
 * drm_syncobj_release - release file-private sync object resources
 * @file_private: drm file-private structure to clean up
 *
 * Called at close time when the filp is going away.
 *
 * Releases any remaining references on objects by this filp.
 */
void
drm_syncobj_release(struct drm_file *file_private)
{
	idr_for_each(&file_private->syncobj_idr,
		     &drm_syncobj_release_handle, file_private);
	idr_destroy(&file_private->syncobj_idr);
}

int
drm_syncobj_create_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file_private)
{
	struct drm_syncobj_create *args = data;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
		return -EOPNOTSUPP;

	/* CREATE_SIGNALED is the only valid flag for now */
	if (args->flags & ~DRM_SYNCOBJ_CREATE_SIGNALED)
		return -EINVAL;

	return drm_syncobj_create_as_handle(file_private,
					    &args->handle, args->flags);
}

int
drm_syncobj_destroy_ioctl(struct drm_device *dev, void *data,
			  struct drm_file *file_private)
{
	struct drm_syncobj_destroy *args = data;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
		return -EOPNOTSUPP;

	/* make sure padding is empty */
	if (args->pad)
		return -EINVAL;
	return drm_syncobj_destroy(file_private, args->handle);
}

int
drm_syncobj_handle_to_fd_ioctl(struct drm_device *dev, void *data,
				   struct drm_file *file_private)
{
	struct drm_syncobj_handle *args = data;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
		return -EOPNOTSUPP;

	if (args->pad)
		return -EINVAL;

	if (args->flags != 0 &&
	    args->flags != DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
		return -EINVAL;

	if (args->flags & DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE)
		return drm_syncobj_export_sync_file(file_private, args->handle,
						    &args->fd);

	return drm_syncobj_handle_to_fd(file_private, args->handle,
					&args->fd);
}

int
drm_syncobj_fd_to_handle_ioctl(struct drm_device *dev, void *data,
				   struct drm_file *file_private)
{
	struct drm_syncobj_handle *args = data;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
		return -EOPNOTSUPP;

	if (args->pad)
		return -EINVAL;

	if (args->flags != 0 &&
	    args->flags != DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
		return -EINVAL;

	if (args->flags & DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE)
		return drm_syncobj_import_sync_file_fence(file_private,
							  args->fd,
							  args->handle);

	return drm_syncobj_fd_to_handle(file_private, args->fd,
					&args->handle);
}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  837) static int drm_syncobj_transfer_to_timeline(struct drm_file *file_private,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  838) 					    struct drm_syncobj_transfer *args)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  839) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  840) 	struct drm_syncobj *timeline_syncobj = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  841) 	struct dma_fence *fence;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  842) 	struct dma_fence_chain *chain;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  843) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  844) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  845) 	timeline_syncobj = drm_syncobj_find(file_private, args->dst_handle);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  846) 	if (!timeline_syncobj) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  847) 		return -ENOENT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  848) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  849) 	ret = drm_syncobj_find_fence(file_private, args->src_handle,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  850) 				     args->src_point, args->flags,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  851) 				     &fence);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  852) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  853) 		goto err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  854) 	chain = kzalloc(sizeof(struct dma_fence_chain), GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  855) 	if (!chain) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  856) 		ret = -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  857) 		goto err1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  858) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  859) 	drm_syncobj_add_point(timeline_syncobj, chain, fence, args->dst_point);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  860) err1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  861) 	dma_fence_put(fence);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  862) err:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  863) 	drm_syncobj_put(timeline_syncobj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  864) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  865) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  866) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  867) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  868) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  869) drm_syncobj_transfer_to_binary(struct drm_file *file_private,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  870) 			       struct drm_syncobj_transfer *args)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  871) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  872) 	struct drm_syncobj *binary_syncobj = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  873) 	struct dma_fence *fence;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  874) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  875) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  876) 	binary_syncobj = drm_syncobj_find(file_private, args->dst_handle);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  877) 	if (!binary_syncobj)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  878) 		return -ENOENT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  879) 	ret = drm_syncobj_find_fence(file_private, args->src_handle,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  880) 				     args->src_point, args->flags, &fence);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  881) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  882) 		goto err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  883) 	drm_syncobj_replace_fence(binary_syncobj, fence);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  884) 	dma_fence_put(fence);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  885) err:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  886) 	drm_syncobj_put(binary_syncobj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  887) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  888) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  889) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  890) int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  891) drm_syncobj_transfer_ioctl(struct drm_device *dev, void *data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  892) 			   struct drm_file *file_private)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  893) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  894) 	struct drm_syncobj_transfer *args = data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  895) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  896) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  897) 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  898) 		return -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  899) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  900) 	if (args->pad)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  901) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  902) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  903) 	if (args->dst_point)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  904) 		ret = drm_syncobj_transfer_to_timeline(file_private, args);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  905) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  906) 		ret = drm_syncobj_transfer_to_binary(file_private, args);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  907) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  908) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  909) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  910) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  911) static void syncobj_wait_fence_func(struct dma_fence *fence,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  912) 				    struct dma_fence_cb *cb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  913) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  914) 	struct syncobj_wait_entry *wait =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  915) 		container_of(cb, struct syncobj_wait_entry, fence_cb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  916) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  917) 	wake_up_process(wait->task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  918) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  919) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  920) static void syncobj_wait_syncobj_func(struct drm_syncobj *syncobj,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  921) 				      struct syncobj_wait_entry *wait)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  922) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  923) 	struct dma_fence *fence;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  924) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  925) 	/* This happens inside the syncobj lock */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  926) 	fence = rcu_dereference_protected(syncobj->fence,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  927) 					  lockdep_is_held(&syncobj->lock));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  928) 	dma_fence_get(fence);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  929) 	if (!fence || dma_fence_chain_find_seqno(&fence, wait->point)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  930) 		dma_fence_put(fence);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  931) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  932) 	} else if (!fence) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  933) 		wait->fence = dma_fence_get_stub();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  934) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  935) 		wait->fence = fence;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  936) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  937) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  938) 	wake_up_process(wait->task);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  939) 	list_del_init(&wait->node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  940) }

static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
						  void __user *user_points,
						  uint32_t count,
						  uint32_t flags,
						  signed long timeout,
						  uint32_t *idx)
{
	struct syncobj_wait_entry *entries;
	struct dma_fence *fence;
	uint64_t *points;
	uint32_t signaled_count, i;

	points = kmalloc_array(count, sizeof(*points), GFP_KERNEL);
	if (points == NULL)
		return -ENOMEM;

	if (!user_points) {
		memset(points, 0, count * sizeof(uint64_t));
	} else if (copy_from_user(points, user_points,
				  sizeof(uint64_t) * count)) {
		timeout = -EFAULT;
		goto err_free_points;
	}

	entries = kcalloc(count, sizeof(*entries), GFP_KERNEL);
	if (!entries) {
		timeout = -ENOMEM;
		goto err_free_points;
	}
	/* Walk the list of sync objects and initialize entries.  We do
	 * this up-front so that we can properly return -EINVAL if there is
	 * a syncobj with a missing fence and then never have the chance of
	 * returning -EINVAL again.
	 */
	signaled_count = 0;
	for (i = 0; i < count; ++i) {
		struct dma_fence *fence;

		entries[i].task = current;
		entries[i].point = points[i];
		fence = drm_syncobj_fence_get(syncobjs[i]);
		if (!fence || dma_fence_chain_find_seqno(&fence, points[i])) {
			dma_fence_put(fence);
			if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
				continue;
			} else {
				timeout = -EINVAL;
				goto cleanup_entries;
			}
		}

		if (fence)
			entries[i].fence = fence;
		else
			entries[i].fence = dma_fence_get_stub();

		if ((flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE) ||
		    dma_fence_is_signaled(entries[i].fence)) {
			if (signaled_count == 0 && idx)
				*idx = i;
			signaled_count++;
		}
	}

	if (signaled_count == count ||
	    (signaled_count > 0 &&
	     !(flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL)))
		goto cleanup_entries;

	/* There's a very annoying laxness in the dma_fence API here, in
	 * that backends are not required to automatically report when a
	 * fence is signaled prior to fence->ops->enable_signaling() being
	 * called.  So here if we fail to match signaled_count, we need to
	 * fall through and try a 0 timeout wait!
	 */

	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
		for (i = 0; i < count; ++i)
			drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
	}

	do {
		set_current_state(TASK_INTERRUPTIBLE);

		signaled_count = 0;
		for (i = 0; i < count; ++i) {
			fence = entries[i].fence;
			if (!fence)
				continue;

			if ((flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE) ||
			    dma_fence_is_signaled(fence) ||
			    (!entries[i].fence_cb.func &&
			     dma_fence_add_callback(fence,
						    &entries[i].fence_cb,
						    syncobj_wait_fence_func))) {
				/* The fence has been signaled */
				if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL) {
					signaled_count++;
				} else {
					if (idx)
						*idx = i;
					goto done_waiting;
				}
			}
		}

		if (signaled_count == count)
			goto done_waiting;

		if (timeout == 0) {
			timeout = -ETIME;
			goto done_waiting;
		}

		if (signal_pending(current)) {
			timeout = -ERESTARTSYS;
			goto done_waiting;
		}

		timeout = schedule_timeout(timeout);
	} while (1);

done_waiting:
	__set_current_state(TASK_RUNNING);

cleanup_entries:
	for (i = 0; i < count; ++i) {
		drm_syncobj_remove_wait(syncobjs[i], &entries[i]);
		if (entries[i].fence_cb.func)
			dma_fence_remove_callback(entries[i].fence,
						  &entries[i].fence_cb);
		dma_fence_put(entries[i].fence);
	}
	kfree(entries);

err_free_points:
	kfree(points);

	return timeout;
}

/**
 * drm_timeout_abs_to_jiffies - calculate jiffies timeout from absolute value
 *
 * @timeout_nsec: timeout nsec component in ns, 0 for poll
 *
 * Calculate the timeout in jiffies from an absolute time in sec/nsec.
 */
signed long drm_timeout_abs_to_jiffies(int64_t timeout_nsec)
{
	ktime_t abs_timeout, now;
	u64 timeout_ns, timeout_jiffies64;

	/* make 0 timeout means poll - absolute 0 doesn't seem valid */
	if (timeout_nsec == 0)
		return 0;

	abs_timeout = ns_to_ktime(timeout_nsec);
	now = ktime_get();

	if (!ktime_after(abs_timeout, now))
		return 0;

	timeout_ns = ktime_to_ns(ktime_sub(abs_timeout, now));

	timeout_jiffies64 = nsecs_to_jiffies64(timeout_ns);
	/* clamp timeout to avoid infinite timeout */
	if (timeout_jiffies64 >= MAX_SCHEDULE_TIMEOUT - 1)
		return MAX_SCHEDULE_TIMEOUT - 1;

	return timeout_jiffies64 + 1;
}
EXPORT_SYMBOL(drm_timeout_abs_to_jiffies);

static int drm_syncobj_array_wait(struct drm_device *dev,
				  struct drm_file *file_private,
				  struct drm_syncobj_wait *wait,
				  struct drm_syncobj_timeline_wait *timeline_wait,
				  struct drm_syncobj **syncobjs, bool timeline)
{
	signed long timeout = 0;
	uint32_t first = ~0;

	if (!timeline) {
		timeout = drm_timeout_abs_to_jiffies(wait->timeout_nsec);
		timeout = drm_syncobj_array_wait_timeout(syncobjs,
							 NULL,
							 wait->count_handles,
							 wait->flags,
							 timeout, &first);
		if (timeout < 0)
			return timeout;
		wait->first_signaled = first;
	} else {
		timeout = drm_timeout_abs_to_jiffies(timeline_wait->timeout_nsec);
		timeout = drm_syncobj_array_wait_timeout(syncobjs,
							 u64_to_user_ptr(timeline_wait->points),
							 timeline_wait->count_handles,
							 timeline_wait->flags,
							 timeout, &first);
		if (timeout < 0)
			return timeout;
		timeline_wait->first_signaled = first;
	}
	return 0;
}

static int drm_syncobj_array_find(struct drm_file *file_private,
				  void __user *user_handles,
				  uint32_t count_handles,
				  struct drm_syncobj ***syncobjs_out)
{
	uint32_t i, *handles;
	struct drm_syncobj **syncobjs;
	int ret;

	handles = kmalloc_array(count_handles, sizeof(*handles), GFP_KERNEL);
	if (handles == NULL)
		return -ENOMEM;

	if (copy_from_user(handles, user_handles,
			   sizeof(uint32_t) * count_handles)) {
		ret = -EFAULT;
		goto err_free_handles;
	}

	syncobjs = kmalloc_array(count_handles, sizeof(*syncobjs), GFP_KERNEL);
	if (syncobjs == NULL) {
		ret = -ENOMEM;
		goto err_free_handles;
	}

	for (i = 0; i < count_handles; i++) {
		syncobjs[i] = drm_syncobj_find(file_private, handles[i]);
		if (!syncobjs[i]) {
			ret = -ENOENT;
			goto err_put_syncobjs;
		}
	}

	kfree(handles);
	*syncobjs_out = syncobjs;
	return 0;

err_put_syncobjs:
	while (i-- > 0)
		drm_syncobj_put(syncobjs[i]);
	kfree(syncobjs);
err_free_handles:
	kfree(handles);

	return ret;
}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1197) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1198) static void drm_syncobj_array_free(struct drm_syncobj **syncobjs,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1199) 				   uint32_t count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1200) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1201) 	uint32_t i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1202) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1203) 	for (i = 0; i < count; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1204) 		drm_syncobj_put(syncobjs[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1205) 	kfree(syncobjs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1206) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1207) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1208) int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1209) drm_syncobj_wait_ioctl(struct drm_device *dev, void *data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1210) 		       struct drm_file *file_private)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1211) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1212) 	struct drm_syncobj_wait *args = data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1213) 	struct drm_syncobj **syncobjs;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1214) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1215) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1216) 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1217) 		return -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1218) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1219) 	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1220) 			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1221) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1222) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1223) 	if (args->count_handles == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1224) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1225) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1226) 	ret = drm_syncobj_array_find(file_private,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1227) 				     u64_to_user_ptr(args->handles),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1228) 				     args->count_handles,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1229) 				     &syncobjs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1230) 	if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1231) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1232) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1233) 	ret = drm_syncobj_array_wait(dev, file_private,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1234) 				     args, NULL, syncobjs, false);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1235) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1236) 	drm_syncobj_array_free(syncobjs, args->count_handles);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1237) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1238) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1239) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1240) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1241) int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1242) drm_syncobj_timeline_wait_ioctl(struct drm_device *dev, void *data,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1243) 				struct drm_file *file_private)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1244) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1245) 	struct drm_syncobj_timeline_wait *args = data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1246) 	struct drm_syncobj **syncobjs;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1247) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1248) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1249) 	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1250) 		return -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1251) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1252) 	if (args->flags & ~(DRM_SYNCOBJ_WAIT_FLAGS_WAIT_ALL |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1253) 			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1254) 			    DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1255) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1256) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1257) 	if (args->count_handles == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1258) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1259) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1260) 	ret = drm_syncobj_array_find(file_private,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1261) 				     u64_to_user_ptr(args->handles),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1262) 				     args->count_handles,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1263) 				     &syncobjs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1264) 	if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1265) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1266) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1267) 	ret = drm_syncobj_array_wait(dev, file_private,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1268) 				     NULL, args, syncobjs, true);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1269) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1270) 	drm_syncobj_array_free(syncobjs, args->count_handles);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1271) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1272) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1273) }

int
drm_syncobj_reset_ioctl(struct drm_device *dev, void *data,
			struct drm_file *file_private)
{
	struct drm_syncobj_array *args = data;
	struct drm_syncobj **syncobjs;
	uint32_t i;
	int ret;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
		return -EOPNOTSUPP;

	if (args->pad != 0)
		return -EINVAL;

	if (args->count_handles == 0)
		return -EINVAL;

	ret = drm_syncobj_array_find(file_private,
				     u64_to_user_ptr(args->handles),
				     args->count_handles,
				     &syncobjs);
	if (ret < 0)
		return ret;

	for (i = 0; i < args->count_handles; i++)
		drm_syncobj_replace_fence(syncobjs[i], NULL);

	drm_syncobj_array_free(syncobjs, args->count_handles);

	return 0;
}

int
drm_syncobj_signal_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file_private)
{
	struct drm_syncobj_array *args = data;
	struct drm_syncobj **syncobjs;
	uint32_t i;
	int ret;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ))
		return -EOPNOTSUPP;

	if (args->pad != 0)
		return -EINVAL;

	if (args->count_handles == 0)
		return -EINVAL;

	ret = drm_syncobj_array_find(file_private,
				     u64_to_user_ptr(args->handles),
				     args->count_handles,
				     &syncobjs);
	if (ret < 0)
		return ret;

	for (i = 0; i < args->count_handles; i++)
		drm_syncobj_assign_null_handle(syncobjs[i]);

	drm_syncobj_array_free(syncobjs, args->count_handles);

	return ret;
}

int
drm_syncobj_timeline_signal_ioctl(struct drm_device *dev, void *data,
				  struct drm_file *file_private)
{
	struct drm_syncobj_timeline_array *args = data;
	struct drm_syncobj **syncobjs;
	struct dma_fence_chain **chains;
	uint64_t *points;
	uint32_t i, j;
	int ret;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
		return -EOPNOTSUPP;

	if (args->flags != 0)
		return -EINVAL;

	if (args->count_handles == 0)
		return -EINVAL;

	ret = drm_syncobj_array_find(file_private,
				     u64_to_user_ptr(args->handles),
				     args->count_handles,
				     &syncobjs);
	if (ret < 0)
		return ret;

	points = kmalloc_array(args->count_handles, sizeof(*points),
			       GFP_KERNEL);
	if (!points) {
		ret = -ENOMEM;
		goto out;
	}
	if (!u64_to_user_ptr(args->points)) {
		memset(points, 0, args->count_handles * sizeof(uint64_t));
	} else if (copy_from_user(points, u64_to_user_ptr(args->points),
				  sizeof(uint64_t) * args->count_handles)) {
		ret = -EFAULT;
		goto err_points;
	}

	chains = kmalloc_array(args->count_handles, sizeof(void *), GFP_KERNEL);
	if (!chains) {
		ret = -ENOMEM;
		goto err_points;
	}
	for (i = 0; i < args->count_handles; i++) {
		chains[i] = kzalloc(sizeof(struct dma_fence_chain), GFP_KERNEL);
		if (!chains[i]) {
			for (j = 0; j < i; j++)
				kfree(chains[j]);
			ret = -ENOMEM;
			goto err_chains;
		}
	}

	for (i = 0; i < args->count_handles; i++) {
		struct dma_fence *fence = dma_fence_get_stub();

		drm_syncobj_add_point(syncobjs[i], chains[i],
				      fence, points[i]);
		dma_fence_put(fence);
	}
err_chains:
	kfree(chains);
err_points:
	kfree(points);
out:
	drm_syncobj_array_free(syncobjs, args->count_handles);

	return ret;
}

int drm_syncobj_query_ioctl(struct drm_device *dev, void *data,
			    struct drm_file *file_private)
{
	struct drm_syncobj_timeline_array *args = data;
	struct drm_syncobj **syncobjs;
	uint64_t __user *points = u64_to_user_ptr(args->points);
	uint32_t i;
	int ret;

	if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE))
		return -EOPNOTSUPP;

	if (args->flags & ~DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED)
		return -EINVAL;

	if (args->count_handles == 0)
		return -EINVAL;

	ret = drm_syncobj_array_find(file_private,
				     u64_to_user_ptr(args->handles),
				     args->count_handles,
				     &syncobjs);
	if (ret < 0)
		return ret;

	for (i = 0; i < args->count_handles; i++) {
		struct dma_fence_chain *chain;
		struct dma_fence *fence;
		uint64_t point;

		fence = drm_syncobj_fence_get(syncobjs[i]);
		chain = to_dma_fence_chain(fence);
		if (chain) {
			struct dma_fence *iter, *last_signaled =
				dma_fence_get(fence);

			if (args->flags &
			    DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED) {
				point = fence->seqno;
			} else {
				dma_fence_chain_for_each(iter, fence) {
					if (iter->context != fence->context) {
						dma_fence_put(iter);
						/*
						 * It is most likely that the
						 * timeline has unordered
						 * points.
						 */
						break;
					}
					dma_fence_put(last_signaled);
					last_signaled = dma_fence_get(iter);
				}
				point = dma_fence_is_signaled(last_signaled) ?
					last_signaled->seqno :
					to_dma_fence_chain(last_signaled)->prev_seqno;
			}
			dma_fence_put(last_signaled);
		} else {
			point = 0;
		}
		dma_fence_put(fence);
		ret = copy_to_user(&points[i], &point, sizeof(uint64_t));
		ret = ret ? -EFAULT : 0;
		if (ret)
			break;
	}
	drm_syncobj_array_free(syncobjs, args->count_handles);

	return ret;
}