Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0+
/*
 * Copyright (C) 2016 Oracle.  All Rights Reserved.
 * Author: Darrick J. Wong <darrick.wong@oracle.com>
 */
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_shared.h"
#include "xfs_format.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_mount.h"
#include "xfs_defer.h"
#include "xfs_trans.h"
#include "xfs_buf_item.h"
#include "xfs_inode.h"
#include "xfs_inode_item.h"
#include "xfs_trace.h"
#include "xfs_icache.h"
#include "xfs_log.h"

/*
 * Deferred Operations in XFS
 *
 * Due to the way locking rules work in XFS, certain transactions (block
 * mapping and unmapping, typically) have permanent reservations so that
 * we can roll the transaction to adhere to AG locking order rules and
 * to unlock buffers between metadata updates.  Prior to rmap/reflink,
 * the mapping code had a mechanism to perform these deferrals for
 * extents that were going to be freed; this code makes that facility
 * more generic.
 *
 * When adding the reverse mapping and reflink features, it became
 * necessary to perform complex multi-transaction remapping operations
 * to comply with AG locking order rules, and to be able to spread a
 * single refcount update operation (an operation on an n-block extent
 * can update as many as n records!) among multiple transactions.  XFS
 * can roll a transaction to facilitate this, but using this facility
 * requires us to log "intent" items in case log recovery needs to
 * redo the operation, and to log "done" items to indicate that redo
 * is not necessary.
 *
 * Deferred work is tracked in xfs_defer_pending items.  Each pending
 * item tracks one type of deferred work.  Incoming work items (which
 * have not yet had an intent logged) are attached to a pending item
 * on the dop_intake list, where they wait for the caller to finish
 * the deferred operations.
 *
 * Finishing a set of deferred operations is an involved process.  To
 * start, we define "rolling a deferred-op transaction" as follows:
 *
 * > For each xfs_defer_pending item on the dop_intake list,
 *   - Sort the work items in AG order.  XFS locking
 *     order rules require us to lock buffers in AG order.
 *   - Create a log intent item for that type.
 *   - Attach it to the pending item.
 *   - Move the pending item from the dop_intake list to the
 *     dop_pending list.
 * > Roll the transaction.
 *
 * NOTE: To avoid exceeding the transaction reservation, we limit the
 * number of items that we attach to a given xfs_defer_pending.
 *
 * The actual finishing process looks like this:
 *
 * > For each xfs_defer_pending in the dop_pending list,
 *   - Roll the deferred-op transaction as above.
 *   - Create a log done item for that type, and attach it to the
 *     log intent item.
 *   - For each work item attached to the log intent item,
 *     * Perform the described action.
 *     * Attach the work item to the log done item.
 *     * If the result of doing the work was -EAGAIN, ->finish work
 *       wants a new transaction.  See the "Requesting a Fresh
 *       Transaction while Finishing Deferred Work" section below for
 *       details.
 *
 * The key here is that we must log an intent item for all pending
 * work items every time we roll the transaction, and that we must log
 * a done item as soon as the work is completed.  With this mechanism
 * we can perform complex remapping operations, chaining intent items
 * as needed.
 *
 * Requesting a Fresh Transaction while Finishing Deferred Work
 *
 * If ->finish_item decides that it needs a fresh transaction to
 * finish the work, it must ask its caller (xfs_defer_finish) for a
 * continuation.  The most likely cause of this circumstance is a
 * refcount adjust function deciding that it has logged enough items
 * to be at risk of exceeding the transaction reservation.
 *
 * To get a fresh transaction, we want to log the existing log done
 * item to prevent the log intent item from replaying, immediately log
 * a new log intent item with the unfinished work items, roll the
 * transaction, and re-call ->finish_item wherever it left off.  The
 * log done item and the new log intent item must be in the same
 * transaction or atomicity cannot be guaranteed; defer_finish ensures
 * that this happens.
 *
 * This requires some coordination between ->finish_item and
 * defer_finish.  Upon deciding to request a new transaction,
 * ->finish_item should update the current work item to reflect the
 * unfinished work.  Next, it should reset the log done item's list
 * count to the number of items finished, and return -EAGAIN.
 * defer_finish sees the -EAGAIN, logs the new log intent item
 * with the remaining work items, and leaves the xfs_defer_pending
 * item at the head of the dop_work queue.  Then it rolls the
 * transaction and picks up processing where it left off.
 * ->finish_item must be careful to leave enough transaction
 * reservation to fit the new log intent item.
 *
 * This is an example of remapping the extent (E, E+B) into file X at
 * offset A and dealing with the extent (C, C+B) already being mapped
 * there:
 * +-------------------------------------------------+
 * | Unmap file X startblock C offset A length B     | t0
 * | Intent to reduce refcount for extent (C, B)     |
 * | Intent to remove rmap (X, C, A, B)              |
 * | Intent to free extent (D, 1) (bmbt block)       |
 * | Intent to map (X, A, B) at startblock E         |
 * +-------------------------------------------------+
 * | Map file X startblock E offset A length B       | t1
 * | Done mapping (X, E, A, B)                       |
 * | Intent to increase refcount for extent (E, B)   |
 * | Intent to add rmap (X, E, A, B)                 |
 * +-------------------------------------------------+
 * | Reduce refcount for extent (C, B)               | t2
 * | Done reducing refcount for extent (C, 9)        |
 * | Intent to reduce refcount for extent (C+9, B-9) |
 * | (ran out of space after 9 refcount updates)     |
 * +-------------------------------------------------+
 * | Reduce refcount for extent (C+9, B-9)           | t3
 * | Done reducing refcount for extent (C+9, B-9)    |
 * | Increase refcount for extent (E, B)             |
 * | Done increasing refcount for extent (E, B)      |
 * | Intent to free extent (C, B)                    |
 * | Intent to free extent (F, 1) (refcountbt block) |
 * | Intent to remove rmap (F, 1, REFC)              |
 * +-------------------------------------------------+
 * | Remove rmap (X, C, A, B)                        | t4
 * | Done removing rmap (X, C, A, B)                 |
 * | Add rmap (X, E, A, B)                           |
 * | Done adding rmap (X, E, A, B)                   |
 * | Remove rmap (F, 1, REFC)                        |
 * | Done removing rmap (F, 1, REFC)                 |
 * +-------------------------------------------------+
 * | Free extent (C, B)                              | t5
 * | Done freeing extent (C, B)                      |
 * | Free extent (D, 1)                              |
 * | Done freeing extent (D, 1)                      |
 * | Free extent (F, 1)                              |
 * | Done freeing extent (F, 1)                      |
 * +-------------------------------------------------+
 *
 * If we should crash before t2 commits, log recovery replays
 * the following intent items:
 *
 * - Intent to reduce refcount for extent (C, B)
 * - Intent to remove rmap (X, C, A, B)
 * - Intent to free extent (D, 1) (bmbt block)
 * - Intent to increase refcount for extent (E, B)
 * - Intent to add rmap (X, E, A, B)
 *
 * In the process of recovering, it should also generate and take care
 * of these intent items:
 *
 * - Intent to free extent (C, B)
 * - Intent to free extent (F, 1) (refcountbt block)
 * - Intent to remove rmap (F, 1, REFC)
 *
 * Note that the continuation requested between t2 and t3 is likely to
 * recur.
 */

static const struct xfs_defer_op_type *defer_op_types[] = {
	[XFS_DEFER_OPS_TYPE_BMAP]	= &xfs_bmap_update_defer_type,
	[XFS_DEFER_OPS_TYPE_REFCOUNT]	= &xfs_refcount_update_defer_type,
	[XFS_DEFER_OPS_TYPE_RMAP]	= &xfs_rmap_update_defer_type,
	[XFS_DEFER_OPS_TYPE_FREE]	= &xfs_extent_free_defer_type,
	[XFS_DEFER_OPS_TYPE_AGFL_FREE]	= &xfs_agfl_free_defer_type,
};

static void
xfs_defer_create_intent(
	struct xfs_trans		*tp,
	struct xfs_defer_pending	*dfp,
	bool				sort)
{
	const struct xfs_defer_op_type	*ops = defer_op_types[dfp->dfp_type];

	if (!dfp->dfp_intent)
		dfp->dfp_intent = ops->create_intent(tp, &dfp->dfp_work,
						     dfp->dfp_count, sort);
}

/*
 * For each pending item in the intake list, log its intent item and the
 * associated extents, then add the entire intake list to the end of
 * the pending list.
 */
STATIC void
xfs_defer_create_intents(
	struct xfs_trans		*tp)
{
	struct xfs_defer_pending	*dfp;

	list_for_each_entry(dfp, &tp->t_dfops, dfp_list) {
		trace_xfs_defer_create_intent(tp->t_mountp, dfp);
		xfs_defer_create_intent(tp, dfp, true);
	}
}

/* Abort all the intents that were committed. */
STATIC void
xfs_defer_trans_abort(
	struct xfs_trans		*tp,
	struct list_head		*dop_pending)
{
	struct xfs_defer_pending	*dfp;
	const struct xfs_defer_op_type	*ops;

	trace_xfs_defer_trans_abort(tp, _RET_IP_);

	/* Abort intent items that don't have a done item. */
	list_for_each_entry(dfp, dop_pending, dfp_list) {
		ops = defer_op_types[dfp->dfp_type];
		trace_xfs_defer_pending_abort(tp->t_mountp, dfp);
		if (dfp->dfp_intent && !dfp->dfp_done) {
			ops->abort_intent(dfp->dfp_intent);
			dfp->dfp_intent = NULL;
		}
	}
}

/* Roll a transaction so we can do some deferred op processing. */
STATIC int
xfs_defer_trans_roll(
	struct xfs_trans		**tpp)
{
	struct xfs_trans		*tp = *tpp;
	struct xfs_buf_log_item		*bli;
	struct xfs_inode_log_item	*ili;
	struct xfs_log_item		*lip;
	struct xfs_buf			*bplist[XFS_DEFER_OPS_NR_BUFS];
	struct xfs_inode		*iplist[XFS_DEFER_OPS_NR_INODES];
	unsigned int			ordered = 0; /* bitmap */
	int				bpcount = 0, ipcount = 0;
	int				i;
	int				error;

	BUILD_BUG_ON(NBBY * sizeof(ordered) < XFS_DEFER_OPS_NR_BUFS);

	list_for_each_entry(lip, &tp->t_items, li_trans) {
		switch (lip->li_type) {
		case XFS_LI_BUF:
			bli = container_of(lip, struct xfs_buf_log_item,
					   bli_item);
			if (bli->bli_flags & XFS_BLI_HOLD) {
				if (bpcount >= XFS_DEFER_OPS_NR_BUFS) {
					ASSERT(0);
					return -EFSCORRUPTED;
				}
				if (bli->bli_flags & XFS_BLI_ORDERED)
					ordered |= (1U << bpcount);
				else
					xfs_trans_dirty_buf(tp, bli->bli_buf);
				bplist[bpcount++] = bli->bli_buf;
			}
			break;
		case XFS_LI_INODE:
			ili = container_of(lip, struct xfs_inode_log_item,
					   ili_item);
			if (ili->ili_lock_flags == 0) {
				if (ipcount >= XFS_DEFER_OPS_NR_INODES) {
					ASSERT(0);
					return -EFSCORRUPTED;
				}
				xfs_trans_log_inode(tp, ili->ili_inode,
						    XFS_ILOG_CORE);
				iplist[ipcount++] = ili->ili_inode;
			}
			break;
		default:
			break;
		}
	}

	trace_xfs_defer_trans_roll(tp, _RET_IP_);

	/*
	 * Roll the transaction.  Rolling always produces a new transaction
	 * (even if committing the old one fails!) to hand back to the caller,
	 * so we join the held resources to the new transaction so that we
	 * always return with the held resources joined to @tpp, no matter
	 * what happened.
	 */
	error = xfs_trans_roll(tpp);
	tp = *tpp;

	/* Rejoin the joined inodes. */
	for (i = 0; i < ipcount; i++)
		xfs_trans_ijoin(tp, iplist[i], 0);

	/* Rejoin the buffers and dirty them so the log moves forward. */
	for (i = 0; i < bpcount; i++) {
		xfs_trans_bjoin(tp, bplist[i]);
		if (ordered & (1U << i))
			xfs_trans_ordered_buf(tp, bplist[i]);
		xfs_trans_bhold(tp, bplist[i]);
	}

	if (error)
		trace_xfs_defer_trans_roll_error(tp, error);
	return error;
}

/*
 * Free up any items left in the list.
 */
static void
xfs_defer_cancel_list(
	struct xfs_mount		*mp,
	struct list_head		*dop_list)
{
	struct xfs_defer_pending	*dfp;
	struct xfs_defer_pending	*pli;
	struct list_head		*pwi;
	struct list_head		*n;
	const struct xfs_defer_op_type	*ops;

	/*
	 * Free the pending items.  Caller should already have arranged
	 * for the intent items to be released.
	 */
	list_for_each_entry_safe(dfp, pli, dop_list, dfp_list) {
		ops = defer_op_types[dfp->dfp_type];
		trace_xfs_defer_cancel_list(mp, dfp);
		list_del(&dfp->dfp_list);
		list_for_each_safe(pwi, n, &dfp->dfp_work) {
			list_del(pwi);
			dfp->dfp_count--;
			ops->cancel_item(pwi);
		}
		ASSERT(dfp->dfp_count == 0);
		kmem_free(dfp);
	}
}

/*
 * Prevent a log intent item from pinning the tail of the log by logging a
 * done item to release the intent item; and then log a new intent item.
 * The caller should provide a fresh transaction and roll it after we're done.
 */
static int
xfs_defer_relog(
	struct xfs_trans		**tpp,
	struct list_head		*dfops)
{
	struct xlog			*log = (*tpp)->t_mountp->m_log;
	struct xfs_defer_pending	*dfp;
	xfs_lsn_t			threshold_lsn = NULLCOMMITLSN;

	ASSERT((*tpp)->t_flags & XFS_TRANS_PERM_LOG_RES);

	list_for_each_entry(dfp, dfops, dfp_list) {
		/*
		 * If the log intent item for this deferred op is not a part of
		 * the current log checkpoint, relog the intent item to keep
		 * the log tail moving forward.  We're ok with this being racy
		 * because an incorrect decision means we'll be a little slower
		 * at pushing the tail.
		 */
		if (dfp->dfp_intent == NULL ||
		    xfs_log_item_in_current_chkpt(dfp->dfp_intent))
			continue;

		/*
		 * Figure out where we need the tail to be in order to maintain
		 * the minimum required free space in the log.  Only sample
		 * the log threshold once per call.
		 */
		if (threshold_lsn == NULLCOMMITLSN) {
			threshold_lsn = xlog_grant_push_threshold(log, 0);
			if (threshold_lsn == NULLCOMMITLSN)
				break;
		}
		if (XFS_LSN_CMP(dfp->dfp_intent->li_lsn, threshold_lsn) >= 0)
			continue;

		trace_xfs_defer_relog_intent((*tpp)->t_mountp, dfp);
		XFS_STATS_INC((*tpp)->t_mountp, defer_relog);
		dfp->dfp_intent = xfs_trans_item_relog(dfp->dfp_intent, *tpp);
	}

	if ((*tpp)->t_flags & XFS_TRANS_DIRTY)
		return xfs_defer_trans_roll(tpp);
	return 0;
}

/*
 * Log an intent-done item for the first pending intent, and finish the work
 * items.
 */
static int
xfs_defer_finish_one(
	struct xfs_trans		*tp,
	struct xfs_defer_pending	*dfp)
{
	const struct xfs_defer_op_type	*ops = defer_op_types[dfp->dfp_type];
	struct xfs_btree_cur		*state = NULL;
	struct list_head		*li, *n;
	int				error;

	trace_xfs_defer_pending_finish(tp->t_mountp, dfp);

	dfp->dfp_done = ops->create_done(tp, dfp->dfp_intent, dfp->dfp_count);
	list_for_each_safe(li, n, &dfp->dfp_work) {
		list_del(li);
		dfp->dfp_count--;
		error = ops->finish_item(tp, dfp->dfp_done, li, &state);
		if (error == -EAGAIN) {
			/*
			 * Caller wants a fresh transaction; put the work item
			 * back on the list and log a new log intent item to
			 * replace the old one.  See "Requesting a Fresh
			 * Transaction while Finishing Deferred Work" above.
			 */
			list_add(li, &dfp->dfp_work);
			dfp->dfp_count++;
			dfp->dfp_done = NULL;
			dfp->dfp_intent = NULL;
			xfs_defer_create_intent(tp, dfp, false);
		}

		if (error)
			goto out;
	}

	/* Done with the dfp, free it. */
	list_del(&dfp->dfp_list);
	kmem_free(dfp);
out:
	if (ops->finish_cleanup)
		ops->finish_cleanup(tp, state, error);
	return error;
}

/*
 * Finish all the pending work.  This involves logging intent items for
 * any work items that wandered in since the last transaction roll (if
 * one has even happened), rolling the transaction, and finishing the
 * work items in the first item on the logged-and-pending list.
 */
int
xfs_defer_finish_noroll(
	struct xfs_trans		**tp)
{
	struct xfs_defer_pending	*dfp;
	int				error = 0;
	LIST_HEAD(dop_pending);

	ASSERT((*tp)->t_flags & XFS_TRANS_PERM_LOG_RES);

	trace_xfs_defer_finish(*tp, _RET_IP_);

	/* Until we run out of pending work to finish... */
	while (!list_empty(&dop_pending) || !list_empty(&(*tp)->t_dfops)) {
		/*
		 * Deferred items that are created in the process of finishing
		 * other deferred work items should be queued at the head of
		 * the pending list, which puts them ahead of the deferred work
		 * that was created by the caller.  This keeps the number of
		 * pending work items to a minimum, which decreases the amount
		 * of time that any one intent item can stick around in memory,
		 * pinning the log tail.
		 */
		xfs_defer_create_intents(*tp);
		list_splice_init(&(*tp)->t_dfops, &dop_pending);

		error = xfs_defer_trans_roll(tp);
		if (error)
			goto out_shutdown;

		/* Possibly relog intent items to keep the log moving. */
		error = xfs_defer_relog(tp, &dop_pending);
		if (error)
			goto out_shutdown;

		dfp = list_first_entry(&dop_pending, struct xfs_defer_pending,
				       dfp_list);
		error = xfs_defer_finish_one(*tp, dfp);
		if (error && error != -EAGAIN)
			goto out_shutdown;
	}

	trace_xfs_defer_finish_done(*tp, _RET_IP_);
	return 0;

out_shutdown:
	xfs_defer_trans_abort(*tp, &dop_pending);
	xfs_force_shutdown((*tp)->t_mountp, SHUTDOWN_CORRUPT_INCORE);
	trace_xfs_defer_finish_error(*tp, error);
	xfs_defer_cancel_list((*tp)->t_mountp, &dop_pending);
	xfs_defer_cancel(*tp);
	return error;
}

int
xfs_defer_finish(
	struct xfs_trans	**tp)
{
	int			error;

	/*
	 * Finish and roll the transaction once more to avoid returning to the
	 * caller with a dirty transaction.
	 */
	error = xfs_defer_finish_noroll(tp);
	if (error)
		return error;
	if ((*tp)->t_flags & XFS_TRANS_DIRTY) {
		error = xfs_defer_trans_roll(tp);
		if (error) {
			xfs_force_shutdown((*tp)->t_mountp,
					   SHUTDOWN_CORRUPT_INCORE);
			return error;
		}
	}

	/* Reset LOWMODE now that we've finished all the dfops. */
	ASSERT(list_empty(&(*tp)->t_dfops));
	(*tp)->t_flags &= ~XFS_TRANS_LOWMODE;
	return 0;
}

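/*
 * Illustrative sketch (editor's addition, not part of the original file):
 * a typical write-path caller queues deferred work on the transaction and
 * then finishes it before committing.  The flow below is simplified and the
 * reservation arguments are placeholders:
 *
 *	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, 0, &tp);
 *	...modify metadata and queue deferred work via xfs_defer_add()...
 *	error = xfs_defer_finish(&tp);	// may roll tp; use the new *tp after
 *	if (!error)
 *		error = xfs_trans_commit(tp);
 */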
void
xfs_defer_cancel(
	struct xfs_trans	*tp)
{
	struct xfs_mount	*mp = tp->t_mountp;

	trace_xfs_defer_cancel(tp, _RET_IP_);
	xfs_defer_cancel_list(mp, &tp->t_dfops);
}

/* Add an item for later deferred processing. */
void
xfs_defer_add(
	struct xfs_trans		*tp,
	enum xfs_defer_ops_type		type,
	struct list_head		*li)
{
	struct xfs_defer_pending	*dfp = NULL;
	const struct xfs_defer_op_type	*ops;

	ASSERT(tp->t_flags & XFS_TRANS_PERM_LOG_RES);
	BUILD_BUG_ON(ARRAY_SIZE(defer_op_types) != XFS_DEFER_OPS_TYPE_MAX);

	/*
	 * Add the item to a pending item at the end of the intake list.
	 * If the last pending item has the same type, reuse it.  Else,
	 * create a new pending item at the end of the intake list.
	 */
	if (!list_empty(&tp->t_dfops)) {
		dfp = list_last_entry(&tp->t_dfops,
				struct xfs_defer_pending, dfp_list);
		ops = defer_op_types[dfp->dfp_type];
		if (dfp->dfp_type != type ||
		    (ops->max_items && dfp->dfp_count >= ops->max_items))
			dfp = NULL;
	}
	if (!dfp) {
		dfp = kmem_alloc(sizeof(struct xfs_defer_pending),
				KM_NOFS);
		dfp->dfp_type = type;
		dfp->dfp_intent = NULL;
		dfp->dfp_done = NULL;
		dfp->dfp_count = 0;
		INIT_LIST_HEAD(&dfp->dfp_work);
		list_add_tail(&dfp->dfp_list, &tp->t_dfops);
	}

	list_add_tail(li, &dfp->dfp_work);
	dfp->dfp_count++;
}

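/*
 * Illustrative sketch (editor's addition, not part of the original file):
 * a caller queues a work item by embedding a list_head in it and passing
 * that list_head to xfs_defer_add(), which links it into dfp_work.  The
 * extent-free path looks roughly like this; field names follow the 5.10
 * struct xfs_extent_free_item but the snippet is an assumption, not a
 * verbatim copy:
 *
 *	new->xefi_startblock = bno;
 *	new->xefi_blockcount = (xfs_extlen_t)len;
 *	xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_FREE, &new->xefi_list);
 */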
/*
 * Move deferred ops from one transaction to another and reset the source to
 * initial state. This is primarily used to carry state forward across
 * transaction rolls with pending dfops.
 */
void
xfs_defer_move(
	struct xfs_trans	*dtp,
	struct xfs_trans	*stp)
{
	list_splice_init(&stp->t_dfops, &dtp->t_dfops);

	/*
	 * Low free space mode was historically controlled by a dfops field.
	 * This meant that low mode state potentially carried across multiple
	 * transaction rolls. Transfer low mode on a dfops move to preserve
	 * that behavior.
	 */
	dtp->t_flags |= (stp->t_flags & XFS_TRANS_LOWMODE);
	stp->t_flags &= ~XFS_TRANS_LOWMODE;
}

/*
 * Prepare a chain of fresh deferred ops work items to be completed later.  Log
 * recovery requires the ability to put off until later the actual finishing
 * work so that it can process unfinished items recovered from the log in
 * correct order.
 *
 * Create and log intent items for all the work that we're capturing so that we
 * can be assured that the items will get replayed if the system goes down
 * before log recovery gets a chance to finish the work it put off.  The entire
 * deferred ops state is transferred to the capture structure and the
 * transaction is then ready for the caller to commit it.  If there are no
 * intent items to capture, this function returns NULL.
 *
 * If capture_ip is not NULL, the capture structure will obtain an extra
 * reference to the inode.
 */
static struct xfs_defer_capture *
xfs_defer_ops_capture(
	struct xfs_trans		*tp,
	struct xfs_inode		*capture_ip)
{
	struct xfs_defer_capture	*dfc;

	if (list_empty(&tp->t_dfops))
		return NULL;

	/* Create an object to capture the defer ops. */
	dfc = kmem_zalloc(sizeof(*dfc), KM_NOFS);
	INIT_LIST_HEAD(&dfc->dfc_list);
	INIT_LIST_HEAD(&dfc->dfc_dfops);

	xfs_defer_create_intents(tp);

	/* Move the dfops chain and transaction state to the capture struct. */
	list_splice_init(&tp->t_dfops, &dfc->dfc_dfops);
	dfc->dfc_tpflags = tp->t_flags & XFS_TRANS_LOWMODE;
	tp->t_flags &= ~XFS_TRANS_LOWMODE;

	/* Capture the remaining block reservations along with the dfops. */
	dfc->dfc_blkres = tp->t_blk_res - tp->t_blk_res_used;
	dfc->dfc_rtxres = tp->t_rtx_res - tp->t_rtx_res_used;

	/* Preserve the log reservation size. */
	dfc->dfc_logres = tp->t_log_res;

	/*
	 * Grab an extra reference to this inode and attach it to the capture
	 * structure.
	 */
	if (capture_ip) {
		ihold(VFS_I(capture_ip));
		dfc->dfc_capture_ip = capture_ip;
	}

	return dfc;
}

/* Release all resources that we used to capture deferred ops. */
void
xfs_defer_ops_release(
	struct xfs_mount		*mp,
	struct xfs_defer_capture	*dfc)
{
	xfs_defer_cancel_list(mp, &dfc->dfc_dfops);
	if (dfc->dfc_capture_ip)
		xfs_irele(dfc->dfc_capture_ip);
	kmem_free(dfc);
}

/*
 * Capture any deferred ops and commit the transaction.  This is the last step
 * needed to finish a log intent item that we recovered from the log.  If any
 * of the deferred ops operate on an inode, the caller must pass in that inode
 * so that the reference can be transferred to the capture structure.  The
 * caller must hold ILOCK_EXCL on the inode, and must unlock it before calling
 * xfs_defer_ops_continue.
 */
int
xfs_defer_ops_capture_and_commit(
	struct xfs_trans		*tp,
	struct xfs_inode		*capture_ip,
	struct list_head		*capture_list)
{
	struct xfs_mount		*mp = tp->t_mountp;
	struct xfs_defer_capture	*dfc;
	int				error;

	ASSERT(!capture_ip || xfs_isilocked(capture_ip, XFS_ILOCK_EXCL));

	/* If we don't capture anything, commit transaction and exit. */
	dfc = xfs_defer_ops_capture(tp, capture_ip);
	if (!dfc)
		return xfs_trans_commit(tp);

	/* Commit the transaction and add the capture structure to the list. */
	error = xfs_trans_commit(tp);
	if (error) {
		xfs_defer_ops_release(mp, dfc);
		return error;
	}

	list_add_tail(&dfc->dfc_list, capture_list);
	return 0;
}

/*
 * Attach a chain of captured deferred ops to a new transaction and free the
 * capture structure.  If an inode was captured, it will be passed back to the
 * caller with ILOCK_EXCL held and joined to the transaction with lockflags==0.
 * The caller now owns the inode reference.
 */
void
xfs_defer_ops_continue(
	struct xfs_defer_capture	*dfc,
	struct xfs_trans		*tp,
	struct xfs_inode		**captured_ipp)
{
	ASSERT(tp->t_flags & XFS_TRANS_PERM_LOG_RES);
	ASSERT(!(tp->t_flags & XFS_TRANS_DIRTY));

	/* Lock and join the captured inode to the new transaction. */
	if (dfc->dfc_capture_ip) {
		xfs_ilock(dfc->dfc_capture_ip, XFS_ILOCK_EXCL);
		xfs_trans_ijoin(tp, dfc->dfc_capture_ip, 0);
	}
	*captured_ipp = dfc->dfc_capture_ip;

	/* Move captured dfops chain and state to the transaction. */
	list_splice_init(&dfc->dfc_dfops, &tp->t_dfops);
	tp->t_flags |= dfc->dfc_tpflags;

	kmem_free(dfc);
}