Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0

/*
 * fs/ext4/fast_commit.c
 *
 * Written by Harshad Shirwadkar <harshadshirwadkar@gmail.com>
 *
 * Ext4 fast commit routines.
 */
#include "ext4.h"
#include "ext4_jbd2.h"
#include "ext4_extents.h"
#include "mballoc.h"

/*
 * Ext4 Fast Commits
 * -----------------
 *
 * Ext4 fast commits implement fine grained journalling for Ext4.
 *
 * Fast commits are organized as a log of tag-length-value (TLV) structs. (See
 * struct ext4_fc_tl). Each TLV contains some delta that is replayed TLV by
 * TLV during the recovery phase. For the scenarios for which we currently
 * don't have replay code, fast commit falls back to full commits.
 * Fast commits record deltas in one of the following three categories.
 *
 * (A) Directory entry updates:
 *
 * - EXT4_FC_TAG_UNLINK		- records directory entry unlink
 * - EXT4_FC_TAG_LINK		- records directory entry link
 * - EXT4_FC_TAG_CREAT		- records inode and directory entry creation
 *
 * (B) File specific data range updates:
 *
 * - EXT4_FC_TAG_ADD_RANGE	- records addition of new blocks to an inode
 * - EXT4_FC_TAG_DEL_RANGE	- records deletion of blocks from an inode
 *
 * (C) Inode metadata (mtime / ctime etc.):
 *
 * - EXT4_FC_TAG_INODE		- records the inode that should be replayed
 *				  during recovery. Note that the iblocks field
 *				  is not replayed; it is instead derived during
 *				  replay.
 *
 * Commit Operation
 * ----------------
 * With fast commits, we maintain all the directory entry operations in the
 * order in which they are issued in an in-memory queue. This queue is flushed
 * to disk during the commit operation. We also maintain a list of inodes
 * that need to be committed during a fast commit in another in-memory queue
 * of inodes. During the commit operation, we commit in the following order:
 *
 * [1] Lock inodes against any further data updates by setting the COMMITTING
 *     state
 * [2] Submit data buffers of all the inodes
 * [3] Wait for [2] to complete
 * [4] Commit all the directory entry updates in the fast commit space
 * [5] Commit all the changed inode structures
 * [6] Write the tail tag (this tag ensures atomicity; please read the
 *     following section for more details)
 * [7] Wait for [4], [5] and [6] to complete
 *
 * All the inode updates must call ext4_fc_start_update() before starting an
 * update. If such an ongoing update is present, fast commit waits for it to
 * complete. The completion of such an update is marked by
 * ext4_fc_stop_update().
 *
 * Fast Commit Ineligibility
 * -------------------------
 * Not all operations are supported by fast commits today (e.g. extended
 * attributes). Fast commit ineligibility is marked by calling one of the
 * two following functions:
 *
 * - ext4_fc_mark_ineligible(): This makes the next fast commit operation
 *   fall back to a full commit. This is useful in case of transient errors.
 *
 * - ext4_fc_start_ineligible() and ext4_fc_stop_ineligible() - These make
 *   all the fast commits that happen between ext4_fc_start_ineligible() and
 *   ext4_fc_stop_ineligible(), plus one fast commit after the call to
 *   ext4_fc_stop_ineligible(), fall back to full commits. Forcing one more
 *   fast commit to fall back to a full commit after the stop call guarantees
 *   that the fast commit ineligible operation contained within
 *   ext4_fc_start_ineligible() and ext4_fc_stop_ineligible() is followed by
 *   at least one full commit.
 *
 * Atomicity of commits
 * --------------------
 * In order to guarantee atomicity during the commit operation, fast commit
 * uses the "EXT4_FC_TAG_TAIL" tag that marks a fast commit as complete. The
 * tail tag contains a CRC of the contents and the TID of the transaction
 * after which this fast commit should be applied. Recovery code replays fast
 * commit logs only if there is at least one valid tail present. For every
 * fast commit operation, there is one tail. This means we may end up with
 * multiple tails in the fast commit space. Here's an example:
 *
 * - Create a new file A and remove existing file B
 * - fsync()
 * - Append contents to file A
 * - Truncate file A
 * - fsync()
 *
 * The fast commit space at the end of the above operations would look like:
 *      [HEAD] [CREAT A] [UNLINK B] [TAIL] [ADD_RANGE A] [DEL_RANGE A] [TAIL]
 *             |<---  Fast Commit 1   --->|<---      Fast Commit 2     ---->|
 *
 * Replay code should thus check for all the valid tails in the FC area.
 *
 * TODOs
 * -----
 * 1) Make fast commit atomic updates more fine grained. Today, a fast commit
 *    eligible update must be protected within ext4_fc_start_update() and
 *    ext4_fc_stop_update(). These routines are called from much higher level
 *    code paths. This can be made more fine grained by combining with
 *    ext4_journal_start().
 *
 * 2) Same as above for ext4_fc_start_ineligible() and
 *    ext4_fc_stop_ineligible().
 *
 * 3) Handle more ineligible cases.
 */

#include <trace/events/ext4.h>
static struct kmem_cache *ext4_fc_dentry_cachep;

static void ext4_end_buffer_io_sync(struct buffer_head *bh, int uptodate)
{
	BUFFER_TRACE(bh, "");
	if (uptodate) {
		ext4_debug("%s: Block %lld up-to-date",
			   __func__, bh->b_blocknr);
		set_buffer_uptodate(bh);
	} else {
		ext4_debug("%s: Block %lld not up-to-date",
			   __func__, bh->b_blocknr);
		clear_buffer_uptodate(bh);
	}

	unlock_buffer(bh);
}

static inline void ext4_fc_reset_inode(struct inode *inode)
{
	struct ext4_inode_info *ei = EXT4_I(inode);

	ei->i_fc_lblk_start = 0;
	ei->i_fc_lblk_len = 0;
}

void ext4_fc_init_inode(struct inode *inode)
{
	struct ext4_inode_info *ei = EXT4_I(inode);

	ext4_fc_reset_inode(inode);
	ext4_clear_inode_state(inode, EXT4_STATE_FC_COMMITTING);
	INIT_LIST_HEAD(&ei->i_fc_list);
	init_waitqueue_head(&ei->i_fc_wait);
	atomic_set(&ei->i_fc_updates, 0);
}

/* This function must be called with sbi->s_fc_lock held. */
static void ext4_fc_wait_committing_inode(struct inode *inode)
__releases(&EXT4_SB(inode->i_sb)->s_fc_lock)
{
	wait_queue_head_t *wq;
	struct ext4_inode_info *ei = EXT4_I(inode);

#if (BITS_PER_LONG < 64)
	DEFINE_WAIT_BIT(wait, &ei->i_state_flags,
			EXT4_STATE_FC_COMMITTING);
	wq = bit_waitqueue(&ei->i_state_flags,
				EXT4_STATE_FC_COMMITTING);
#else
	DEFINE_WAIT_BIT(wait, &ei->i_flags,
			EXT4_STATE_FC_COMMITTING);
	wq = bit_waitqueue(&ei->i_flags,
				EXT4_STATE_FC_COMMITTING);
#endif
	lockdep_assert_held(&EXT4_SB(inode->i_sb)->s_fc_lock);
	prepare_to_wait(wq, &wait.wq_entry, TASK_UNINTERRUPTIBLE);
	spin_unlock(&EXT4_SB(inode->i_sb)->s_fc_lock);
	schedule();
	finish_wait(wq, &wait.wq_entry);
}

/*
 * Inform Ext4's fast commits about the start of an inode update.
 *
 * This function is called by the high level VFS callbacks before
 * performing any inode update. This function blocks if there's an ongoing
 * fast commit on the inode in question.
 */
void ext4_fc_start_update(struct inode *inode)
{
	struct ext4_inode_info *ei = EXT4_I(inode);

	if (!test_opt2(inode->i_sb, JOURNAL_FAST_COMMIT) ||
	    (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY))
		return;

restart:
	spin_lock(&EXT4_SB(inode->i_sb)->s_fc_lock);
	if (list_empty(&ei->i_fc_list))
		goto out;

	if (ext4_test_inode_state(inode, EXT4_STATE_FC_COMMITTING)) {
		ext4_fc_wait_committing_inode(inode);
		goto restart;
	}
out:
	atomic_inc(&ei->i_fc_updates);
	spin_unlock(&EXT4_SB(inode->i_sb)->s_fc_lock);
}

/*
 * Stop an inode update and wake up any waiting fast commits.
 */
void ext4_fc_stop_update(struct inode *inode)
{
	struct ext4_inode_info *ei = EXT4_I(inode);

	if (!test_opt2(inode->i_sb, JOURNAL_FAST_COMMIT) ||
	    (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY))
		return;

	if (atomic_dec_and_test(&ei->i_fc_updates))
		wake_up_all(&ei->i_fc_wait);
}

/*
 * Remove the inode from the fast commit list. If the inode is being
 * committed, we wait until the inode commit is done.
 */
void ext4_fc_del(struct inode *inode)
{
	struct ext4_inode_info *ei = EXT4_I(inode);

	if (!test_opt2(inode->i_sb, JOURNAL_FAST_COMMIT) ||
	    (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY))
		return;

restart:
	spin_lock(&EXT4_SB(inode->i_sb)->s_fc_lock);
	if (list_empty(&ei->i_fc_list)) {
		spin_unlock(&EXT4_SB(inode->i_sb)->s_fc_lock);
		return;
	}

	if (ext4_test_inode_state(inode, EXT4_STATE_FC_COMMITTING)) {
		ext4_fc_wait_committing_inode(inode);
		goto restart;
	}
	list_del_init(&ei->i_fc_list);
	spin_unlock(&EXT4_SB(inode->i_sb)->s_fc_lock);
}

/*
 * Mark the file system as fast commit ineligible. This means that the next
 * commit operation will result in a full jbd2 commit.
 */
void ext4_fc_mark_ineligible(struct super_block *sb, int reason)
{
	struct ext4_sb_info *sbi = EXT4_SB(sb);

	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
	    (EXT4_SB(sb)->s_mount_state & EXT4_FC_REPLAY))
		return;

	ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
	WARN_ON(reason >= EXT4_FC_REASON_MAX);
	sbi->s_fc_stats.fc_ineligible_reason_count[reason]++;
}

/*
 * Start a fast commit ineligible update. Any commits that happen while
 * such an operation is in progress fall back to full commits.
 */
void ext4_fc_start_ineligible(struct super_block *sb, int reason)
{
	struct ext4_sb_info *sbi = EXT4_SB(sb);

	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
	    (EXT4_SB(sb)->s_mount_state & EXT4_FC_REPLAY))
		return;

	WARN_ON(reason >= EXT4_FC_REASON_MAX);
	sbi->s_fc_stats.fc_ineligible_reason_count[reason]++;
	atomic_inc(&sbi->s_fc_ineligible_updates);
}

/*
 * Stop a fast commit ineligible update. We set the EXT4_MF_FC_INELIGIBLE
 * flag here to ensure that after stopping the ineligible update, at least
 * one full commit takes place.
 */
void ext4_fc_stop_ineligible(struct super_block *sb)
{
	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
	    (EXT4_SB(sb)->s_mount_state & EXT4_FC_REPLAY))
		return;

	ext4_set_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
	atomic_dec(&EXT4_SB(sb)->s_fc_ineligible_updates);
}

static inline int ext4_fc_is_ineligible(struct super_block *sb)
{
	return (ext4_test_mount_flag(sb, EXT4_MF_FC_INELIGIBLE) ||
		atomic_read(&EXT4_SB(sb)->s_fc_ineligible_updates));
}

/*
 * Generic fast commit tracking function. If this is the first time we are
 * called after a full commit, we initialize fast commit fields and then call
 * __fc_track_fn() with update = 0. If we have already been called after a
 * full commit, we pass update = 1. Based on that, the track function can
 * determine if it needs to track a field for the first time or if it only
 * needs to update the previously tracked value.
 *
 * If enqueue is set, this function enqueues the inode in the fast commit
 * list.
 */
static int ext4_fc_track_template(
	handle_t *handle, struct inode *inode,
	int (*__fc_track_fn)(struct inode *, void *, bool),
	void *args, int enqueue)
{
	bool update = false;
	struct ext4_inode_info *ei = EXT4_I(inode);
	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
	tid_t tid = 0;
	int ret;

	if (!test_opt2(inode->i_sb, JOURNAL_FAST_COMMIT) ||
	    (sbi->s_mount_state & EXT4_FC_REPLAY))
		return -EOPNOTSUPP;

	if (ext4_fc_is_ineligible(inode->i_sb))
		return -EINVAL;

	tid = handle->h_transaction->t_tid;
	mutex_lock(&ei->i_fc_lock);
	if (tid == ei->i_sync_tid) {
		update = true;
	} else {
		ext4_fc_reset_inode(inode);
		ei->i_sync_tid = tid;
	}
	ret = __fc_track_fn(inode, args, update);
	mutex_unlock(&ei->i_fc_lock);

	if (!enqueue)
		return ret;

	spin_lock(&sbi->s_fc_lock);
	if (list_empty(&EXT4_I(inode)->i_fc_list))
		list_add_tail(&EXT4_I(inode)->i_fc_list,
				(ext4_test_mount_flag(inode->i_sb, EXT4_MF_FC_COMMITTING)) ?
				&sbi->s_fc_q[FC_Q_STAGING] :
				&sbi->s_fc_q[FC_Q_MAIN]);
	spin_unlock(&sbi->s_fc_lock);

	return ret;
}

^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  360) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  361) struct __track_dentry_update_args {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  362) 	struct dentry *dentry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  363) 	int op;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  364) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  365) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  366) /* __track_fn for directory entry updates. Called with ei->i_fc_lock. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  367) static int __track_dentry_update(struct inode *inode, void *arg, bool update)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  368) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  369) 	struct ext4_fc_dentry_update *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  370) 	struct ext4_inode_info *ei = EXT4_I(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  371) 	struct __track_dentry_update_args *dentry_update =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  372) 		(struct __track_dentry_update_args *)arg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  373) 	struct dentry *dentry = dentry_update->dentry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  374) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  375) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  376) 	mutex_unlock(&ei->i_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  377) 	node = kmem_cache_alloc(ext4_fc_dentry_cachep, GFP_NOFS);
	if (!node) {
		ext4_fc_mark_ineligible(inode->i_sb, EXT4_FC_REASON_NOMEM);
		mutex_lock(&ei->i_fc_lock);
		return -ENOMEM;
	}

	node->fcd_op = dentry_update->op;
	node->fcd_parent = dentry->d_parent->d_inode->i_ino;
	node->fcd_ino = inode->i_ino;
	if (dentry->d_name.len > DNAME_INLINE_LEN) {
		node->fcd_name.name = kmalloc(dentry->d_name.len, GFP_NOFS);
		if (!node->fcd_name.name) {
			kmem_cache_free(ext4_fc_dentry_cachep, node);
			ext4_fc_mark_ineligible(inode->i_sb,
				EXT4_FC_REASON_NOMEM);
			mutex_lock(&ei->i_fc_lock);
			return -ENOMEM;
		}
		memcpy((u8 *)node->fcd_name.name, dentry->d_name.name,
			dentry->d_name.len);
	} else {
		memcpy(node->fcd_iname, dentry->d_name.name,
			dentry->d_name.len);
		node->fcd_name.name = node->fcd_iname;
	}
	node->fcd_name.len = dentry->d_name.len;

	spin_lock(&sbi->s_fc_lock);
	if (ext4_test_mount_flag(inode->i_sb, EXT4_MF_FC_COMMITTING))
		list_add_tail(&node->fcd_list,
				&sbi->s_fc_dentry_q[FC_Q_STAGING]);
	else
		list_add_tail(&node->fcd_list, &sbi->s_fc_dentry_q[FC_Q_MAIN]);
	spin_unlock(&sbi->s_fc_lock);
	mutex_lock(&ei->i_fc_lock);

	return 0;
}

void __ext4_fc_track_unlink(handle_t *handle,
		struct inode *inode, struct dentry *dentry)
{
	struct __track_dentry_update_args args;
	int ret;

	args.dentry = dentry;
	args.op = EXT4_FC_TAG_UNLINK;

	ret = ext4_fc_track_template(handle, inode, __track_dentry_update,
					(void *)&args, 0);
	trace_ext4_fc_track_unlink(inode, dentry, ret);
}

void ext4_fc_track_unlink(handle_t *handle, struct dentry *dentry)
{
	__ext4_fc_track_unlink(handle, d_inode(dentry), dentry);
}

void __ext4_fc_track_link(handle_t *handle,
	struct inode *inode, struct dentry *dentry)
{
	struct __track_dentry_update_args args;
	int ret;

	args.dentry = dentry;
	args.op = EXT4_FC_TAG_LINK;

	ret = ext4_fc_track_template(handle, inode, __track_dentry_update,
					(void *)&args, 0);
	trace_ext4_fc_track_link(inode, dentry, ret);
}

void ext4_fc_track_link(handle_t *handle, struct dentry *dentry)
{
	__ext4_fc_track_link(handle, d_inode(dentry), dentry);
}

void __ext4_fc_track_create(handle_t *handle, struct inode *inode,
			  struct dentry *dentry)
{
	struct __track_dentry_update_args args;
	int ret;

	args.dentry = dentry;
	args.op = EXT4_FC_TAG_CREAT;

	ret = ext4_fc_track_template(handle, inode, __track_dentry_update,
					(void *)&args, 0);
	trace_ext4_fc_track_create(inode, dentry, ret);
}

void ext4_fc_track_create(handle_t *handle, struct dentry *dentry)
{
	__ext4_fc_track_create(handle, d_inode(dentry), dentry);
}

/* __track_fn for inode tracking */
static int __track_inode(struct inode *inode, void *arg, bool update)
{
	if (update)
		return -EEXIST;

	EXT4_I(inode)->i_fc_lblk_len = 0;

	return 0;
}

void ext4_fc_track_inode(handle_t *handle, struct inode *inode)
{
	int ret;

	if (S_ISDIR(inode->i_mode))
		return;

	if (ext4_should_journal_data(inode)) {
		ext4_fc_mark_ineligible(inode->i_sb,
					EXT4_FC_REASON_INODE_JOURNAL_DATA);
		return;
	}

	ret = ext4_fc_track_template(handle, inode, __track_inode, NULL, 1);
	trace_ext4_fc_track_inode(inode, ret);
}

struct __track_range_args {
	ext4_lblk_t start, end;
};

/* __track_fn for tracking data updates */
static int __track_range(struct inode *inode, void *arg, bool update)
{
	struct ext4_inode_info *ei = EXT4_I(inode);
	ext4_lblk_t oldstart;
	struct __track_range_args *__arg =
		(struct __track_range_args *)arg;

	if (inode->i_ino < EXT4_FIRST_INO(inode->i_sb)) {
		ext4_debug("Special inode %ld being modified\n", inode->i_ino);
		return -ECANCELED;
	}

	oldstart = ei->i_fc_lblk_start;

	if (update && ei->i_fc_lblk_len > 0) {
		ei->i_fc_lblk_start = min(ei->i_fc_lblk_start, __arg->start);
		ei->i_fc_lblk_len =
			max(oldstart + ei->i_fc_lblk_len - 1, __arg->end) -
				ei->i_fc_lblk_start + 1;
	} else {
		ei->i_fc_lblk_start = __arg->start;
		ei->i_fc_lblk_len = __arg->end - __arg->start + 1;
	}

	return 0;
}

void ext4_fc_track_range(handle_t *handle, struct inode *inode, ext4_lblk_t start,
			 ext4_lblk_t end)
{
	struct __track_range_args args;
	int ret;

	if (S_ISDIR(inode->i_mode))
		return;

	args.start = start;
	args.end = end;

	ret = ext4_fc_track_template(handle, inode, __track_range, &args, 1);

	trace_ext4_fc_track_range(inode, start, end, ret);
}

static void ext4_fc_submit_bh(struct super_block *sb)
{
	int write_flags = REQ_SYNC;
	struct buffer_head *bh = EXT4_SB(sb)->s_fc_bh;

	/* TODO: REQ_FUA | REQ_PREFLUSH is unnecessarily expensive. */
	if (test_opt(sb, BARRIER))
		write_flags |= REQ_FUA | REQ_PREFLUSH;
	lock_buffer(bh);
	set_buffer_dirty(bh);
	set_buffer_uptodate(bh);
	bh->b_end_io = ext4_end_buffer_io_sync;
	submit_bh(REQ_OP_WRITE, write_flags, bh);
	EXT4_SB(sb)->s_fc_bh = NULL;
}

/* Ext4 commit path routines */

/* memzero and update CRC */
static void *ext4_fc_memzero(struct super_block *sb, void *dst, int len,
				u32 *crc)
{
	void *ret;

	ret = memset(dst, 0, len);
	if (crc)
		*crc = ext4_chksum(EXT4_SB(sb), *crc, dst, len);
	return ret;
}

/*
 * Allocate len bytes on a fast commit buffer.
 *
 * During the commit time this function is used to manage fast commit
 * block space. We don't split a fast commit log onto different
 * blocks. So this function makes sure that if there's not enough space
 * on the current block, the remaining space in the current block is
 * marked as unused by adding an EXT4_FC_TAG_PAD tag. In that case, a
 * new block is requested from jbd2 and the CRC is updated to reflect
 * the padding we added.
 */
static u8 *ext4_fc_reserve_space(struct super_block *sb, int len, u32 *crc)
{
	struct ext4_fc_tl *tl;
	struct ext4_sb_info *sbi = EXT4_SB(sb);
	struct buffer_head *bh;
	int bsize = sbi->s_journal->j_blocksize;
	int ret, off = sbi->s_fc_bytes % bsize;
	int pad_len;

	/*
	 * After allocating len, we should have space at least for a 0 byte
	 * padding.
	 */
	if (len + sizeof(struct ext4_fc_tl) > bsize)
		return NULL;

	if (bsize - off - 1 > len + sizeof(struct ext4_fc_tl)) {
		/*
		 * Only allocate from current buffer if we have enough space for
		 * this request AND we have space to add a zero byte padding.
		 */
		if (!sbi->s_fc_bh) {
			ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh);
			if (ret)
				return NULL;
			sbi->s_fc_bh = bh;
		}
		sbi->s_fc_bytes += len;
		return sbi->s_fc_bh->b_data + off;
	}
	/* Need to add PAD tag */
	tl = (struct ext4_fc_tl *)(sbi->s_fc_bh->b_data + off);
	tl->fc_tag = cpu_to_le16(EXT4_FC_TAG_PAD);
	pad_len = bsize - off - 1 - sizeof(struct ext4_fc_tl);
	tl->fc_len = cpu_to_le16(pad_len);
	if (crc)
		*crc = ext4_chksum(sbi, *crc, tl, sizeof(*tl));
	if (pad_len > 0)
		ext4_fc_memzero(sb, tl + 1, pad_len, crc);
	ext4_fc_submit_bh(sb);

	ret = jbd2_fc_get_buf(EXT4_SB(sb)->s_journal, &bh);
	if (ret)
		return NULL;
	sbi->s_fc_bh = bh;
	sbi->s_fc_bytes = (sbi->s_fc_bytes / bsize + 1) * bsize + len;
	return sbi->s_fc_bh->b_data;
}

/* memcpy to fc reserved space and update CRC */
static void *ext4_fc_memcpy(struct super_block *sb, void *dst, const void *src,
				int len, u32 *crc)
{
	if (crc)
		*crc = ext4_chksum(EXT4_SB(sb), *crc, src, len);
	return memcpy(dst, src, len);
}

/*
 * Complete a fast commit by writing the tail tag.
 *
 * Writing the tail tag marks the end of a fast commit. In order to
 * guarantee atomicity, after writing the tail tag, even if there's
 * space remaining in the block, the next commit shouldn't use it.
 * That's why the tail tag's length is set to the remaining space in
 * the block.
 */
static int ext4_fc_write_tail(struct super_block *sb, u32 crc)
{
	struct ext4_sb_info *sbi = EXT4_SB(sb);
	struct ext4_fc_tl tl;
	struct ext4_fc_tail tail;
	int off, bsize = sbi->s_journal->j_blocksize;
	u8 *dst;

	/*
	 * ext4_fc_reserve_space takes care of allocating an extra block if
	 * there's not enough space in this block to accommodate the tail.
	 */
	dst = ext4_fc_reserve_space(sb, sizeof(tl) + sizeof(tail), &crc);
	if (!dst)
		return -ENOSPC;

	off = sbi->s_fc_bytes % bsize;

	tl.fc_tag = cpu_to_le16(EXT4_FC_TAG_TAIL);
	tl.fc_len = cpu_to_le16(bsize - off - 1 + sizeof(struct ext4_fc_tail));
	sbi->s_fc_bytes = round_up(sbi->s_fc_bytes, bsize);

	ext4_fc_memcpy(sb, dst, &tl, sizeof(tl), &crc);
	dst += sizeof(tl);
	tail.fc_tid = cpu_to_le32(sbi->s_journal->j_running_transaction->t_tid);
	ext4_fc_memcpy(sb, dst, &tail.fc_tid, sizeof(tail.fc_tid), &crc);
	dst += sizeof(tail.fc_tid);
	tail.fc_crc = cpu_to_le32(crc);
	ext4_fc_memcpy(sb, dst, &tail.fc_crc, sizeof(tail.fc_crc), NULL);

	ext4_fc_submit_bh(sb);

	return 0;
}

/*
 * Adds tag, length, value and updates CRC. Returns true if the TLV was
 * added, false if there's not enough space.
 */
static bool ext4_fc_add_tlv(struct super_block *sb, u16 tag, u16 len, u8 *val,
			   u32 *crc)
{
	struct ext4_fc_tl tl;
	u8 *dst;

	dst = ext4_fc_reserve_space(sb, sizeof(tl) + len, crc);
	if (!dst)
		return false;

	tl.fc_tag = cpu_to_le16(tag);
	tl.fc_len = cpu_to_le16(len);

	ext4_fc_memcpy(sb, dst, &tl, sizeof(tl), crc);
	ext4_fc_memcpy(sb, dst + sizeof(tl), val, len, crc);

	return true;
}

/* Same as above, but adds a dentry tlv. */
static bool ext4_fc_add_dentry_tlv(struct super_block *sb, u16 tag,
					int parent_ino, int ino, int dlen,
					const unsigned char *dname,
					u32 *crc)
{
	struct ext4_fc_dentry_info fcd;
	struct ext4_fc_tl tl;
	u8 *dst = ext4_fc_reserve_space(sb, sizeof(tl) + sizeof(fcd) + dlen,
					crc);

	if (!dst)
		return false;

	fcd.fc_parent_ino = cpu_to_le32(parent_ino);
	fcd.fc_ino = cpu_to_le32(ino);
	tl.fc_tag = cpu_to_le16(tag);
	tl.fc_len = cpu_to_le16(sizeof(fcd) + dlen);
	ext4_fc_memcpy(sb, dst, &tl, sizeof(tl), crc);
	dst += sizeof(tl);
	ext4_fc_memcpy(sb, dst, &fcd, sizeof(fcd), crc);
	dst += sizeof(fcd);
	ext4_fc_memcpy(sb, dst, dname, dlen, crc);
	dst += dlen;

	return true;
}

/*
 * Writes the inode in the fast commit space under the EXT4_FC_TAG_INODE
 * TLV.
 * Returns 0 on success, error on failure.
 */
static int ext4_fc_write_inode(struct inode *inode, u32 *crc)
{
	struct ext4_inode_info *ei = EXT4_I(inode);
	int inode_len = EXT4_GOOD_OLD_INODE_SIZE;
	int ret;
	struct ext4_iloc iloc;
	struct ext4_fc_inode fc_inode;
	struct ext4_fc_tl tl;
	u8 *dst;

	ret = ext4_get_inode_loc(inode, &iloc);
	if (ret)
		return ret;

	if (EXT4_INODE_SIZE(inode->i_sb) > EXT4_GOOD_OLD_INODE_SIZE)
		inode_len += ei->i_extra_isize;

	fc_inode.fc_ino = cpu_to_le32(inode->i_ino);
	tl.fc_tag = cpu_to_le16(EXT4_FC_TAG_INODE);
	tl.fc_len = cpu_to_le16(inode_len + sizeof(fc_inode.fc_ino));

	dst = ext4_fc_reserve_space(inode->i_sb,
			sizeof(tl) + inode_len + sizeof(fc_inode.fc_ino), crc);
	if (!dst)
		return -ECANCELED;

	if (!ext4_fc_memcpy(inode->i_sb, dst, &tl, sizeof(tl), crc))
		return -ECANCELED;
	dst += sizeof(tl);
	if (!ext4_fc_memcpy(inode->i_sb, dst, &fc_inode, sizeof(fc_inode), crc))
		return -ECANCELED;
	dst += sizeof(fc_inode);
	if (!ext4_fc_memcpy(inode->i_sb, dst, (u8 *)ext4_raw_inode(&iloc),
					inode_len, crc))
		return -ECANCELED;

	return 0;
}

/*
 * Writes updated data ranges for the inode in question. Updates CRC.
 * Returns 0 on success, error otherwise.
 */
static int ext4_fc_write_inode_data(struct inode *inode, u32 *crc)
{
	ext4_lblk_t old_blk_size, cur_lblk_off, new_blk_size;
	struct ext4_inode_info *ei = EXT4_I(inode);
	struct ext4_map_blocks map;
	struct ext4_fc_add_range fc_ext;
	struct ext4_fc_del_range lrange;
	struct ext4_extent *ex;
	int ret;

	mutex_lock(&ei->i_fc_lock);
	if (ei->i_fc_lblk_len == 0) {
		mutex_unlock(&ei->i_fc_lock);
		return 0;
	}
	old_blk_size = ei->i_fc_lblk_start;
	new_blk_size = ei->i_fc_lblk_start + ei->i_fc_lblk_len - 1;
	ei->i_fc_lblk_len = 0;
	mutex_unlock(&ei->i_fc_lock);

	cur_lblk_off = old_blk_size;
	jbd_debug(1, "%s: will try writing %d to %d for inode %ld\n",
		  __func__, cur_lblk_off, new_blk_size, inode->i_ino);

	while (cur_lblk_off <= new_blk_size) {
		map.m_lblk = cur_lblk_off;
		map.m_len = new_blk_size - cur_lblk_off + 1;
		ret = ext4_map_blocks(NULL, inode, &map, 0);
		if (ret < 0)
			return -ECANCELED;

		if (map.m_len == 0) {
			cur_lblk_off++;
			continue;
		}

		if (ret == 0) {
			lrange.fc_ino = cpu_to_le32(inode->i_ino);
			lrange.fc_lblk = cpu_to_le32(map.m_lblk);
			lrange.fc_len = cpu_to_le32(map.m_len);
			if (!ext4_fc_add_tlv(inode->i_sb, EXT4_FC_TAG_DEL_RANGE,
					    sizeof(lrange), (u8 *)&lrange, crc))
				return -ENOSPC;
		} else {
			unsigned int max = (map.m_flags & EXT4_MAP_UNWRITTEN) ?
				EXT_UNWRITTEN_MAX_LEN : EXT_INIT_MAX_LEN;

			/* Limit the number of blocks in one extent */
			map.m_len = min(max, map.m_len);

			fc_ext.fc_ino = cpu_to_le32(inode->i_ino);
			ex = (struct ext4_extent *)&fc_ext.fc_ex;
			ex->ee_block = cpu_to_le32(map.m_lblk);
			ex->ee_len = cpu_to_le16(map.m_len);
			ext4_ext_store_pblock(ex, map.m_pblk);
			if (map.m_flags & EXT4_MAP_UNWRITTEN)
				ext4_ext_mark_unwritten(ex);
			else
				ext4_ext_mark_initialized(ex);
			if (!ext4_fc_add_tlv(inode->i_sb, EXT4_FC_TAG_ADD_RANGE,
					    sizeof(fc_ext), (u8 *)&fc_ext, crc))
				return -ENOSPC;
		}

		cur_lblk_off += map.m_len;
	}

	return 0;
}


/* Submit data for all the fast commit inodes */
static int ext4_fc_submit_inode_data_all(journal_t *journal)
{
	struct super_block *sb = (struct super_block *)(journal->j_private);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  866) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  867) 	struct ext4_inode_info *ei;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  868) 	struct list_head *pos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  869) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  870) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  871) 	spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  872) 	ext4_set_mount_flag(sb, EXT4_MF_FC_COMMITTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  873) 	list_for_each(pos, &sbi->s_fc_q[FC_Q_MAIN]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  874) 		ei = list_entry(pos, struct ext4_inode_info, i_fc_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  875) 		ext4_set_inode_state(&ei->vfs_inode, EXT4_STATE_FC_COMMITTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  876) 		while (atomic_read(&ei->i_fc_updates)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  877) 			DEFINE_WAIT(wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  878) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  879) 			prepare_to_wait(&ei->i_fc_wait, &wait,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  880) 						TASK_UNINTERRUPTIBLE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  881) 			if (atomic_read(&ei->i_fc_updates)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  882) 				spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  883) 				schedule();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  884) 				spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  885) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  886) 			finish_wait(&ei->i_fc_wait, &wait);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  887) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  888) 		spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  889) 		ret = jbd2_submit_inode_data(ei->jinode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  890) 		if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  891) 			return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  892) 		spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  893) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  894) 	spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  895) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  896) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  897) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  898) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  899) /* Wait for completion of data for all the fast commit inodes */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  900) static int ext4_fc_wait_inode_data_all(journal_t *journal)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  901) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  902) 	struct super_block *sb = (struct super_block *)(journal->j_private);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  903) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  904) 	struct ext4_inode_info *pos, *n;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  905) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  906) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  907) 	spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  908) 	list_for_each_entry_safe(pos, n, &sbi->s_fc_q[FC_Q_MAIN], i_fc_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  909) 		if (!ext4_test_inode_state(&pos->vfs_inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  910) 					   EXT4_STATE_FC_COMMITTING))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  911) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  912) 		spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  913) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  914) 		ret = jbd2_wait_inode_data(journal, pos->jinode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  915) 		if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  916) 			return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  917) 		spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  918) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  919) 	spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  920) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  921) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  922) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  923) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  924) /* Commit all the directory entry updates */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  925) static int ext4_fc_commit_dentry_updates(journal_t *journal, u32 *crc)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  926) __acquires(&sbi->s_fc_lock)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  927) __releases(&sbi->s_fc_lock)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  928) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  929) 	struct super_block *sb = (struct super_block *)(journal->j_private);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  930) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  931) 	struct ext4_fc_dentry_update *fc_dentry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  932) 	struct inode *inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  933) 	struct list_head *pos, *n, *fcd_pos, *fcd_n;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  934) 	struct ext4_inode_info *ei;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  935) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  936) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  937) 	if (list_empty(&sbi->s_fc_dentry_q[FC_Q_MAIN]))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  938) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  939) 	list_for_each_safe(fcd_pos, fcd_n, &sbi->s_fc_dentry_q[FC_Q_MAIN]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  940) 		fc_dentry = list_entry(fcd_pos, struct ext4_fc_dentry_update,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  941) 					fcd_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  942) 		if (fc_dentry->fcd_op != EXT4_FC_TAG_CREAT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  943) 			spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  944) 			if (!ext4_fc_add_dentry_tlv(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  945) 				sb, fc_dentry->fcd_op,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  946) 				fc_dentry->fcd_parent, fc_dentry->fcd_ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  947) 				fc_dentry->fcd_name.len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  948) 				fc_dentry->fcd_name.name, crc)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  949) 				ret = -ENOSPC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  950) 				goto lock_and_exit;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  951) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  952) 			spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  953) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  954) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  955) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  956) 		inode = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  957) 		list_for_each_safe(pos, n, &sbi->s_fc_q[FC_Q_MAIN]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  958) 			ei = list_entry(pos, struct ext4_inode_info, i_fc_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  959) 			if (ei->vfs_inode.i_ino == fc_dentry->fcd_ino) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  960) 				inode = &ei->vfs_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  961) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  962) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  963) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  964) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  965) 		 * If we don't find the inode in our list, it was deleted,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  966) 		 * in which case we don't need to record its create tag.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  967) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  968) 		if (!inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  969) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  970) 		spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  971) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  972) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  973) 		 * We first write the inode and then the create dirent. This
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  974) 		 * allows the recovery code to create an unnamed inode first
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  975) 		 * and then link it to a directory entry. This allows us
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  976) 		 * to use namei.c routines almost as is and simplifies
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  977) 		 * the recovery code.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  978) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  979) 		ret = ext4_fc_write_inode(inode, crc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  980) 		if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  981) 			goto lock_and_exit;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  982) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  983) 		ret = ext4_fc_write_inode_data(inode, crc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  984) 		if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  985) 			goto lock_and_exit;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  986) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  987) 		if (!ext4_fc_add_dentry_tlv(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  988) 			sb, fc_dentry->fcd_op,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  989) 			fc_dentry->fcd_parent, fc_dentry->fcd_ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  990) 			fc_dentry->fcd_name.len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  991) 			fc_dentry->fcd_name.name, crc)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  992) 			ret = -ENOSPC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  993) 			goto lock_and_exit;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  994) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  995) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  996) 		spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  997) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  998) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  999) lock_and_exit:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1000) 	spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1001) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1002) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1003) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1004) static int ext4_fc_perform_commit(journal_t *journal)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1005) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1006) 	struct super_block *sb = (struct super_block *)(journal->j_private);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1007) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1008) 	struct ext4_inode_info *iter;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1009) 	struct ext4_fc_head head;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1010) 	struct list_head *pos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1011) 	struct inode *inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1012) 	struct blk_plug plug;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1013) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1014) 	u32 crc = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1015) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1016) 	ret = ext4_fc_submit_inode_data_all(journal);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1017) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1018) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1019) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1020) 	ret = ext4_fc_wait_inode_data_all(journal);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1021) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1022) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1023) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1024) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1025) 	 * If file system device is different from journal device, issue a cache
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1026) 	 * flush before we start writing fast commit blocks.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1027) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1028) 	if (journal->j_fs_dev != journal->j_dev)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1029) 		blkdev_issue_flush(journal->j_fs_dev, GFP_NOFS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1030) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1031) 	blk_start_plug(&plug);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1032) 	if (sbi->s_fc_bytes == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1033) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1034) 		 * Add a head tag only if this is the first fast commit
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1035) 		 * in this TID.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1036) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1037) 		head.fc_features = cpu_to_le32(EXT4_FC_SUPPORTED_FEATURES);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1038) 		head.fc_tid = cpu_to_le32(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1039) 			sbi->s_journal->j_running_transaction->t_tid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1040) 		if (!ext4_fc_add_tlv(sb, EXT4_FC_TAG_HEAD, sizeof(head),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1041) 			(u8 *)&head, &crc)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1042) 			ret = -ENOSPC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1043) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1044) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1045) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1046) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1047) 	spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1048) 	ret = ext4_fc_commit_dentry_updates(journal, &crc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1049) 	if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1050) 		spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1051) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1052) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1053) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1054) 	list_for_each(pos, &sbi->s_fc_q[FC_Q_MAIN]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1055) 		iter = list_entry(pos, struct ext4_inode_info, i_fc_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1056) 		inode = &iter->vfs_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1057) 		if (!ext4_test_inode_state(inode, EXT4_STATE_FC_COMMITTING))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1058) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1059) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1060) 		spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1061) 		ret = ext4_fc_write_inode_data(inode, &crc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1062) 		if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1063) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1064) 		ret = ext4_fc_write_inode(inode, &crc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1065) 		if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1066) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1067) 		spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1068) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1069) 	spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1070) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1071) 	ret = ext4_fc_write_tail(sb, crc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1072) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1073) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1074) 	blk_finish_plug(&plug);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1075) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1076) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1077) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1078) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1079)  * The main commit entry point. Performs a fast commit for transaction
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1080)  * commit_tid if needed. If it's not possible to perform a fast commit
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1081)  * for various reasons, we fall back to a full commit. Returns 0
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1082)  * on success, error otherwise.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1083)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1084) int ext4_fc_commit(journal_t *journal, tid_t commit_tid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1085) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1086) 	struct super_block *sb = (struct super_block *)(journal->j_private);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1087) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1088) 	int nblks = 0, ret, bsize = journal->j_blocksize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1089) 	int subtid = atomic_read(&sbi->s_fc_subtid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1090) 	int reason = EXT4_FC_REASON_OK, fc_bufs_before = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1091) 	ktime_t start_time, commit_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1092) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1093) 	trace_ext4_fc_commit_start(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1094) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1095) 	start_time = ktime_get();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1096) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1097) 	if (!test_opt2(sb, JOURNAL_FAST_COMMIT) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1098) 		(ext4_fc_is_ineligible(sb))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1099) 		reason = EXT4_FC_REASON_INELIGIBLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1100) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1101) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1102) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1103) restart_fc:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1104) 	ret = jbd2_fc_begin_commit(journal, commit_tid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1105) 	if (ret == -EALREADY) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1106) 		/* There was an ongoing commit, check if we need to restart */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1107) 		if (atomic_read(&sbi->s_fc_subtid) <= subtid &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1108) 			commit_tid > journal->j_commit_sequence)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1109) 			goto restart_fc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1110) 		reason = EXT4_FC_REASON_ALREADY_COMMITTED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1111) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1112) 	} else if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1113) 		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1114) 		reason = EXT4_FC_REASON_FC_START_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1115) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1116) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1117) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1118) 	fc_bufs_before = (sbi->s_fc_bytes + bsize - 1) / bsize;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1119) 	ret = ext4_fc_perform_commit(journal);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1120) 	if (ret < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1121) 		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1122) 		reason = EXT4_FC_REASON_FC_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1123) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1124) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1125) 	nblks = (sbi->s_fc_bytes + bsize - 1) / bsize - fc_bufs_before;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1126) 	ret = jbd2_fc_wait_bufs(journal, nblks);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1127) 	if (ret < 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1128) 		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1129) 		reason = EXT4_FC_REASON_FC_FAILED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1130) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1131) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1132) 	atomic_inc(&sbi->s_fc_subtid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1133) 	jbd2_fc_end_commit(journal);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1134) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1135) 	/* Has any ineligible update happened since we started? */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1136) 	if (reason == EXT4_FC_REASON_OK && ext4_fc_is_ineligible(sb)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1137) 		sbi->s_fc_stats.fc_ineligible_reason_count[EXT4_FC_COMMIT_FAILED]++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1138) 		reason = EXT4_FC_REASON_INELIGIBLE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1139) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1140) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1141) 	spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1142) 	if (reason != EXT4_FC_REASON_OK &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1143) 		reason != EXT4_FC_REASON_ALREADY_COMMITTED) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1144) 		sbi->s_fc_stats.fc_ineligible_commits++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1145) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1146) 		sbi->s_fc_stats.fc_num_commits++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1147) 		sbi->s_fc_stats.fc_numblks += nblks;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1148) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1149) 	spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1150) 	nblks = (reason == EXT4_FC_REASON_OK) ? nblks : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1151) 	trace_ext4_fc_commit_stop(sb, nblks, reason);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1152) 	commit_time = ktime_to_ns(ktime_sub(ktime_get(), start_time));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1153) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1154) 	 * Weight the running average higher than the latest commit time
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1155) 	 * so we don't react too strongly to vast changes in the commit time.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1156) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1157) 	if (likely(sbi->s_fc_avg_commit_time))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1158) 		sbi->s_fc_avg_commit_time = (commit_time +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1159) 				sbi->s_fc_avg_commit_time * 3) / 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1160) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1161) 		sbi->s_fc_avg_commit_time = commit_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1162) 	jbd_debug(1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1163) 		"Fast commit ended with blks = %d, reason = %d, subtid - %d",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1164) 		nblks, reason, subtid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1165) 	if (reason == EXT4_FC_REASON_FC_FAILED)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1166) 		return jbd2_fc_end_commit_fallback(journal);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1167) 	if (reason == EXT4_FC_REASON_FC_START_FAILED ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1168) 		reason == EXT4_FC_REASON_INELIGIBLE)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1169) 		return jbd2_complete_transaction(journal, commit_tid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1170) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1171) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1172) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1173) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1174)  * Fast commit cleanup routine. This is called after every fast commit and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1175)  * full commit. full is true if we are called after a full commit.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1176)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1177) static void ext4_fc_cleanup(journal_t *journal, int full)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1178) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1179) 	struct super_block *sb = journal->j_private;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1180) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1181) 	struct ext4_inode_info *iter;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1182) 	struct ext4_fc_dentry_update *fc_dentry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1183) 	struct list_head *pos, *n;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1184) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1185) 	if (full && sbi->s_fc_bh)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1186) 		sbi->s_fc_bh = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1187) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1188) 	jbd2_fc_release_bufs(journal);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1189) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1190) 	spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1191) 	list_for_each_safe(pos, n, &sbi->s_fc_q[FC_Q_MAIN]) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1192) 		iter = list_entry(pos, struct ext4_inode_info, i_fc_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1193) 		list_del_init(&iter->i_fc_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1194) 		ext4_clear_inode_state(&iter->vfs_inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1195) 				       EXT4_STATE_FC_COMMITTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1196) 		ext4_fc_reset_inode(&iter->vfs_inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1197) 		/* Make sure EXT4_STATE_FC_COMMITTING bit is clear */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1198) 		smp_mb();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1199) #if (BITS_PER_LONG < 64)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1200) 		wake_up_bit(&iter->i_state_flags, EXT4_STATE_FC_COMMITTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1201) #else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1202) 		wake_up_bit(&iter->i_flags, EXT4_STATE_FC_COMMITTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1203) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1204) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1205) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1206) 	while (!list_empty(&sbi->s_fc_dentry_q[FC_Q_MAIN])) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1207) 		fc_dentry = list_first_entry(&sbi->s_fc_dentry_q[FC_Q_MAIN],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1208) 					     struct ext4_fc_dentry_update,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1209) 					     fcd_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1210) 		list_del_init(&fc_dentry->fcd_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1211) 		spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1212) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1213) 		if (fc_dentry->fcd_name.name &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1214) 			fc_dentry->fcd_name.len > DNAME_INLINE_LEN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1215) 			kfree(fc_dentry->fcd_name.name);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1216) 		kmem_cache_free(ext4_fc_dentry_cachep, fc_dentry);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1217) 		spin_lock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1218) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1219) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1220) 	list_splice_init(&sbi->s_fc_dentry_q[FC_Q_STAGING],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1221) 				&sbi->s_fc_dentry_q[FC_Q_MAIN]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1222) 	list_splice_init(&sbi->s_fc_q[FC_Q_STAGING],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1223) 				&sbi->s_fc_q[FC_Q_MAIN]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1224) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1225) 	ext4_clear_mount_flag(sb, EXT4_MF_FC_COMMITTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1226) 	ext4_clear_mount_flag(sb, EXT4_MF_FC_INELIGIBLE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1227) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1228) 	if (full)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1229) 		sbi->s_fc_bytes = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1230) 	spin_unlock(&sbi->s_fc_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1231) 	trace_ext4_fc_stats(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1232) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1233) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1234) /* Ext4 Replay Path Routines */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1235) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1236) /* Helper struct for dentry replay routines */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1237) struct dentry_info_args {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1238) 	int parent_ino, dname_len, ino, inode_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1239) 	char *dname;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1240) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1241) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1242) static inline void tl_to_darg(struct dentry_info_args *darg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1243) 			      struct  ext4_fc_tl *tl, u8 *val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1244) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1245) 	struct ext4_fc_dentry_info fcd;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1246) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1247) 	memcpy(&fcd, val, sizeof(fcd));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1248) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1249) 	darg->parent_ino = le32_to_cpu(fcd.fc_parent_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1250) 	darg->ino = le32_to_cpu(fcd.fc_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1251) 	darg->dname = val + offsetof(struct ext4_fc_dentry_info, fc_dname);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1252) 	darg->dname_len = le16_to_cpu(tl->fc_len) -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1253) 		sizeof(struct ext4_fc_dentry_info);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1254) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1255) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1256) /* Unlink replay function */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1257) static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1258) 				 u8 *val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1259) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1260) 	struct inode *inode, *old_parent;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1261) 	struct qstr entry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1262) 	struct dentry_info_args darg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1263) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1264) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1265) 	tl_to_darg(&darg, tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1266) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1267) 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_UNLINK, darg.ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1268) 			darg.parent_ino, darg.dname_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1269) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1270) 	entry.name = darg.dname;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1271) 	entry.len = darg.dname_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1272) 	inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1273) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1274) 	if (IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1275) 		jbd_debug(1, "Inode %d not found", darg.ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1276) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1277) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1278) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1279) 	old_parent = ext4_iget(sb, darg.parent_ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1280) 				EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1281) 	if (IS_ERR(old_parent)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1282) 		jbd_debug(1, "Dir with inode %d not found", darg.parent_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1283) 		iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1284) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1285) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1286) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1287) 	ret = __ext4_unlink(NULL, old_parent, &entry, inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1288) 	/* -ENOENT is fine because the entry may no longer exist. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1289) 	if (ret == -ENOENT)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1290) 		ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1291) 	iput(old_parent);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1292) 	iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1293) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1294) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1295) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1296) static int ext4_fc_replay_link_internal(struct super_block *sb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1297) 				struct dentry_info_args *darg,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1298) 				struct inode *inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1299) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1300) 	struct inode *dir = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1301) 	struct dentry *dentry_dir = NULL, *dentry_inode = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1302) 	struct qstr qstr_dname = QSTR_INIT(darg->dname, darg->dname_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1303) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1304) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1305) 	dir = ext4_iget(sb, darg->parent_ino, EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1306) 	if (IS_ERR(dir)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1307) 		jbd_debug(1, "Dir with inode %d not found.", darg->parent_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1308) 		dir = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1309) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1310) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1311) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1312) 	dentry_dir = d_obtain_alias(dir);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1313) 	if (IS_ERR(dentry_dir)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1314) 		jbd_debug(1, "Failed to obtain dentry");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1315) 		dentry_dir = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1316) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1317) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1318) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1319) 	dentry_inode = d_alloc(dentry_dir, &qstr_dname);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1320) 	if (!dentry_inode) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1321) 		jbd_debug(1, "Inode dentry not created.");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1322) 		ret = -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1323) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1324) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1325) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1326) 	ret = __ext4_link(dir, inode, dentry_inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1327) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1328) 	 * It's possible that link already existed since data blocks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1329) 	 * for the dir in question got persisted before we crashed OR
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1330) 	 * we replayed this tag and crashed before the entire replay
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1331) 	 * could complete.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1332) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1333) 	if (ret && ret != -EEXIST) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1334) 		jbd_debug(1, "Failed to link\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1335) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1336) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1337) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1338) 	ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1339) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1340) 	if (dentry_dir) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1341) 		d_drop(dentry_dir);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1342) 		dput(dentry_dir);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1343) 	} else if (dir) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1344) 		iput(dir);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1345) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1346) 	if (dentry_inode) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1347) 		d_drop(dentry_inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1348) 		dput(dentry_inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1349) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1350) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1351) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1352) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1353) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1354) /* Link replay function */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1355) static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1356) 			       u8 *val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1357) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1358) 	struct inode *inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1359) 	struct dentry_info_args darg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1360) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1361) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1362) 	tl_to_darg(&darg, tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1363) 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_LINK, darg.ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1364) 			darg.parent_ino, darg.dname_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1365) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1366) 	inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1367) 	if (IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1368) 		jbd_debug(1, "Inode not found.");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1369) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1370) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1371) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1372) 	ret = ext4_fc_replay_link_internal(sb, &darg, inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1373) 	iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1374) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1375) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1376) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1377) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1378)  * Record all the modified inodes during replay. We use this later to set up
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1379)  * block bitmaps correctly.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1380)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1381) static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1382) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1383) 	struct ext4_fc_replay_state *state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1384) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1385) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1386) 	state = &EXT4_SB(sb)->s_fc_replay_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1387) 	for (i = 0; i < state->fc_modified_inodes_used; i++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1388) 		if (state->fc_modified_inodes[i] == ino)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1389) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1390) 	if (state->fc_modified_inodes_used == state->fc_modified_inodes_size) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1391) 		state->fc_modified_inodes = krealloc(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1392) 				state->fc_modified_inodes,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1393) 				sizeof(int) * (state->fc_modified_inodes_size +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1394) 				EXT4_FC_REPLAY_REALLOC_INCREMENT),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1395) 				GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1396) 		if (!state->fc_modified_inodes)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1397) 			return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1398) 		state->fc_modified_inodes_size +=
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1399) 			EXT4_FC_REPLAY_REALLOC_INCREMENT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1400) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1401) 	state->fc_modified_inodes[state->fc_modified_inodes_used++] = ino;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1402) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1403) }
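ext4_fc_record_modified_inode() follows a common pattern: scan for a duplicate, grow the array by a fixed increment when full, then append. A self-contained userspace sketch of the same pattern (names and the increment value are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

#define REALLOC_INCREMENT 8	/* stand-in for EXT4_FC_REPLAY_REALLOC_INCREMENT */

struct ino_list {
	int *inodes;
	int used;
	int size;
};

/* Record ino once; grow the array by a fixed increment when full.
 * Returns 0 on success, -1 on allocation failure. */
static int record_ino(struct ino_list *l, int ino)
{
	for (int i = 0; i < l->used; i++)
		if (l->inodes[i] == ino)
			return 0;	/* already recorded */
	if (l->used == l->size) {
		int *p = realloc(l->inodes,
				 sizeof(int) * (l->size + REALLOC_INCREMENT));
		if (!p)
			return -1;	/* old array still valid */
		l->inodes = p;
		l->size += REALLOC_INCREMENT;
	}
	l->inodes[l->used++] = ino;
	return 0;
}
```

One detail worth noting: this sketch assigns the realloc() result through a temporary so the old array survives an allocation failure, whereas the kernel code above assigns the krealloc() result directly to state->fc_modified_inodes, losing the old pointer if krealloc() returns NULL.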
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1404) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1405) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1406)  * Inode replay function
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1407)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1408) static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1409) 				u8 *val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1410) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1411) 	struct ext4_fc_inode fc_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1412) 	struct ext4_inode *raw_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1413) 	struct ext4_inode *raw_fc_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1414) 	struct inode *inode = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1415) 	struct ext4_iloc iloc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1416) 	int inode_len, ino, ret, tag = le16_to_cpu(tl->fc_tag);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1417) 	struct ext4_extent_header *eh;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1418) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1419) 	memcpy(&fc_inode, val, sizeof(fc_inode));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1420) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1421) 	ino = le32_to_cpu(fc_inode.fc_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1422) 	trace_ext4_fc_replay(sb, tag, ino, 0, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1423) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1424) 	inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1425) 	if (!IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1426) 		ext4_ext_clear_bb(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1427) 		iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1428) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1429) 	inode = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1430) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1431) 	ret = ext4_fc_record_modified_inode(sb, ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1432) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1433) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1434) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1435) 	raw_fc_inode = (struct ext4_inode *)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1436) 		(val + offsetof(struct ext4_fc_inode, fc_raw_inode));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1437) 	ret = ext4_get_fc_inode_loc(sb, ino, &iloc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1438) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1439) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1440) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1441) 	inode_len = le16_to_cpu(tl->fc_len) - sizeof(struct ext4_fc_inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1442) 	raw_inode = ext4_raw_inode(&iloc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1443) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1444) 	memcpy(raw_inode, raw_fc_inode, offsetof(struct ext4_inode, i_block));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1445) 	memcpy(&raw_inode->i_generation, &raw_fc_inode->i_generation,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1446) 		inode_len - offsetof(struct ext4_inode, i_generation));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1447) 	if (le32_to_cpu(raw_inode->i_flags) & EXT4_EXTENTS_FL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1448) 		eh = (struct ext4_extent_header *)(&raw_inode->i_block[0]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1449) 		if (eh->eh_magic != EXT4_EXT_MAGIC) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1450) 			memset(eh, 0, sizeof(*eh));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1451) 			eh->eh_magic = EXT4_EXT_MAGIC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1452) 			eh->eh_max = cpu_to_le16(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1453) 				(sizeof(raw_inode->i_block) -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1454) 				 sizeof(struct ext4_extent_header))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1455) 				 / sizeof(struct ext4_extent));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1456) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1457) 	} else if (le32_to_cpu(raw_inode->i_flags) & EXT4_INLINE_DATA_FL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1458) 		memcpy(raw_inode->i_block, raw_fc_inode->i_block,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1459) 			sizeof(raw_inode->i_block));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1460) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1461) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1462) 	/* Immediately update the inode on disk. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1463) 	ret = ext4_handle_dirty_metadata(NULL, NULL, iloc.bh);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1464) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1465) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1466) 	ret = sync_dirty_buffer(iloc.bh);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1467) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1468) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1469) 	ret = ext4_mark_inode_used(sb, ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1470) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1471) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1472) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1473) 	/* Given that we just wrote the inode on disk, this SHOULD succeed. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1474) 	inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1475) 	if (IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1476) 		jbd_debug(1, "Inode not found.");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1477) 		return -EFSCORRUPTED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1478) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1479) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1480) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1481) 	 * Our allocator could have made different decisions than before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1482) 	 * crashing. This should be fixed but until then, we calculate
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1483) 	 * the number of blocks the inode occupies.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1484) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1485) 	ext4_ext_replay_set_iblocks(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1486) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1487) 	inode->i_generation = le32_to_cpu(ext4_raw_inode(&iloc)->i_generation);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1488) 	ext4_reset_inode_seed(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1489) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1490) 	ext4_inode_csum_set(inode, ext4_raw_inode(&iloc), EXT4_I(inode));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1491) 	ret = ext4_handle_dirty_metadata(NULL, NULL, iloc.bh);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1492) 	sync_dirty_buffer(iloc.bh);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1493) 	brelse(iloc.bh);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1494) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1495) 	iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1496) 	if (!ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1497) 		blkdev_issue_flush(sb->s_bdev, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1498) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1499) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1500) }
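When replay finds a bogus extent header in the inode body, it reinitializes it and sets eh_max to the number of extent entries that fit in i_block after the header. A small sketch of that arithmetic, using the on-disk ext4 sizes (i_block is 15 32-bit slots, i.e. 60 bytes; the extent header and each extent entry are 12 bytes):

```c
#include <assert.h>

/* On-disk ext4 sizes: i_block body, extent header, one extent entry. */
#define I_BLOCK_BYTES	60
#define EXT_HDR_BYTES	12
#define EXT_ENTRY_BYTES	12

/* Max extents that fit in the inode body after the header, as computed
 * when replay reinitializes a corrupted inline extent header. */
static int inline_eh_max(void)
{
	return (I_BLOCK_BYTES - EXT_HDR_BYTES) / EXT_ENTRY_BYTES;
}
```

This is why an inline (depth-0) ext4 extent tree can hold at most four extents before it must be pushed out to an index block.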
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1501) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1502) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1503)  * Dentry create replay function.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1504)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1505)  * EXT4_FC_TAG_CREAT is preceded by EXT4_FC_TAG_INODE_FULL, which means the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1506)  * inode for which we are trying to create a dentry here should already
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1507)  * have been replayed before we get here.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1508)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1509) static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1510) 				 u8 *val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1511) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1512) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1513) 	struct inode *inode = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1514) 	struct inode *dir = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1515) 	struct dentry_info_args darg;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1516) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1517) 	tl_to_darg(&darg, tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1518) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1519) 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_CREAT, darg.ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1520) 			darg.parent_ino, darg.dname_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1521) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1522) 	/* This takes care of updating the group descriptor and other metadata */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1523) 	ret = ext4_mark_inode_used(sb, darg.ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1524) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1525) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1526) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1527) 	inode = ext4_iget(sb, darg.ino, EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1528) 	if (IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1529) 		jbd_debug(1, "inode %d not found.", darg.ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1530) 		inode = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1531) 		ret = -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1532) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1533) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1534) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1535) 	if (S_ISDIR(inode->i_mode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1536) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1537) 		 * If we are creating a directory, we need to make sure that the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1538) 		 * dot and dot dot dirents are set up properly.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1539) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1540) 		dir = ext4_iget(sb, darg.parent_ino, EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1541) 		if (IS_ERR(dir)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1542) 			jbd_debug(1, "Dir %d not found.", darg.parent_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1543) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1544) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1545) 		ret = ext4_init_new_dir(NULL, dir, inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1546) 		iput(dir);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1547) 		if (ret) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1548) 			ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1549) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1550) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1551) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1552) 	ret = ext4_fc_replay_link_internal(sb, &darg, inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1553) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1554) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1555) 	set_nlink(inode, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1556) 	ext4_mark_inode_dirty(NULL, inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1557) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1558) 	if (inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1559) 		iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1560) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1561) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1562) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1563) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1564)  * Record the physical disk regions that are in use according to the fast
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1565)  * commit area and that are used by inodes during the replay phase. Our
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1566)  * simple replay-phase allocator excludes these regions from allocation.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1567)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1568) int ext4_fc_record_regions(struct super_block *sb, int ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1569) 		ext4_lblk_t lblk, ext4_fsblk_t pblk, int len, int replay)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1570) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1571) 	struct ext4_fc_replay_state *state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1572) 	struct ext4_fc_alloc_region *region;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1573) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1574) 	state = &EXT4_SB(sb)->s_fc_replay_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1575) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1576) 	 * During the replay phase, fc_regions_valid may not be the same as
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1577) 	 * fc_regions_used; bring it up to date before adding new entries.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1578) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1579) 	if (replay && state->fc_regions_used != state->fc_regions_valid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1580) 		state->fc_regions_used = state->fc_regions_valid;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1581) 	if (state->fc_regions_used == state->fc_regions_size) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1582) 		state->fc_regions_size +=
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1583) 			EXT4_FC_REPLAY_REALLOC_INCREMENT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1584) 		state->fc_regions = krealloc(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1585) 					state->fc_regions,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1586) 					state->fc_regions_size *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1587) 					sizeof(struct ext4_fc_alloc_region),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1588) 					GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1589) 		if (!state->fc_regions)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1590) 			return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1591) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1592) 	region = &state->fc_regions[state->fc_regions_used++];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1593) 	region->ino = ino;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1594) 	region->lblk = lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1595) 	region->pblk = pblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1596) 	region->len = len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1597) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1598) 	if (replay)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1599) 		state->fc_regions_valid++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1600) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1601) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1602) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1603) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1604) /* Replay add range tag */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1605) static int ext4_fc_replay_add_range(struct super_block *sb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1606) 				    struct ext4_fc_tl *tl, u8 *val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1607) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1608) 	struct ext4_fc_add_range fc_add_ex;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1609) 	struct ext4_extent newex, *ex;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1610) 	struct inode *inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1611) 	ext4_lblk_t start, cur;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1612) 	int remaining, len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1613) 	ext4_fsblk_t start_pblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1614) 	struct ext4_map_blocks map;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1615) 	struct ext4_ext_path *path = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1616) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1617) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1618) 	memcpy(&fc_add_ex, val, sizeof(fc_add_ex));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1619) 	ex = (struct ext4_extent *)&fc_add_ex.fc_ex;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1620) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1621) 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_ADD_RANGE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1622) 		le32_to_cpu(fc_add_ex.fc_ino), le32_to_cpu(ex->ee_block),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1623) 		ext4_ext_get_actual_len(ex));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1624) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1625) 	inode = ext4_iget(sb, le32_to_cpu(fc_add_ex.fc_ino), EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1626) 	if (IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1627) 		jbd_debug(1, "Inode not found.");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1628) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1629) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1630) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1631) 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1632) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1633) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1634) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1635) 	start = le32_to_cpu(ex->ee_block);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1636) 	start_pblk = ext4_ext_pblock(ex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1637) 	len = ext4_ext_get_actual_len(ex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1638) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1639) 	cur = start;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1640) 	remaining = len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1641) 	jbd_debug(1, "ADD_RANGE, lblk %d, pblk %lld, len %d, unwritten %d, inode %ld\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1642) 		  start, start_pblk, len, ext4_ext_is_unwritten(ex),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1643) 		  inode->i_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1644) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1645) 	while (remaining > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1646) 		map.m_lblk = cur;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1647) 		map.m_len = remaining;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1648) 		map.m_pblk = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1649) 		ret = ext4_map_blocks(NULL, inode, &map, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1650) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1651) 		if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1652) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1653) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1654) 		if (ret == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1655) 			/* Range is not mapped */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1656) 			path = ext4_find_extent(inode, cur, NULL, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1657) 			if (IS_ERR(path))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1658) 				goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1659) 			memset(&newex, 0, sizeof(newex));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1660) 			newex.ee_block = cpu_to_le32(cur);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1661) 			ext4_ext_store_pblock(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1662) 				&newex, start_pblk + cur - start);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1663) 			newex.ee_len = cpu_to_le16(map.m_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1664) 			if (ext4_ext_is_unwritten(ex))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1665) 				ext4_ext_mark_unwritten(&newex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1666) 			down_write(&EXT4_I(inode)->i_data_sem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1667) 			ret = ext4_ext_insert_extent(
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1668) 				NULL, inode, &path, &newex, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1669) 			up_write((&EXT4_I(inode)->i_data_sem));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1670) 			ext4_ext_drop_refs(path);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1671) 			kfree(path);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1672) 			if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1673) 				goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1674) 			goto next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1675) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1676) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1677) 		if (start_pblk + cur - start != map.m_pblk) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1678) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1679) 			 * Logical to physical mapping changed. This can happen
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1680) 			 * if this range was removed and then reallocated to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1681) 			 * map to new physical blocks during a fast commit.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1682) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1683) 			ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1684) 					ext4_ext_is_unwritten(ex),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1685) 					start_pblk + cur - start);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1686) 			if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1687) 				goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1688) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1689) 			 * Mark the old blocks as free since they aren't used
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1690) 			 * anymore. We maintain an array of all the modified
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1691) 			 * inodes. In case these blocks are still used at either
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1692) 			 * a different logical range in the same inode or in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1693) 			 * some different inode, we will mark them as allocated
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1694) 			 * at the end of the FC replay using our array of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1695) 			 * modified inodes.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1696) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1697) 			ext4_mb_mark_bb(inode->i_sb, map.m_pblk, map.m_len, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1698) 			goto next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1699) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1700) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1701) 		/* Range is mapped and needs a state change */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1702) 		jbd_debug(1, "Converting from %ld to %d %lld",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1703) 				map.m_flags & EXT4_MAP_UNWRITTEN,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1704) 			ext4_ext_is_unwritten(ex), map.m_pblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1705) 		ret = ext4_ext_replay_update_ex(inode, cur, map.m_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1706) 					ext4_ext_is_unwritten(ex), map.m_pblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1707) 		if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1708) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1709) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1710) 		 * We may have split the extent tree while toggling the state.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1711) 		 * Try to shrink the extent tree now.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1712) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1713) 		ext4_ext_replay_shrink_inode(inode, start + len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1714) next:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1715) 		cur += map.m_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1716) 		remaining -= map.m_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1717) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1718) 	ext4_ext_replay_shrink_inode(inode, i_size_read(inode) >>
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1719) 					sb->s_blocksize_bits);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1720) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1721) 	iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1722) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1723) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1724) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1725) /* Replay DEL_RANGE tag */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1726) static int
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1727) ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1728) 			 u8 *val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1729) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1730) 	struct inode *inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1731) 	struct ext4_fc_del_range lrange;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1732) 	struct ext4_map_blocks map;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1733) 	ext4_lblk_t cur, remaining;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1734) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1735) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1736) 	memcpy(&lrange, val, sizeof(lrange));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1737) 	cur = le32_to_cpu(lrange.fc_lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1738) 	remaining = le32_to_cpu(lrange.fc_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1739) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1740) 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_DEL_RANGE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1741) 		le32_to_cpu(lrange.fc_ino), cur, remaining);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1742) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1743) 	inode = ext4_iget(sb, le32_to_cpu(lrange.fc_ino), EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1744) 	if (IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1745) 		jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange.fc_ino));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1746) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1747) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1748) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1749) 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1750) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1751) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1752) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1753) 	jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1754) 			inode->i_ino, le32_to_cpu(lrange.fc_lblk),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1755) 			le32_to_cpu(lrange.fc_len));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1756) 	while (remaining > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1757) 		map.m_lblk = cur;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1758) 		map.m_len = remaining;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1759) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1760) 		ret = ext4_map_blocks(NULL, inode, &map, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1761) 		if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1762) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1763) 		if (ret > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1764) 			remaining -= ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1765) 			cur += ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1766) 			ext4_mb_mark_bb(inode->i_sb, map.m_pblk, map.m_len, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1767) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1768) 			remaining -= map.m_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1769) 			cur += map.m_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1770) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1771) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1772) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1773) 	down_write(&EXT4_I(inode)->i_data_sem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1774) 	ret = ext4_ext_remove_space(inode, le32_to_cpu(lrange.fc_lblk),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1775) 				le32_to_cpu(lrange.fc_lblk) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1776) 				le32_to_cpu(lrange.fc_len) - 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1777) 	up_write(&EXT4_I(inode)->i_data_sem);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1778) 	if (ret)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1779) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1780) 	ext4_ext_replay_shrink_inode(inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1781) 		i_size_read(inode) >> sb->s_blocksize_bits);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1782) 	ext4_mark_inode_dirty(NULL, inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1783) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1784) 	iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1785) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1786) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1787) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1788) static inline const char *tag2str(u16 tag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1789) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1790) 	switch (tag) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1791) 	case EXT4_FC_TAG_LINK:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1792) 		return "TAG_ADD_ENTRY";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1793) 	case EXT4_FC_TAG_UNLINK:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1794) 		return "TAG_DEL_ENTRY";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1795) 	case EXT4_FC_TAG_ADD_RANGE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1796) 		return "TAG_ADD_RANGE";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1797) 	case EXT4_FC_TAG_CREAT:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1798) 		return "TAG_CREAT_DENTRY";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1799) 	case EXT4_FC_TAG_DEL_RANGE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1800) 		return "TAG_DEL_RANGE";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1801) 	case EXT4_FC_TAG_INODE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1802) 		return "TAG_INODE";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1803) 	case EXT4_FC_TAG_PAD:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1804) 		return "TAG_PAD";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1805) 	case EXT4_FC_TAG_TAIL:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1806) 		return "TAG_TAIL";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1807) 	case EXT4_FC_TAG_HEAD:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1808) 		return "TAG_HEAD";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1809) 	default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1810) 		return "TAG_ERROR";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1811) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1812) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1813) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1814) static void ext4_fc_set_bitmaps_and_counters(struct super_block *sb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1815) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1816) 	struct ext4_fc_replay_state *state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1817) 	struct inode *inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1818) 	struct ext4_ext_path *path = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1819) 	struct ext4_map_blocks map;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1820) 	int i, ret, j;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1821) 	ext4_lblk_t cur, end;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1822) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1823) 	state = &EXT4_SB(sb)->s_fc_replay_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1824) 	for (i = 0; i < state->fc_modified_inodes_used; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1825) 		inode = ext4_iget(sb, state->fc_modified_inodes[i],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1826) 			EXT4_IGET_NORMAL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1827) 		if (IS_ERR(inode)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1828) 			jbd_debug(1, "Inode %d not found.",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1829) 				state->fc_modified_inodes[i]);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1830) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1831) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1832) 		cur = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1833) 		end = EXT_MAX_BLOCKS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1834) 		while (cur < end) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1835) 			map.m_lblk = cur;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1836) 			map.m_len = end - cur;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1837) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1838) 			ret = ext4_map_blocks(NULL, inode, &map, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1839) 			if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1840) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1841) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1842) 			if (ret > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1843) 				path = ext4_find_extent(inode, map.m_lblk, NULL, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1844) 				if (!IS_ERR(path)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1845) 					for (j = 0; j < path->p_depth; j++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1846) 						ext4_mb_mark_bb(inode->i_sb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1847) 							path[j].p_block, 1, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1848) 					ext4_ext_drop_refs(path);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1849) 					kfree(path);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1850) 				}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1851) 				cur += ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1852) 				ext4_mb_mark_bb(inode->i_sb, map.m_pblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1853) 							map.m_len, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1854) 			} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1855) 				cur = cur + (map.m_len ? map.m_len : 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1856) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1857) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1858) 		iput(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1859) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1860) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1861) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1862) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1863)  * Check if block is in excluded regions for block allocation. The simple
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1864)  * allocator that runs during the replay phase calls this function to see
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1865)  * if it is okay to use a block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1866)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1867) bool ext4_fc_replay_check_excluded(struct super_block *sb, ext4_fsblk_t blk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1868) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1869) 	int i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1870) 	struct ext4_fc_replay_state *state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1871) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1872) 	state = &EXT4_SB(sb)->s_fc_replay_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1873) 	for (i = 0; i < state->fc_regions_valid; i++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1874) 		if (state->fc_regions[i].ino == 0 ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1875) 			state->fc_regions[i].len == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1876) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1877) 		if (blk >= state->fc_regions[i].pblk &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1878) 		    blk < state->fc_regions[i].pblk + state->fc_regions[i].len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1879) 			return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1880) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1881) 	return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1882) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1883) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1884) /* Cleanup function called after replay */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1885) void ext4_fc_replay_cleanup(struct super_block *sb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1886) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1887) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1888) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1889) 	sbi->s_mount_state &= ~EXT4_FC_REPLAY;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1890) 	kfree(sbi->s_fc_replay_state.fc_regions);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1891) 	kfree(sbi->s_fc_replay_state.fc_modified_inodes);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1892) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1893) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1894) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1895)  * Recovery Scan phase handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1896)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1897)  * This function is called during the scan phase and is responsible
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1898)  * for doing the following things:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1899)  * - Make sure the fast commit area has valid tags for replay
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1900)  * - Count number of tags that need to be replayed by the replay handler
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1901)  * - Verify CRC
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1902)  * - Create a list of excluded blocks for allocation during replay phase
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1903)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1904)  * This function returns JBD2_FC_REPLAY_CONTINUE to indicate that SCAN is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1905)  * incomplete and JBD2 should send more blocks. It returns JBD2_FC_REPLAY_STOP
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1906)  * to indicate that scan has finished and JBD2 can now start replay phase.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1907)  * It returns a negative error to indicate that there was an error. At the end
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1908)  * of a successful scan phase, sbi->s_fc_replay_state.fc_replay_num_tags is set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1909)  * to indicate the number of tags that need to be replayed during the replay phase.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1910)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1911) static int ext4_fc_replay_scan(journal_t *journal,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1912) 				struct buffer_head *bh, int off,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1913) 				tid_t expected_tid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1914) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1915) 	struct super_block *sb = journal->j_private;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1916) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1917) 	struct ext4_fc_replay_state *state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1918) 	int ret = JBD2_FC_REPLAY_CONTINUE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1919) 	struct ext4_fc_add_range ext;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1920) 	struct ext4_fc_tl tl;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1921) 	struct ext4_fc_tail tail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1922) 	__u8 *start, *end, *cur, *val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1923) 	struct ext4_fc_head head;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1924) 	struct ext4_extent *ex;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1925) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1926) 	state = &sbi->s_fc_replay_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1927) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1928) 	start = (u8 *)bh->b_data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1929) 	end = (__u8 *)bh->b_data + journal->j_blocksize - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1930) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1931) 	if (state->fc_replay_expected_off == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1932) 		state->fc_cur_tag = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1933) 		state->fc_replay_num_tags = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1934) 		state->fc_crc = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1935) 		state->fc_regions = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1936) 		state->fc_regions_valid = state->fc_regions_used =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1937) 			state->fc_regions_size = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1938) 		/* Check if we can stop early */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1939) 		if (le16_to_cpu(((struct ext4_fc_tl *)start)->fc_tag)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1940) 			!= EXT4_FC_TAG_HEAD)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1941) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1942) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1943) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1944) 	if (off != state->fc_replay_expected_off) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1945) 		ret = -EFSCORRUPTED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1946) 		goto out_err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1947) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1948) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1949) 	state->fc_replay_expected_off++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1950) 	for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1951) 		memcpy(&tl, cur, sizeof(tl));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1952) 		val = cur + sizeof(tl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1953) 		jbd_debug(3, "Scan phase, tag:%s, blk %lld\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1954) 			  tag2str(le16_to_cpu(tl.fc_tag)), bh->b_blocknr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1955) 		switch (le16_to_cpu(tl.fc_tag)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1956) 		case EXT4_FC_TAG_ADD_RANGE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1957) 			memcpy(&ext, val, sizeof(ext));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1958) 			ex = (struct ext4_extent *)&ext.fc_ex;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1959) 			ret = ext4_fc_record_regions(sb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1960) 				le32_to_cpu(ext.fc_ino),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1961) 				le32_to_cpu(ex->ee_block), ext4_ext_pblock(ex),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1962) 				ext4_ext_get_actual_len(ex), 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1963) 			if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1964) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1965) 			ret = JBD2_FC_REPLAY_CONTINUE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1966) 			fallthrough;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1967) 		case EXT4_FC_TAG_DEL_RANGE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1968) 		case EXT4_FC_TAG_LINK:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1969) 		case EXT4_FC_TAG_UNLINK:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1970) 		case EXT4_FC_TAG_CREAT:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1971) 		case EXT4_FC_TAG_INODE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1972) 		case EXT4_FC_TAG_PAD:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1973) 			state->fc_cur_tag++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1974) 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1975) 					sizeof(tl) + le16_to_cpu(tl.fc_len));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1976) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1977) 		case EXT4_FC_TAG_TAIL:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1978) 			state->fc_cur_tag++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1979) 			memcpy(&tail, val, sizeof(tail));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1980) 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1981) 						sizeof(tl) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1982) 						offsetof(struct ext4_fc_tail,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1983) 						fc_crc));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1984) 			if (le32_to_cpu(tail.fc_tid) == expected_tid &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1985) 				le32_to_cpu(tail.fc_crc) == state->fc_crc) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1986) 				state->fc_replay_num_tags = state->fc_cur_tag;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1987) 				state->fc_regions_valid =
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1988) 					state->fc_regions_used;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1989) 			} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1990) 				ret = state->fc_replay_num_tags ?
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1991) 					JBD2_FC_REPLAY_STOP : -EFSBADCRC;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1992) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1993) 			state->fc_crc = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1994) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1995) 		case EXT4_FC_TAG_HEAD:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1996) 			memcpy(&head, val, sizeof(head));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1997) 			if (le32_to_cpu(head.fc_features) &
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1998) 				~EXT4_FC_SUPPORTED_FEATURES) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1999) 				ret = -EOPNOTSUPP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2000) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2001) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2002) 			if (le32_to_cpu(head.fc_tid) != expected_tid) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2003) 				ret = JBD2_FC_REPLAY_STOP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2004) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2005) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2006) 			state->fc_cur_tag++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2007) 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2008) 					    sizeof(tl) + le16_to_cpu(tl.fc_len));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2009) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2010) 		default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2011) 			ret = state->fc_replay_num_tags ?
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2012) 				JBD2_FC_REPLAY_STOP : -ECANCELED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2013) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2014) 		if (ret < 0 || ret == JBD2_FC_REPLAY_STOP)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2015) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2016) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2017) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2018) out_err:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2019) 	trace_ext4_fc_replay_scan(sb, ret, off);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2020) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2021) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2022) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2023) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2024)  * Main recovery path entry point.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2025)  * The meaning of the return codes is the same as for ext4_fc_replay_scan() above.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2026)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2027) static int ext4_fc_replay(journal_t *journal, struct buffer_head *bh,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2028) 				enum passtype pass, int off, tid_t expected_tid)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2029) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2030) 	struct super_block *sb = journal->j_private;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2031) 	struct ext4_sb_info *sbi = EXT4_SB(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2032) 	struct ext4_fc_tl tl;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2033) 	__u8 *start, *end, *cur, *val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2034) 	int ret = JBD2_FC_REPLAY_CONTINUE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2035) 	struct ext4_fc_replay_state *state = &sbi->s_fc_replay_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2036) 	struct ext4_fc_tail tail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2037) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2038) 	if (pass == PASS_SCAN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2039) 		state->fc_current_pass = PASS_SCAN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2040) 		return ext4_fc_replay_scan(journal, bh, off, expected_tid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2041) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2042) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2043) 	if (state->fc_current_pass != pass) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2044) 		state->fc_current_pass = pass;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2045) 		sbi->s_mount_state |= EXT4_FC_REPLAY;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2046) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2047) 	if (!sbi->s_fc_replay_state.fc_replay_num_tags) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2048) 		jbd_debug(1, "Replay stops\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2049) 		ext4_fc_set_bitmaps_and_counters(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2050) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2051) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2052) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2053) #ifdef CONFIG_EXT4_DEBUG
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2054) 	if (sbi->s_fc_debug_max_replay && off >= sbi->s_fc_debug_max_replay) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2055) 		pr_warn("Dropping fc block %d because max_replay set\n", off);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2056) 		return JBD2_FC_REPLAY_STOP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2057) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2058) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2059) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2060) 	start = (u8 *)bh->b_data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2061) 	end = (__u8 *)bh->b_data + journal->j_blocksize - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2062) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2063) 	for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2064) 		memcpy(&tl, cur, sizeof(tl));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2065) 		val = cur + sizeof(tl);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2066) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2067) 		if (state->fc_replay_num_tags == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2068) 			ret = JBD2_FC_REPLAY_STOP;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2069) 			ext4_fc_set_bitmaps_and_counters(sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2070) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2071) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2072) 		jbd_debug(3, "Replay phase, tag:%s\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2073) 				tag2str(le16_to_cpu(tl.fc_tag)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2074) 		state->fc_replay_num_tags--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2075) 		switch (le16_to_cpu(tl.fc_tag)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2076) 		case EXT4_FC_TAG_LINK:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2077) 			ret = ext4_fc_replay_link(sb, &tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2078) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2079) 		case EXT4_FC_TAG_UNLINK:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2080) 			ret = ext4_fc_replay_unlink(sb, &tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2081) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2082) 		case EXT4_FC_TAG_ADD_RANGE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2083) 			ret = ext4_fc_replay_add_range(sb, &tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2084) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2085) 		case EXT4_FC_TAG_CREAT:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2086) 			ret = ext4_fc_replay_create(sb, &tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2087) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2088) 		case EXT4_FC_TAG_DEL_RANGE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2089) 			ret = ext4_fc_replay_del_range(sb, &tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2090) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2091) 		case EXT4_FC_TAG_INODE:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2092) 			ret = ext4_fc_replay_inode(sb, &tl, val);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2093) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2094) 		case EXT4_FC_TAG_PAD:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2095) 			trace_ext4_fc_replay(sb, EXT4_FC_TAG_PAD, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2096) 					     le16_to_cpu(tl.fc_len), 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2097) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2098) 		case EXT4_FC_TAG_TAIL:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2099) 			trace_ext4_fc_replay(sb, EXT4_FC_TAG_TAIL, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2100) 					     le16_to_cpu(tl.fc_len), 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2101) 			memcpy(&tail, val, sizeof(tail));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2102) 			WARN_ON(le32_to_cpu(tail.fc_tid) != expected_tid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2103) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2104) 		case EXT4_FC_TAG_HEAD:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2105) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2106) 		default:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2107) 			trace_ext4_fc_replay(sb, le16_to_cpu(tl.fc_tag), 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2108) 					     le16_to_cpu(tl.fc_len), 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2109) 			ret = -ECANCELED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2110) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2111) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2112) 		if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2113) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2114) 		ret = JBD2_FC_REPLAY_CONTINUE;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2115) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2116) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2117) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2118) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2119) void ext4_fc_init(struct super_block *sb, journal_t *journal)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2120) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2121) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2122) 	 * We set the replay callback even if fast commit is disabled because we
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2123) 	 * could still have fast commit blocks that need to be replayed even if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2124) 	 * fast commit has now been turned off.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2125) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2126) 	journal->j_fc_replay_callback = ext4_fc_replay;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2127) 	if (!test_opt2(sb, JOURNAL_FAST_COMMIT))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2128) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2129) 	journal->j_fc_cleanup_callback = ext4_fc_cleanup;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2130) }

/* Entries must stay in the same order as the EXT4_FC_REASON_* enum. */
static const char *fc_ineligible_reasons[] = {
	"Extended attributes changed",
	"Cross rename",
	"Journal flag changed",
	"Insufficient memory",
	"Swap boot",
	"Resize",
	"Dir renamed",
	"Falloc range op",
	"Data journalling",
	"FC Commit Failed"
};

int ext4_fc_info_show(struct seq_file *seq, void *v)
{
	struct ext4_sb_info *sbi = EXT4_SB((struct super_block *)seq->private);
	struct ext4_fc_stats *stats = &sbi->s_fc_stats;
	int i;

	if (v != SEQ_START_TOKEN)
		return 0;

	seq_printf(seq,
		"fc stats:\n%ld commits\n%ld ineligible\n%ld numblks\n%lluus avg_commit_time\n",
		   stats->fc_num_commits, stats->fc_ineligible_commits,
		   stats->fc_numblks,
		   div_u64(sbi->s_fc_avg_commit_time, 1000));
	seq_puts(seq, "Ineligible reasons:\n");
	for (i = 0; i < EXT4_FC_REASON_MAX; i++)
		seq_printf(seq, "\"%s\":\t%d\n", fc_ineligible_reasons[i],
			stats->fc_ineligible_reason_count[i]);

	return 0;
}

int __init ext4_fc_init_dentry_cache(void)
{
	ext4_fc_dentry_cachep = KMEM_CACHE(ext4_fc_dentry_update,
					   SLAB_RECLAIM_ACCOUNT);

	if (ext4_fc_dentry_cachep == NULL)
		return -ENOMEM;

	return 0;
}

void ext4_fc_destroy_dentry_cache(void)
{
	kmem_cache_destroy(ext4_fc_dentry_cachep);
}