Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0
/*
 *  fs/ext4/extents_status.c
 *
 * Written by Yongqiang Yang <xiaoqiangnk@gmail.com>
 * Modified by
 *	Allison Henderson <achender@linux.vnet.ibm.com>
 *	Hugh Dickins <hughd@google.com>
 *	Zheng Liu <wenqing.lz@taobao.com>
 *
 * Ext4 extents status tree core functions.
 */
#include <linux/list_sort.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include "ext4.h"

#include <trace/events/ext4.h>

/*
 * According to previous discussion in the Ext4 Developer Workshop, we
 * will introduce a new structure called io tree to track all extent
 * status in order to solve some problems that we have met
 * (e.g. reservation space warning), and to provide extent-level locking.
 * The delayed extent tree is the first step towards this goal.  It was
 * originally built by Yongqiang Yang.  At that time it was called the
 * delay extent tree, whose only goal was to track delayed extents in
 * memory to simplify the implementation of fiemap and bigalloc, and to
 * introduce lseek SEEK_DATA/SEEK_HOLE support.  That is why it was still
 * called the delay extent tree in the first commit.  But to better
 * convey what it does, it has been renamed to the extent status tree.
 *
 * Step1:
 * Currently the first step has been done.  All delayed extents are
 * tracked in the tree.  The tree maintains a delayed extent from the
 * time the delayed allocation is issued until the delayed extent is
 * written out or invalidated.  Therefore the implementations of fiemap
 * and bigalloc are simplified, and SEEK_DATA/SEEK_HOLE are introduced.
 *
 * The following comment describes the implementation of the extent
 * status tree and future work.
 *
 * Step2:
 * In this step all extent status is tracked by the extent status tree.
 * Thus, we can first try to look up a block mapping in this tree before
 * searching the extent tree.  Hence, the single extent cache can be
 * removed because the extent status tree can do a better job.  Extents
 * in the status tree are loaded on demand, so the extent status tree
 * may not contain all of the extents in a file.  Meanwhile we define a
 * shrinker to reclaim memory from the extent status tree, because a
 * fragmented extent tree would make the status tree cost too much
 * memory.  Written/unwritten/hole extents in the tree will be reclaimed
 * by this shrinker when we are under high memory pressure.  Delayed
 * extents will not be reclaimed because fiemap, bigalloc, and
 * seek_data/hole need them.
 */

/*
 * Extent status tree implementation for ext4.
 *
 *
 * ==========================================================================
 * The extent status tree tracks all extent status.
 *
 * 1. Why do we need to implement the extent status tree?
 *
 * Without the extent status tree, ext4 identifies a delayed extent by
 * looking up the page cache; this has several deficiencies - the code is
 * complicated, buggy, and inefficient.
 *
 * FIEMAP, SEEK_HOLE/DATA, bigalloc, and writeout all need to know whether
 * a block or a range of blocks belongs to a delayed extent.
 *
 * Let us have a look at how they worked without the extent status tree.
 *   --	FIEMAP
 *	FIEMAP looks up the page cache to distinguish delayed allocations
 *	from holes.
 *
 *   --	SEEK_HOLE/DATA
 *	SEEK_HOLE/DATA has the same problem as FIEMAP.
 *
 *   --	bigalloc
 *	bigalloc looks up the page cache to figure out whether a block is
 *	already under delayed allocation, to determine whether quota
 *	reservation is needed for the cluster.
 *
 *   --	writeout
 *	Writeout looks up the whole page cache to see whether a buffer is
 *	mapped; if there are not very many delayed buffers, this is time
 *	consuming.
 *
 * With the extent status tree implementation, FIEMAP, SEEK_HOLE/DATA,
 * bigalloc and writeout can figure out whether a block or a range of
 * blocks is under delayed allocation (i.e. belongs to a delayed extent)
 * by searching the extent status tree.
 *
 *
 * ==========================================================================
 * 2. Ext4 extent status tree implementation
 *
 *   --	extent
 *	An extent is a range of blocks which are logically and physically
 *	contiguous.  Unlike an extent in the extent tree, this extent is an
 *	in-memory struct; there is no corresponding on-disk data.  There is
 *	no limit on the length of an extent, so an extent can contain as
 *	many blocks as are logically and physically contiguous.
 *
 *   --	extent status tree
 *	Every inode has an extent status tree, and all allocated blocks are
 *	added to the tree with their respective status.  The extents in the
 *	tree are ordered by logical block number.
 *
 *   --	operations on an extent status tree
 *	There are three important operations on an extent status tree:
 *	finding the next extent, adding an extent (a range of blocks), and
 *	removing an extent.
 *
 *   --	race on an extent status tree
 *	The extent status tree is protected by inode->i_es_lock.
 *
 *   --	memory consumption
 *	A fragmented extent tree will make the extent status tree cost too
 *	much memory.  Hence, we reclaim written/unwritten/hole extents from
 *	the tree under heavy memory pressure.
 *
 *
 * ==========================================================================
 * 3. Performance analysis
 *
 *   --	overhead
 *	1. There is a cached extent for write access, so if writes are not
 *	very random, adding-space operations take O(1) time.
 *
 *   --	gain
 *	2. The code is much simpler, more readable, more maintainable and
 *	more efficient.
 *
 *
 * ==========================================================================
 * 4. TODO list
 *
 *   -- Refactor delayed space reservation
 *
 *   -- Extent-level locking
 */
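As a rough user-space illustration of the ordering invariant described above: extents are kept sorted by logical block number, so a comparator like the following decides placement in the tree. The struct and function names here are hypothetical stand-ins, not the kernel's.

```c
#include <assert.h>

/* Hypothetical user-space model of an extent status entry. */
struct demo_extent {
	unsigned int es_lblk;	/* first logical block covered */
	unsigned int es_len;	/* number of blocks covered */
};

/* Order extents by logical block number, as the rb-tree in this file
 * does: negative if a sorts before b, zero if equal, positive if after. */
static int demo_extent_cmp(const struct demo_extent *a,
			   const struct demo_extent *b)
{
	if (a->es_lblk < b->es_lblk)
		return -1;
	return a->es_lblk > b->es_lblk;
}
```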

static struct kmem_cache *ext4_es_cachep;
static struct kmem_cache *ext4_pending_cachep;

static int __es_insert_extent(struct inode *inode, struct extent_status *newes);
static int __es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
			      ext4_lblk_t end, int *reserved);
static int es_reclaim_extents(struct ext4_inode_info *ei, int *nr_to_scan);
static int __es_shrink(struct ext4_sb_info *sbi, int nr_to_scan,
		       struct ext4_inode_info *locked_ei);
static void __revise_pending(struct inode *inode, ext4_lblk_t lblk,
			     ext4_lblk_t len);

int __init ext4_init_es(void)
{
	ext4_es_cachep = kmem_cache_create("ext4_extent_status",
					   sizeof(struct extent_status),
					   0, (SLAB_RECLAIM_ACCOUNT), NULL);
	if (ext4_es_cachep == NULL)
		return -ENOMEM;
	return 0;
}

void ext4_exit_es(void)
{
	kmem_cache_destroy(ext4_es_cachep);
}

void ext4_es_init_tree(struct ext4_es_tree *tree)
{
	tree->root = RB_ROOT;
	tree->cache_es = NULL;
}

#ifdef ES_DEBUG__
static void ext4_es_print_tree(struct inode *inode)
{
	struct ext4_es_tree *tree;
	struct rb_node *node;

	printk(KERN_DEBUG "status extents for inode %lu:", inode->i_ino);
	tree = &EXT4_I(inode)->i_es_tree;
	node = rb_first(&tree->root);
	while (node) {
		struct extent_status *es;

		es = rb_entry(node, struct extent_status, rb_node);
		printk(KERN_DEBUG " [%u/%u) %llu %x",
		       es->es_lblk, es->es_len,
		       ext4_es_pblock(es), ext4_es_status(es));
		node = rb_next(node);
	}
	printk(KERN_DEBUG "\n");
}
#else
#define ext4_es_print_tree(inode)
#endif

static inline ext4_lblk_t ext4_es_end(struct extent_status *es)
{
	BUG_ON(es->es_lblk + es->es_len < es->es_lblk);
	return es->es_lblk + es->es_len - 1;
}

/*
 * Search through the tree for a delayed extent with a given offset.  If
 * it can't be found, try to find the next extent.
 */
static struct extent_status *__es_tree_search(struct rb_root *root,
					      ext4_lblk_t lblk)
{
	struct rb_node *node = root->rb_node;
	struct extent_status *es = NULL;

	while (node) {
		es = rb_entry(node, struct extent_status, rb_node);
		if (lblk < es->es_lblk)
			node = node->rb_left;
		else if (lblk > ext4_es_end(es))
			node = node->rb_right;
		else
			return es;
	}

	if (es && lblk < es->es_lblk)
		return es;

	if (es && lblk > ext4_es_end(es)) {
		node = rb_next(&es->rb_node);
		return node ? rb_entry(node, struct extent_status, rb_node) :
			      NULL;
	}

	return NULL;
}

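__es_tree_search() returns either the extent containing @lblk or, failing that, the next extent beyond it. The same "containing-or-next" contract can be sketched in user space over a sorted array instead of an rb-tree; the names below are hypothetical, not part of ext4.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct extent_status. */
struct ext {
	unsigned int lblk;	/* first logical block */
	unsigned int len;	/* number of blocks */
};

/* Last block covered, mirroring ext4_es_end(). */
static unsigned int ext_end(const struct ext *e)
{
	return e->lblk + e->len - 1;
}

/* Same contract as __es_tree_search(), but over a sorted array: return
 * the lblk of the extent containing @lblk, or of the next extent after
 * it, or a sentinel if neither exists. */
static unsigned int search_demo(unsigned int lblk)
{
	static const struct ext tree[] = { { 10, 5 }, { 30, 2 } };
	size_t i;

	for (i = 0; i < sizeof(tree) / sizeof(tree[0]); i++)
		if (lblk <= ext_end(&tree[i]))
			return tree[i].lblk;
	return 0xffffffffu;	/* past the last extent */
}
```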
/*
 * __es_find_extent_range - find extent with specified status within block
 *                          range or next extent following block range in
 *                          extents status tree
 *
 * @inode - file containing the range
 * @matching_fn - pointer to function that matches extents with desired status
 * @lblk - logical block defining start of range
 * @end - logical block defining end of range
 * @es - extent found, if any
 *
 * Find the first extent within the block range specified by @lblk and @end
 * in the extents status tree that satisfies @matching_fn.  If a match
 * is found, it's returned in @es.  If not, and a matching extent is found
 * beyond the block range, it's returned in @es.  If no match is found, an
 * extent is returned in @es whose es_lblk, es_len, and es_pblk components
 * are 0.
 */
static void __es_find_extent_range(struct inode *inode,
				   int (*matching_fn)(struct extent_status *es),
				   ext4_lblk_t lblk, ext4_lblk_t end,
				   struct extent_status *es)
{
	struct ext4_es_tree *tree = NULL;
	struct extent_status *es1 = NULL;
	struct rb_node *node;

	WARN_ON(es == NULL);
	WARN_ON(end < lblk);

	tree = &EXT4_I(inode)->i_es_tree;

	/* see if the extent has been cached */
	es->es_lblk = es->es_len = es->es_pblk = 0;
	if (tree->cache_es) {
		es1 = tree->cache_es;
		if (in_range(lblk, es1->es_lblk, es1->es_len)) {
			es_debug("%u cached by [%u/%u) %llu %x\n",
				 lblk, es1->es_lblk, es1->es_len,
				 ext4_es_pblock(es1), ext4_es_status(es1));
			goto out;
		}
	}

	es1 = __es_tree_search(&tree->root, lblk);

out:
	if (es1 && !matching_fn(es1)) {
		while ((node = rb_next(&es1->rb_node)) != NULL) {
			es1 = rb_entry(node, struct extent_status, rb_node);
			if (es1->es_lblk > end) {
				es1 = NULL;
				break;
			}
			if (matching_fn(es1))
				break;
		}
	}

	if (es1 && matching_fn(es1)) {
		tree->cache_es = es1;
		es->es_lblk = es1->es_lblk;
		es->es_len = es1->es_len;
		es->es_pblk = es1->es_pblk;
	}
}

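The cached-extent fast path above uses the kernel's in_range() helper to test whether @lblk falls inside the cached extent. A user-space equivalent of that containment test (function name hypothetical), written to avoid overflow in the addition:

```c
#include <assert.h>
#include <stdbool.h>

/* Does block @b fall inside the extent starting at @first with @len
 * blocks?  Behaves like in_range(b, first, len) for len > 0, but uses
 * a subtraction so first + len cannot wrap around. */
static bool blk_in_range(unsigned int b, unsigned int first, unsigned int len)
{
	return b >= first && b - first < len;
}
```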
/*
 * Locking for __es_find_extent_range() for external use
 */
void ext4_es_find_extent_range(struct inode *inode,
			       int (*matching_fn)(struct extent_status *es),
			       ext4_lblk_t lblk, ext4_lblk_t end,
			       struct extent_status *es)
{
	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
		return;

	trace_ext4_es_find_extent_range_enter(inode, lblk);

	read_lock(&EXT4_I(inode)->i_es_lock);
	__es_find_extent_range(inode, matching_fn, lblk, end, es);
	read_unlock(&EXT4_I(inode)->i_es_lock);

	trace_ext4_es_find_extent_range_exit(inode, es);
}

/*
 * __es_scan_range - search block range for block with specified status
 *                   in extents status tree
 *
 * @inode - file containing the range
 * @matching_fn - pointer to function that matches extents with desired status
 * @lblk - logical block defining start of range
 * @end - logical block defining end of range
 *
 * Returns true if at least one block in the specified block range satisfies
 * the criterion specified by @matching_fn, and false if not.  If at least
 * one extent has the specified status, then there is at least one block
 * in the range with that status.  Should only be called by code that has
 * taken i_es_lock.
 */
static bool __es_scan_range(struct inode *inode,
			    int (*matching_fn)(struct extent_status *es),
			    ext4_lblk_t start, ext4_lblk_t end)
{
	struct extent_status es;

	__es_find_extent_range(inode, matching_fn, start, end, &es);
	if (es.es_len == 0)
		return false;   /* no matching extent in the tree */
	else if (es.es_lblk <= start &&
		 start < es.es_lblk + es.es_len)
		return true;
	else if (start <= es.es_lblk && es.es_lblk <= end)
		return true;
	else
		return false;
}

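The two true branches above amount to one interval question: does the first matching extent intersect [start, end]? A user-space restatement of that test (function name hypothetical) makes the logic explicit: the extent either contains @start, or it begins somewhere within the queried range.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the interval test in __es_scan_range(): given the first
 * matching extent covering [es_lblk, es_lblk + es_len), report whether
 * it overlaps the queried range [start, end] (end inclusive). */
static bool range_overlaps(unsigned int es_lblk, unsigned int es_len,
			   unsigned int start, unsigned int end)
{
	if (es_len == 0)
		return false;			/* no matching extent at all */
	if (es_lblk <= start && start < es_lblk + es_len)
		return true;			/* extent contains start */
	if (start <= es_lblk && es_lblk <= end)
		return true;			/* extent begins inside range */
	return false;
}
```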

/*
 * Locking for __es_scan_range() for external use
 */
bool ext4_es_scan_range(struct inode *inode,
			int (*matching_fn)(struct extent_status *es),
			ext4_lblk_t lblk, ext4_lblk_t end)
{
	bool ret;

	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
		return false;

	read_lock(&EXT4_I(inode)->i_es_lock);
	ret = __es_scan_range(inode, matching_fn, lblk, end);
	read_unlock(&EXT4_I(inode)->i_es_lock);

	return ret;
}

/*
 * __es_scan_clu - search cluster for block with specified status in
 *                 extents status tree
 *
 * @inode - file containing the cluster
 * @matching_fn - pointer to function that matches extents with desired status
 * @lblk - logical block in cluster to be searched
 *
 * Returns true if at least one extent in the cluster containing @lblk
 * satisfies the criterion specified by @matching_fn, and false if not.  If at
 * least one extent has the specified status, then there is at least one block
 * in the cluster with that status.  Should only be called by code that has
 * taken i_es_lock.
 */
static bool __es_scan_clu(struct inode *inode,
			  int (*matching_fn)(struct extent_status *es),
			  ext4_lblk_t lblk)
{
	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
	ext4_lblk_t lblk_start, lblk_end;

	lblk_start = EXT4_LBLK_CMASK(sbi, lblk);
	lblk_end = lblk_start + sbi->s_cluster_ratio - 1;

	return __es_scan_range(inode, matching_fn, lblk_start, lblk_end);
}

^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  404) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  405)  * Locking for __es_scan_clu() for external use
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  406)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  407) bool ext4_es_scan_clu(struct inode *inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  408) 		      int (*matching_fn)(struct extent_status *es),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  409) 		      ext4_lblk_t lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  410) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  411) 	bool ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  412) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  413) 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  414) 		return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  415) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  416) 	read_lock(&EXT4_I(inode)->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  417) 	ret = __es_scan_clu(inode, matching_fn, lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  418) 	read_unlock(&EXT4_I(inode)->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  419) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  420) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  421) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  422) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  423) static void ext4_es_list_add(struct inode *inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  424) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  425) 	struct ext4_inode_info *ei = EXT4_I(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  426) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  427) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  428) 	if (!list_empty(&ei->i_es_list))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  429) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  430) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  431) 	spin_lock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  432) 	if (list_empty(&ei->i_es_list)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  433) 		list_add_tail(&ei->i_es_list, &sbi->s_es_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  434) 		sbi->s_es_nr_inode++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  435) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  436) 	spin_unlock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  437) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  438) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  439) static void ext4_es_list_del(struct inode *inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  440) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  441) 	struct ext4_inode_info *ei = EXT4_I(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  442) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  443) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  444) 	spin_lock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  445) 	if (!list_empty(&ei->i_es_list)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  446) 		list_del_init(&ei->i_es_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  447) 		sbi->s_es_nr_inode--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  448) 		WARN_ON_ONCE(sbi->s_es_nr_inode < 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  449) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  450) 	spin_unlock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  451) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  452) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  453) static struct extent_status *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  454) ext4_es_alloc_extent(struct inode *inode, ext4_lblk_t lblk, ext4_lblk_t len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  455) 		     ext4_fsblk_t pblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  456) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  457) 	struct extent_status *es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  458) 	es = kmem_cache_alloc(ext4_es_cachep, GFP_ATOMIC);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  459) 	if (es == NULL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  460) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  461) 	es->es_lblk = lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  462) 	es->es_len = len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  463) 	es->es_pblk = pblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  464) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  465) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  466) 	 * We don't count delayed extents because we never try to reclaim them
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  467) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  468) 	if (!ext4_es_is_delayed(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  469) 		if (!EXT4_I(inode)->i_es_shk_nr++)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  470) 			ext4_es_list_add(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  471) 		percpu_counter_inc(&EXT4_SB(inode->i_sb)->
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  472) 					s_es_stats.es_stats_shk_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  473) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  474) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  475) 	EXT4_I(inode)->i_es_all_nr++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  476) 	percpu_counter_inc(&EXT4_SB(inode->i_sb)->s_es_stats.es_stats_all_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  477) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  478) 	return es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  479) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  480) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  481) static void ext4_es_free_extent(struct inode *inode, struct extent_status *es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  482) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  483) 	EXT4_I(inode)->i_es_all_nr--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  484) 	percpu_counter_dec(&EXT4_SB(inode->i_sb)->s_es_stats.es_stats_all_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  485) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  486) 	/* Decrease the shrink counter when this es is not delayed */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  487) 	if (!ext4_es_is_delayed(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  488) 		BUG_ON(EXT4_I(inode)->i_es_shk_nr == 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  489) 		if (!--EXT4_I(inode)->i_es_shk_nr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  490) 			ext4_es_list_del(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  491) 		percpu_counter_dec(&EXT4_SB(inode->i_sb)->
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  492) 					s_es_stats.es_stats_shk_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  493) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  494) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  495) 	kmem_cache_free(ext4_es_cachep, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  496) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  497) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  498) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  499)  * Check whether or not two extents can be merged.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  500)  * Conditions:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  501)  *  - logical block numbers are contiguous
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  502)  *  - physical block numbers are contiguous
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  503)  *  - status is equal
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  504)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  505) static int ext4_es_can_be_merged(struct extent_status *es1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  506) 				 struct extent_status *es2)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  507) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  508) 	if (ext4_es_type(es1) != ext4_es_type(es2))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  509) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  510) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  511) 	if (((__u64) es1->es_len) + es2->es_len > EXT_MAX_BLOCKS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  512) 		pr_warn("ES assertion failed when merging extents. "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  513) 			"The sum of lengths of es1 (%d) and es2 (%d) "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  514) 			"is bigger than the allowed file size (%d)\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  515) 			es1->es_len, es2->es_len, EXT_MAX_BLOCKS);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  516) 		WARN_ON(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  517) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  518) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  519) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  520) 	if (((__u64) es1->es_lblk) + es1->es_len != es2->es_lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  521) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  522) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  523) 	if ((ext4_es_is_written(es1) || ext4_es_is_unwritten(es1)) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  524) 	    (ext4_es_pblock(es1) + es1->es_len == ext4_es_pblock(es2)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  525) 		return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  526) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  527) 	if (ext4_es_is_hole(es1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  528) 		return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  529) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  530) 	/* we need to check that the delayed extent is not also unwritten */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  531) 	if (ext4_es_is_delayed(es1) && !ext4_es_is_unwritten(es1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  532) 		return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  533) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  534) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  535) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  536) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  537) static struct extent_status *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  538) ext4_es_try_to_merge_left(struct inode *inode, struct extent_status *es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  539) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  540) 	struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  541) 	struct extent_status *es1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  542) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  543) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  544) 	node = rb_prev(&es->rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  545) 	if (!node)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  546) 		return es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  547) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  548) 	es1 = rb_entry(node, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  549) 	if (ext4_es_can_be_merged(es1, es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  550) 		es1->es_len += es->es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  551) 		if (ext4_es_is_referenced(es))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  552) 			ext4_es_set_referenced(es1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  553) 		rb_erase(&es->rb_node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  554) 		ext4_es_free_extent(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  555) 		es = es1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  556) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  557) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  558) 	return es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  559) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  560) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  561) static struct extent_status *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  562) ext4_es_try_to_merge_right(struct inode *inode, struct extent_status *es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  563) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  564) 	struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  565) 	struct extent_status *es1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  566) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  567) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  568) 	node = rb_next(&es->rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  569) 	if (!node)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  570) 		return es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  571) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  572) 	es1 = rb_entry(node, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  573) 	if (ext4_es_can_be_merged(es, es1)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  574) 		es->es_len += es1->es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  575) 		if (ext4_es_is_referenced(es1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  576) 			ext4_es_set_referenced(es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  577) 		rb_erase(node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  578) 		ext4_es_free_extent(inode, es1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  579) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  580) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  581) 	return es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  582) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  583) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  584) #ifdef ES_AGGRESSIVE_TEST
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  585) #include "ext4_extents.h"	/* Needed when ES_AGGRESSIVE_TEST is defined */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  586) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  587) static void ext4_es_insert_extent_ext_check(struct inode *inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  588) 					    struct extent_status *es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  589) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  590) 	struct ext4_ext_path *path = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  591) 	struct ext4_extent *ex;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  592) 	ext4_lblk_t ee_block;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  593) 	ext4_fsblk_t ee_start;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  594) 	unsigned short ee_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  595) 	int depth, ee_status, es_status;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  596) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  597) 	path = ext4_find_extent(inode, es->es_lblk, NULL, EXT4_EX_NOCACHE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  598) 	if (IS_ERR(path))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  599) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  600) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  601) 	depth = ext_depth(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  602) 	ex = path[depth].p_ext;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  603) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  604) 	if (ex) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  605) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  606) 		ee_block = le32_to_cpu(ex->ee_block);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  607) 		ee_start = ext4_ext_pblock(ex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  608) 		ee_len = ext4_ext_get_actual_len(ex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  609) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  610) 		ee_status = ext4_ext_is_unwritten(ex) ? 1 : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  611) 		es_status = ext4_es_is_unwritten(es) ? 1 : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  612) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  613) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  614) 		 * Make sure ex and es do not overlap when we try to insert
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  615) 		 * a delayed/hole extent.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  616) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  617) 		if (!ext4_es_is_written(es) && !ext4_es_is_unwritten(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  618) 			if (in_range(es->es_lblk, ee_block, ee_len)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  619) 				pr_warn("ES insert assertion failed for "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  620) 					"inode: %lu we can find an extent "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  621) 					"at block [%d/%d/%llu/%c], but we "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  622) 					"want to add a delayed/hole extent "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  623) 					"[%d/%d/%llu/%x]\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  624) 					inode->i_ino, ee_block, ee_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  625) 					ee_start, ee_status ? 'u' : 'w',
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  626) 					es->es_lblk, es->es_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  627) 					ext4_es_pblock(es), ext4_es_status(es));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  628) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  629) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  630) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  631) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  632) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  633) 		 * We don't check ee_block == es->es_lblk, etc. because es
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  634) 		 * might be a part of the whole extent, and vice versa.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  635) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  636) 		if (es->es_lblk < ee_block ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  637) 		    ext4_es_pblock(es) != ee_start + es->es_lblk - ee_block) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  638) 			pr_warn("ES insert assertion failed for inode: %lu "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  639) 				"ex_status [%d/%d/%llu/%c] != "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  640) 				"es_status [%d/%d/%llu/%c]\n", inode->i_ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  641) 				ee_block, ee_len, ee_start,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  642) 				ee_status ? 'u' : 'w', es->es_lblk, es->es_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  643) 				ext4_es_pblock(es), es_status ? 'u' : 'w');
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  644) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  645) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  646) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  647) 		if (ee_status ^ es_status) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  648) 			pr_warn("ES insert assertion failed for inode: %lu "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  649) 				"ex_status [%d/%d/%llu/%c] != "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  650) 				"es_status [%d/%d/%llu/%c]\n", inode->i_ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  651) 				ee_block, ee_len, ee_start,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  652) 				ee_status ? 'u' : 'w', es->es_lblk, es->es_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  653) 				ext4_es_pblock(es), es_status ? 'u' : 'w');
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  654) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  655) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  656) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  657) 		 * We can't find an extent on disk.  So we need to make sure
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  658) 		 * that we are not trying to add a written/unwritten extent.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  659) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  660) 		if (!ext4_es_is_delayed(es) && !ext4_es_is_hole(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  661) 			pr_warn("ES insert assertion failed for inode: %lu "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  662) 				"can't find an extent at block %d but we want "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  663) 				"to add a written/unwritten extent "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  664) 				"[%d/%d/%llu/%x]\n", inode->i_ino,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  665) 				es->es_lblk, es->es_lblk, es->es_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  666) 				ext4_es_pblock(es), ext4_es_status(es));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  667) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  668) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  669) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  670) 	ext4_ext_drop_refs(path);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  671) 	kfree(path);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  672) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  673) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  674) static void ext4_es_insert_extent_ind_check(struct inode *inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  675) 					    struct extent_status *es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  676) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  677) 	struct ext4_map_blocks map;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  678) 	int retval;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  679) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  680) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  681) 	 * Here we call ext4_ind_map_blocks to look up a block mapping because
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  682) 	 * the 'Indirect' structure is defined in indirect.c.  So we can't
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  683) 	 * access the direct/indirect tree from outside it.  It would be too
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  684) 	 * dirty to define this function in indirect.c.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  685) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  686) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  687) 	map.m_lblk = es->es_lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  688) 	map.m_len = es->es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  689) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  690) 	retval = ext4_ind_map_blocks(NULL, inode, &map, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  691) 	if (retval > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  692) 		if (ext4_es_is_delayed(es) || ext4_es_is_hole(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  693) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  694) 			 * We want to add a delayed/hole extent but this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  695) 			 * block has been allocated.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  696) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  697) 			pr_warn("ES insert assertion failed for inode: %lu "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  698) 				"We can find blocks but we want to add a "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  699) 				"delayed/hole extent [%d/%d/%llu/%x]\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  700) 				inode->i_ino, es->es_lblk, es->es_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  701) 				ext4_es_pblock(es), ext4_es_status(es));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  702) 			return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  703) 		} else if (ext4_es_is_written(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  704) 			if (retval != es->es_len) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  705) 				pr_warn("ES insert assertion failed for "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  706) 					"inode: %lu retval %d != es_len %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  707) 					inode->i_ino, retval, es->es_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  708) 				return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  709) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  710) 			if (map.m_pblk != ext4_es_pblock(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  711) 				pr_warn("ES insert assertion failed for "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  712) 					"inode: %lu m_pblk %llu != "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  713) 					"es_pblk %llu\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  714) 					inode->i_ino, map.m_pblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  715) 					ext4_es_pblock(es));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  716) 				return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  717) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  718) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  719) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  720) 			 * We don't need to check unwritten extents because
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  721) 			 * indirect-based files don't have them.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  722) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  723) 			BUG();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  724) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  725) 	} else if (retval == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  726) 		if (ext4_es_is_written(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  727) 			pr_warn("ES insert assertion failed for inode: %lu "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  728) 				"We can't find the block but we want to add "
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  729) 				"a written extent [%d/%d/%llu/%x]\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  730) 				inode->i_ino, es->es_lblk, es->es_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  731) 				ext4_es_pblock(es), ext4_es_status(es));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  732) 			return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  733) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  734) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  735) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  736) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  737) static inline void ext4_es_insert_extent_check(struct inode *inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  738) 					       struct extent_status *es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  739) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  740) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  741) 	 * We don't need to worry about the race condition because the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  742) 	 * caller holds i_data_sem.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  743) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  744) 	BUG_ON(!rwsem_is_locked(&EXT4_I(inode)->i_data_sem));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  745) 	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  746) 		ext4_es_insert_extent_ext_check(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  747) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  748) 		ext4_es_insert_extent_ind_check(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  749) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  750) #else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  751) static inline void ext4_es_insert_extent_check(struct inode *inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  752) 					       struct extent_status *es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  753) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  754) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  755) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  756) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  757) static int __es_insert_extent(struct inode *inode, struct extent_status *newes)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  758) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  759) 	struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  760) 	struct rb_node **p = &tree->root.rb_node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  761) 	struct rb_node *parent = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  762) 	struct extent_status *es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  763) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  764) 	while (*p) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  765) 		parent = *p;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  766) 		es = rb_entry(parent, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  767) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  768) 		if (newes->es_lblk < es->es_lblk) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  769) 			if (ext4_es_can_be_merged(newes, es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  770) 				/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  771) 				 * Here we can modify es_lblk directly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  772) 				 * because the extents don't overlap.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  773) 				 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  774) 				es->es_lblk = newes->es_lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  775) 				es->es_len += newes->es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  776) 				if (ext4_es_is_written(es) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  777) 				    ext4_es_is_unwritten(es))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  778) 					ext4_es_store_pblock(es,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  779) 							     newes->es_pblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  780) 				es = ext4_es_try_to_merge_left(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  781) 				goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  782) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  783) 			p = &(*p)->rb_left;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  784) 		} else if (newes->es_lblk > ext4_es_end(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  785) 			if (ext4_es_can_be_merged(es, newes)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  786) 				es->es_len += newes->es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  787) 				es = ext4_es_try_to_merge_right(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  788) 				goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  789) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  790) 			p = &(*p)->rb_right;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  791) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  792) 			BUG();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  793) 			return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  794) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  795) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  796) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  797) 	es = ext4_es_alloc_extent(inode, newes->es_lblk, newes->es_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  798) 				  newes->es_pblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  799) 	if (!es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  800) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  801) 	rb_link_node(&es->rb_node, parent, p);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  802) 	rb_insert_color(&es->rb_node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  803) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  804) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  805) 	tree->cache_es = es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  806) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  807) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  808) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  809) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  810)  * ext4_es_insert_extent() adds information to an inode's extent
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  811)  * status tree.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  812)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  813)  * Return 0 on success, error code on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  814)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  815) int ext4_es_insert_extent(struct inode *inode, ext4_lblk_t lblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  816) 			  ext4_lblk_t len, ext4_fsblk_t pblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  817) 			  unsigned int status)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  818) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  819) 	struct extent_status newes;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  820) 	ext4_lblk_t end = lblk + len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  821) 	int err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  822) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  823) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  824) 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  825) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  826) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  827) 	es_debug("add [%u/%u) %llu %x to extent status tree of inode %lu\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  828) 		 lblk, len, pblk, status, inode->i_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  829) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  830) 	if (!len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  831) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  832) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  833) 	BUG_ON(end < lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  834) 
	if ((status & EXTENT_STATUS_DELAYED) &&
	    (status & EXTENT_STATUS_WRITTEN)) {
		ext4_warning(inode->i_sb, "Inserting extent [%u/%u] as "
				"delayed and written which can potentially "
				"cause data loss.", lblk, len);
		WARN_ON(1);
	}
	newes.es_lblk = lblk;
	newes.es_len = len;
	ext4_es_store_pblock_status(&newes, pblk, status);
	trace_ext4_es_insert_extent(inode, &newes);

	ext4_es_insert_extent_check(inode, &newes);

	write_lock(&EXT4_I(inode)->i_es_lock);
	err = __es_remove_extent(inode, lblk, end, NULL);
	if (err != 0)
		goto error;
retry:
	err = __es_insert_extent(inode, &newes);
	if (err == -ENOMEM && __es_shrink(EXT4_SB(inode->i_sb),
					  128, EXT4_I(inode)))
		goto retry;
	if (err == -ENOMEM && !ext4_es_is_delayed(&newes))
		err = 0;

	if (sbi->s_cluster_ratio > 1 && test_opt(inode->i_sb, DELALLOC) &&
	    (status & EXTENT_STATUS_WRITTEN ||
	     status & EXTENT_STATUS_UNWRITTEN))
		__revise_pending(inode, lblk, len);

error:
	write_unlock(&EXT4_I(inode)->i_es_lock);

	ext4_es_print_tree(inode);

	return err;
}
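The closed-interval arithmetic above (`end = lblk + len - 1`, guarded by `BUG_ON(end < lblk)`) relies on unsigned wraparound to detect an invalid range. A minimal userspace sketch of the same check, with illustrative types rather than the kernel's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t lblk_t;	/* stand-in for ext4_lblk_t */

/*
 * Compute the closed-interval end of [lblk, lblk + len - 1].  Returns
 * false when the 32-bit block number wraps, the case the BUG_ON() above
 * catches.  Callers must reject len == 0 first, exactly as
 * ext4_es_insert_extent() does before computing the end.
 */
static bool es_range_end(lblk_t lblk, lblk_t len, lblk_t *end)
{
	lblk_t e = lblk + len - 1;	/* unsigned wraparound is defined */

	if (e < lblk)
		return false;		/* range wrapped: invalid */
	*end = e;
	return true;
}
```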

/*
 * ext4_es_cache_extent() inserts information into the extent status
 * tree if and only if there isn't information about the range in
 * question already.
 */
void ext4_es_cache_extent(struct inode *inode, ext4_lblk_t lblk,
			  ext4_lblk_t len, ext4_fsblk_t pblk,
			  unsigned int status)
{
	struct extent_status *es;
	struct extent_status newes;
	ext4_lblk_t end = lblk + len - 1;

	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
		return;

	newes.es_lblk = lblk;
	newes.es_len = len;
	ext4_es_store_pblock_status(&newes, pblk, status);
	trace_ext4_es_cache_extent(inode, &newes);

	if (!len)
		return;

	BUG_ON(end < lblk);

	write_lock(&EXT4_I(inode)->i_es_lock);

	es = __es_tree_search(&EXT4_I(inode)->i_es_tree.root, lblk);
	if (!es || es->es_lblk > end)
		__es_insert_extent(inode, &newes);
	write_unlock(&EXT4_I(inode)->i_es_lock);
}
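The insert-only-if-absent decision above hinges on `__es_tree_search()` returning the first extent that could overlap `lblk`; caching proceeds only when that extent (if any) starts beyond `end`. The same decision over a sorted flat array, as a sketch with hypothetical helper and type names rather than kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct es { uint32_t lblk, len; };	/* simplified extent_status */

/* True when no extent in the sorted array v[0..n) overlaps [lblk, end]. */
static bool es_range_is_uncached(const struct es *v, int n,
				 uint32_t lblk, uint32_t end)
{
	for (int i = 0; i < n; i++) {
		uint32_t es_end = v[i].lblk + v[i].len - 1;

		if (es_end < lblk)
			continue;		/* wholly left of the range */
		return v[i].lblk > end;		/* first possible overlap */
	}
	return true;				/* nothing at or past lblk */
}
```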

/*
 * ext4_es_lookup_extent() looks up an extent in the extent status tree.
 *
 * ext4_es_lookup_extent is called by ext4_map_blocks/ext4_da_map_blocks.
 *
 * Return: 1 if found, 0 if not.
 */
int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
			  ext4_lblk_t *next_lblk,
			  struct extent_status *es)
{
	struct ext4_es_tree *tree;
	struct ext4_es_stats *stats;
	struct extent_status *es1 = NULL;
	struct rb_node *node;
	int found = 0;

	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
		return 0;

	trace_ext4_es_lookup_extent_enter(inode, lblk);
	es_debug("lookup extent in block %u\n", lblk);

	tree = &EXT4_I(inode)->i_es_tree;
	read_lock(&EXT4_I(inode)->i_es_lock);
	/* first, try the most recently cached extent */
	es->es_lblk = es->es_len = es->es_pblk = 0;
	if (tree->cache_es) {
		es1 = tree->cache_es;
		if (in_range(lblk, es1->es_lblk, es1->es_len)) {
			es_debug("%u cached by [%u/%u)\n",
				 lblk, es1->es_lblk, es1->es_len);
			found = 1;
			goto out;
		}
	}
	node = tree->root.rb_node;
	while (node) {
		es1 = rb_entry(node, struct extent_status, rb_node);
		if (lblk < es1->es_lblk)
			node = node->rb_left;
		else if (lblk > ext4_es_end(es1))
			node = node->rb_right;
		else {
			found = 1;
			break;
		}
	}

out:
	stats = &EXT4_SB(inode->i_sb)->s_es_stats;
	if (found) {
		BUG_ON(!es1);
		es->es_lblk = es1->es_lblk;
		es->es_len = es1->es_len;
		es->es_pblk = es1->es_pblk;
		if (!ext4_es_is_referenced(es1))
			ext4_es_set_referenced(es1);
		percpu_counter_inc(&stats->es_stats_cache_hits);
		if (next_lblk) {
			node = rb_next(&es1->rb_node);
			if (node) {
				es1 = rb_entry(node, struct extent_status,
					       rb_node);
				*next_lblk = es1->es_lblk;
			} else
				*next_lblk = 0;
		}
	} else {
		percpu_counter_inc(&stats->es_stats_cache_misses);
	}

	read_unlock(&EXT4_I(inode)->i_es_lock);

	trace_ext4_es_lookup_extent_exit(inode, es, found);
	return found;
}
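The rb-tree walk above is a standard interval-point lookup: descend left when `lblk` precedes the extent, right when it follows, and stop when `es_lblk <= lblk <= es_end`. A sketch of the same logic as a binary search over a sorted array (illustrative names, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

struct es { uint32_t lblk, len; };	/* simplified extent_status */

/* Return the index of the extent containing lblk, or -1 on a miss. */
static int es_find(const struct es *v, int n, uint32_t lblk)
{
	int lo = 0, hi = n - 1;

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		if (lblk < v[mid].lblk)
			hi = mid - 1;			/* go left */
		else if (lblk > v[mid].lblk + v[mid].len - 1)
			lo = mid + 1;			/* go right */
		else
			return mid;			/* inside this extent */
	}
	return -1;
}
```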

struct rsvd_count {
	int ndelonly;			/* running delayed-only block/cluster count */
	bool first_do_lblk_found;	/* first delayed-only block seen yet? */
	ext4_lblk_t first_do_lblk;	/* first delayed-only block in range */
	ext4_lblk_t last_do_lblk;	/* last delayed-only block in range */
	struct extent_status *left_es;	/* extent left of the removed range */
	bool partial;			/* tracking a partial cluster? */
	ext4_lblk_t lclu;		/* logical cluster being tracked */
};

/*
 * init_rsvd - initialize reserved count data before removing block range
 *	       in file from extent status tree
 *
 * @inode - file containing range
 * @lblk - first block in range
 * @es - pointer to first extent in range
 * @rc - pointer to reserved count data
 *
 * Assumes es is not NULL
 */
static void init_rsvd(struct inode *inode, ext4_lblk_t lblk,
		      struct extent_status *es, struct rsvd_count *rc)
{
	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
	struct rb_node *node;

	rc->ndelonly = 0;

	/*
	 * for bigalloc, note the first delonly block in the range has not
	 * been found, record the extent containing the block to the left of
	 * the region to be removed, if any, and note that there's no partial
	 * cluster to track
	 */
	if (sbi->s_cluster_ratio > 1) {
		rc->first_do_lblk_found = false;
		if (lblk > es->es_lblk) {
			rc->left_es = es;
		} else {
			node = rb_prev(&es->rb_node);
			rc->left_es = node ? rb_entry(node,
						      struct extent_status,
						      rb_node) : NULL;
		}
		rc->partial = false;
	}
}

/*
 * count_rsvd - count the clusters containing delayed and not unwritten
 *		(delonly) blocks in a range within an extent and add to
 *		the running tally in rsvd_count
 *
 * @inode - file containing extent
 * @lblk - first block in range
 * @len - length of range in blocks
 * @es - pointer to extent containing clusters to be counted
 * @rc - pointer to reserved count data
 *
 * Tracks partial clusters found at the beginning and end of extents so
 * they aren't overcounted when they span adjacent extents
 */
static void count_rsvd(struct inode *inode, ext4_lblk_t lblk, long len,
		       struct extent_status *es, struct rsvd_count *rc)
{
	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
	ext4_lblk_t i, end, nclu;

	if (!ext4_es_is_delonly(es))
		return;

	WARN_ON(len <= 0);

	if (sbi->s_cluster_ratio == 1) {
		rc->ndelonly += (int) len;
		return;
	}

	/* bigalloc */

	i = (lblk < es->es_lblk) ? es->es_lblk : lblk;
	end = lblk + (ext4_lblk_t) len - 1;
	end = (end > ext4_es_end(es)) ? ext4_es_end(es) : end;

	/* record the first block of the first delonly extent seen */
	if (!rc->first_do_lblk_found) {
		rc->first_do_lblk = i;
		rc->first_do_lblk_found = true;
	}

	/* update the last lblk in the region seen so far */
	rc->last_do_lblk = end;

	/*
	 * if we're tracking a partial cluster and the current extent
	 * doesn't start with it, count it and stop tracking
	 */
	if (rc->partial && (rc->lclu != EXT4_B2C(sbi, i))) {
		rc->ndelonly++;
		rc->partial = false;
	}

	/*
	 * if the range doesn't begin on a cluster boundary but runs
	 * through the end of that first cluster, count the cluster
	 */
	if (EXT4_LBLK_COFF(sbi, i) != 0) {
		if (end >= EXT4_LBLK_CFILL(sbi, i)) {
			rc->ndelonly++;
			rc->partial = false;
			i = EXT4_LBLK_CFILL(sbi, i) + 1;
		}
	}

	/*
	 * with i now on a cluster boundary, count the number of whole
	 * delonly clusters remaining in the extent
	 */
	if ((i + sbi->s_cluster_ratio - 1) <= end) {
		nclu = (end - i + 1) >> sbi->s_cluster_bits;
		rc->ndelonly += nclu;
		i += nclu << sbi->s_cluster_bits;
	}

	/*
	 * start tracking a partial cluster if there's a partial at the end
	 * of the current extent and we're not already tracking one
	 */
	if (!rc->partial && i <= end) {
		rc->partial = true;
		rc->lclu = EXT4_B2C(sbi, i);
	}
}
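With a bigalloc cluster of 2^s_cluster_bits blocks, the macros used above reduce to plain shifts and masks. Illustrative re-implementations (not the kernel's definitions, which take an sbi argument):

```c
#include <assert.h>
#include <stdint.h>

/* block -> logical cluster number (cf. EXT4_B2C) */
static uint32_t b2c(uint32_t blk, int bits)
{
	return blk >> bits;
}

/* offset of a block within its cluster (cf. EXT4_LBLK_COFF) */
static uint32_t lblk_coff(uint32_t blk, int bits)
{
	return blk & ((1u << bits) - 1);
}

/* last block of the cluster containing blk (cf. EXT4_LBLK_CFILL) */
static uint32_t lblk_cfill(uint32_t blk, int bits)
{
	return blk | ((1u << bits) - 1);
}
```

With 16-block clusters (bits = 4), block 37 sits at offset 5 in cluster 2, whose last block is 47; and the whole-cluster count `(end - i + 1) >> bits` used above gives 3 for the aligned span [48, 100].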

/*
 * __pr_tree_search - search for a pending cluster reservation
 *
 * @root - root of pending reservation tree
 * @lclu - logical cluster to search for
 *
 * Returns the pending reservation for the cluster identified by @lclu
 * if found.  Otherwise, returns the reservation for the next cluster if
 * one exists, or NULL.
 */
static struct pending_reservation *__pr_tree_search(struct rb_root *root,
						    ext4_lblk_t lclu)
{
	struct rb_node *node = root->rb_node;
	struct pending_reservation *pr = NULL;

	while (node) {
		pr = rb_entry(node, struct pending_reservation, rb_node);
		if (lclu < pr->lclu)
			node = node->rb_left;
		else if (lclu > pr->lclu)
			node = node->rb_right;
		else
			return pr;
	}
	if (pr && lclu < pr->lclu)
		return pr;
	if (pr && lclu > pr->lclu) {
		node = rb_next(&pr->rb_node);
		return node ? rb_entry(node, struct pending_reservation,
				       rb_node) : NULL;
	}
	return NULL;
}
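__pr_tree_search() is an "equal or next" lookup: the reservation for lclu itself if present, otherwise the one for the next higher cluster. Over a sorted array this is the classic lower-bound search, sketched here with hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

/* Index of the first element >= lclu in sorted v[0..n), or -1 if none. */
static int pr_search(const uint32_t *v, int n, uint32_t lclu)
{
	int lo = 0, hi = n;

	while (lo < hi) {
		int mid = lo + (hi - lo) / 2;

		if (v[mid] < lclu)
			lo = mid + 1;	/* answer lies to the right */
		else
			hi = mid;	/* v[mid] is a candidate */
	}
	return lo < n ? lo : -1;	/* -1 stands in for NULL */
}
```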

/*
 * get_rsvd - calculates and returns the number of cluster reservations to be
 *	      released when removing a block range from the extent status tree
 *	      and releases any pending reservations within the range
 *
 * @inode - file containing block range
 * @end - last block in range
 * @right_es - pointer to extent containing next block beyond end or NULL
 * @rc - pointer to reserved count data
 *
 * The number of reservations to be released is equal to the number of
 * clusters containing delayed and not unwritten (delonly) blocks within
 * the range, minus the number of clusters still containing delonly blocks
 * at the ends of the range, and minus the number of pending reservations
 * within the range.
 */
static unsigned int get_rsvd(struct inode *inode, ext4_lblk_t end,
			     struct extent_status *right_es,
			     struct rsvd_count *rc)
{
	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
	struct pending_reservation *pr;
	struct ext4_pending_tree *tree = &EXT4_I(inode)->i_pending_tree;
	struct rb_node *node;
	ext4_lblk_t first_lclu, last_lclu;
	bool left_delonly, right_delonly, count_pending;
	struct extent_status *es;

	if (sbi->s_cluster_ratio > 1) {
		/* count any remaining partial cluster */
		if (rc->partial)
			rc->ndelonly++;

		if (rc->ndelonly == 0)
			return 0;

		first_lclu = EXT4_B2C(sbi, rc->first_do_lblk);
		last_lclu = EXT4_B2C(sbi, rc->last_do_lblk);

		/*
		 * decrease the delonly count by the number of clusters at the
		 * ends of the range that still contain delonly blocks -
		 * these clusters still need to be reserved
		 */
		left_delonly = right_delonly = false;

		es = rc->left_es;
		while (es && ext4_es_end(es) >=
		       EXT4_LBLK_CMASK(sbi, rc->first_do_lblk)) {
			if (ext4_es_is_delonly(es)) {
				rc->ndelonly--;
				left_delonly = true;
				break;
			}
			node = rb_prev(&es->rb_node);
			if (!node)
				break;
			es = rb_entry(node, struct extent_status, rb_node);
		}
		if (right_es && (!left_delonly || first_lclu != last_lclu)) {
			if (end < ext4_es_end(right_es)) {
				es = right_es;
			} else {
				node = rb_next(&right_es->rb_node);
				es = node ? rb_entry(node, struct extent_status,
						     rb_node) : NULL;
			}
			while (es && es->es_lblk <=
			       EXT4_LBLK_CFILL(sbi, rc->last_do_lblk)) {
				if (ext4_es_is_delonly(es)) {
					rc->ndelonly--;
					right_delonly = true;
					break;
				}
				node = rb_next(&es->rb_node);
				if (!node)
					break;
				es = rb_entry(node, struct extent_status,
					      rb_node);
			}
		}

		/*
		 * Determine the block range that should be searched for
		 * pending reservations, if any.  Clusters on the ends of the
		 * original removed range containing delonly blocks are
		 * excluded.  They've already been accounted for and it's not
		 * possible to determine if an associated pending reservation
		 * should be released with the information available in the
		 * extents status tree.
		 */
		if (first_lclu == last_lclu) {
			if (left_delonly | right_delonly)
				count_pending = false;
			else
				count_pending = true;
		} else {
			if (left_delonly)
				first_lclu++;
			if (right_delonly)
				last_lclu--;
			if (first_lclu <= last_lclu)
				count_pending = true;
			else
				count_pending = false;
		}

		/*
		 * a pending reservation found between first_lclu and last_lclu
		 * represents an allocated cluster that contained at least one
		 * delonly block, so the delonly total must be reduced by one
		 * for each pending reservation found and released
		 */
		if (count_pending) {
			pr = __pr_tree_search(&tree->root, first_lclu);
			while (pr && pr->lclu <= last_lclu) {
				rc->ndelonly--;
				node = rb_next(&pr->rb_node);
				rb_erase(&pr->rb_node, &tree->root);
				kmem_cache_free(ext4_pending_cachep, pr);
				if (!node)
					break;
				pr = rb_entry(node, struct pending_reservation,
					      rb_node);
			}
		}
	}
	return rc->ndelonly;
}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1288) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1289) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1290) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1291)  * __es_remove_extent - removes block range from extent status tree
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1292)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1293)  * @inode - file containing range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1294)  * @lblk - first block in range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1295)  * @end - last block in range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1296)  * @reserved - number of cluster reservations released
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1297)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1298)  * If @reserved is not NULL and delayed allocation is enabled, counts
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1299)  * the block/cluster reservations freed by removing the range and, if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1300)  * bigalloc is enabled, cancels pending reservations as needed. Returns
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1301)  * 0 on success, an error code on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1302)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1303) static int __es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1304) 			      ext4_lblk_t end, int *reserved)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1305) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1306) 	struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1307) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1308) 	struct extent_status *es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1309) 	struct extent_status orig_es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1310) 	ext4_lblk_t len1, len2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1311) 	ext4_fsblk_t block;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1312) 	int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1313) 	bool count_reserved = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1314) 	struct rsvd_count rc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1315) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1316) 	if (reserved == NULL || !test_opt(inode->i_sb, DELALLOC))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1317) 		count_reserved = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1318) retry:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1319) 	err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1320) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1321) 	es = __es_tree_search(&tree->root, lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1322) 	if (!es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1323) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1324) 	if (es->es_lblk > end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1325) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1326) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1327) 	/* Simply invalidate cache_es. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1328) 	tree->cache_es = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1329) 	if (count_reserved)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1330) 		init_rsvd(inode, lblk, es, &rc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1331) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1332) 	orig_es.es_lblk = es->es_lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1333) 	orig_es.es_len = es->es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1334) 	orig_es.es_pblk = es->es_pblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1335) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1336) 	len1 = lblk > es->es_lblk ? lblk - es->es_lblk : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1337) 	len2 = ext4_es_end(es) > end ? ext4_es_end(es) - end : 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1338) 	if (len1 > 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1339) 		es->es_len = len1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1340) 	if (len2 > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1341) 		if (len1 > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1342) 			struct extent_status newes;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1343) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1344) 			newes.es_lblk = end + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1345) 			newes.es_len = len2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1346) 			block = 0x7FDEADBEEFULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1347) 			if (ext4_es_is_written(&orig_es) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1348) 			    ext4_es_is_unwritten(&orig_es))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1349) 				block = ext4_es_pblock(&orig_es) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1350) 					orig_es.es_len - len2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1351) 			ext4_es_store_pblock_status(&newes, block,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1352) 						    ext4_es_status(&orig_es));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1353) 			err = __es_insert_extent(inode, &newes);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1354) 			if (err) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1355) 				es->es_lblk = orig_es.es_lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1356) 				es->es_len = orig_es.es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1357) 				if ((err == -ENOMEM) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1358) 				    __es_shrink(EXT4_SB(inode->i_sb),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1359) 							128, EXT4_I(inode)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1360) 					goto retry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1361) 				goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1362) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1363) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1364) 			es->es_lblk = end + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1365) 			es->es_len = len2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1366) 			if (ext4_es_is_written(es) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1367) 			    ext4_es_is_unwritten(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1368) 				block = orig_es.es_pblk + orig_es.es_len - len2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1369) 				ext4_es_store_pblock(es, block);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1370) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1371) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1372) 		if (count_reserved)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1373) 			count_rsvd(inode, lblk, orig_es.es_len - len1 - len2,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1374) 				   &orig_es, &rc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1375) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1376) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1377) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1378) 	if (len1 > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1379) 		if (count_reserved)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1380) 			count_rsvd(inode, lblk, orig_es.es_len - len1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1381) 				   &orig_es, &rc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1382) 		node = rb_next(&es->rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1383) 		if (node)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1384) 			es = rb_entry(node, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1385) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1386) 			es = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1387) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1388) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1389) 	while (es && ext4_es_end(es) <= end) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1390) 		if (count_reserved)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1391) 			count_rsvd(inode, es->es_lblk, es->es_len, es, &rc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1392) 		node = rb_next(&es->rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1393) 		rb_erase(&es->rb_node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1394) 		ext4_es_free_extent(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1395) 		if (!node) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1396) 			es = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1397) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1398) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1399) 		es = rb_entry(node, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1400) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1401) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1402) 	if (es && es->es_lblk < end + 1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1403) 		ext4_lblk_t orig_len = es->es_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1404) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1405) 		len1 = ext4_es_end(es) - end;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1406) 		if (count_reserved)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1407) 			count_rsvd(inode, es->es_lblk, orig_len - len1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1408) 				   es, &rc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1409) 		es->es_lblk = end + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1410) 		es->es_len = len1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1411) 		if (ext4_es_is_written(es) || ext4_es_is_unwritten(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1412) 			block = es->es_pblk + orig_len - len1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1413) 			ext4_es_store_pblock(es, block);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1414) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1415) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1416) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1417) 	if (count_reserved)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1418) 		*reserved = get_rsvd(inode, end, es, &rc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1419) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1420) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1421) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1422) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1423) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1424)  * ext4_es_remove_extent - removes block range from extent status tree
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1425)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1426)  * @inode - file containing range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1427)  * @lblk - first block in range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1428)  * @len - number of blocks to remove
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1429)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1430)  * Reduces the block/cluster reservation count and, for bigalloc, cancels
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1431)  * pending reservations as needed. Returns 0 on success or an error code.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1432)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1433) int ext4_es_remove_extent(struct inode *inode, ext4_lblk_t lblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1434) 			  ext4_lblk_t len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1435) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1436) 	ext4_lblk_t end;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1437) 	int err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1438) 	int reserved = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1439) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1440) 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1441) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1442) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1443) 	trace_ext4_es_remove_extent(inode, lblk, len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1444) 	es_debug("remove [%u/%u) from extent status tree of inode %lu\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1445) 		 lblk, len, inode->i_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1446) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1447) 	if (!len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1448) 		return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1449) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1450) 	end = lblk + len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1451) 	BUG_ON(end < lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1452) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1453) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1454) 	 * ext4_clear_inode() depends on us taking i_es_lock unconditionally
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1455) 	 * so that we are sure __es_shrink() is done with the inode before it
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1456) 	 * is reclaimed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1457) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1458) 	write_lock(&EXT4_I(inode)->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1459) 	err = __es_remove_extent(inode, lblk, end, &reserved);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1460) 	write_unlock(&EXT4_I(inode)->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1461) 	ext4_es_print_tree(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1462) 	ext4_da_release_space(inode, reserved);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1463) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1464) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1465) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1466) static int __es_shrink(struct ext4_sb_info *sbi, int nr_to_scan,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1467) 		       struct ext4_inode_info *locked_ei)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1468) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1469) 	struct ext4_inode_info *ei;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1470) 	struct ext4_es_stats *es_stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1471) 	ktime_t start_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1472) 	u64 scan_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1473) 	int nr_to_walk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1474) 	int nr_shrunk = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1475) 	int retried = 0, nr_skipped = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1476) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1477) 	es_stats = &sbi->s_es_stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1478) 	start_time = ktime_get();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1479) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1480) retry:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1481) 	spin_lock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1482) 	nr_to_walk = sbi->s_es_nr_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1483) 	while (nr_to_walk-- > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1484) 		if (list_empty(&sbi->s_es_list)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1485) 			spin_unlock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1486) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1487) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1488) 		ei = list_first_entry(&sbi->s_es_list, struct ext4_inode_info,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1489) 				      i_es_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1490) 		/* Move the inode to the tail */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1491) 		list_move_tail(&ei->i_es_list, &sbi->s_es_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1492) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1493) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1494) 		 * Normally we try hard to avoid shrinking precached inodes,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1495) 		 * but we will as a last resort.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1496) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1497) 		if (!retried && ext4_test_inode_state(&ei->vfs_inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1498) 						EXT4_STATE_EXT_PRECACHED)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1499) 			nr_skipped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1500) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1501) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1502) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1503) 		if (ei == locked_ei || !write_trylock(&ei->i_es_lock)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1504) 			nr_skipped++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1505) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1506) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1507) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1508) 		 * Now we hold i_es_lock, which protects us from inode
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1509) 		 * reclaim freeing the inode under us.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1510) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1511) 		spin_unlock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1512) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1513) 		nr_shrunk += es_reclaim_extents(ei, &nr_to_scan);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1514) 		write_unlock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1515) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1516) 		if (nr_to_scan <= 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1517) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1518) 		spin_lock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1519) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1520) 	spin_unlock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1521) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1522) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1523) 	 * If we skipped any inodes, and we weren't able to make any
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1524) 	 * forward progress, try again to scan precached inodes.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1525) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1526) 	if ((nr_shrunk == 0) && nr_skipped && !retried) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1527) 		retried++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1528) 		goto retry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1529) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1530) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1531) 	if (locked_ei && nr_shrunk == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1532) 		nr_shrunk = es_reclaim_extents(locked_ei, &nr_to_scan);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1533) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1534) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1535) 	scan_time = ktime_to_ns(ktime_sub(ktime_get(), start_time));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1536) 	if (likely(es_stats->es_stats_scan_time))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1537) 		es_stats->es_stats_scan_time = (scan_time +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1538) 				es_stats->es_stats_scan_time*3) / 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1539) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1540) 		es_stats->es_stats_scan_time = scan_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1541) 	if (scan_time > es_stats->es_stats_max_scan_time)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1542) 		es_stats->es_stats_max_scan_time = scan_time;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1543) 	if (likely(es_stats->es_stats_shrunk))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1544) 		es_stats->es_stats_shrunk = (nr_shrunk +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1545) 				es_stats->es_stats_shrunk*3) / 4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1546) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1547) 		es_stats->es_stats_shrunk = nr_shrunk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1548) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1549) 	trace_ext4_es_shrink(sbi->s_sb, nr_shrunk, scan_time,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1550) 			     nr_skipped, retried);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1551) 	return nr_shrunk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1552) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1553) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1554) static unsigned long ext4_es_count(struct shrinker *shrink,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1555) 				   struct shrink_control *sc)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1556) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1557) 	unsigned long nr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1558) 	struct ext4_sb_info *sbi;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1559) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1560) 	sbi = container_of(shrink, struct ext4_sb_info, s_es_shrinker);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1561) 	nr = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1562) 	trace_ext4_es_shrink_count(sbi->s_sb, sc->nr_to_scan, nr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1563) 	return nr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1564) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1565) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1566) static unsigned long ext4_es_scan(struct shrinker *shrink,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1567) 				  struct shrink_control *sc)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1568) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1569) 	struct ext4_sb_info *sbi = container_of(shrink,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1570) 					struct ext4_sb_info, s_es_shrinker);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1571) 	int nr_to_scan = sc->nr_to_scan;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1572) 	int ret, nr_shrunk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1573) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1574) 	ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1575) 	trace_ext4_es_shrink_scan_enter(sbi->s_sb, nr_to_scan, ret);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1576) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1577) 	nr_shrunk = __es_shrink(sbi, nr_to_scan, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1578) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1579) 	ret = percpu_counter_read_positive(&sbi->s_es_stats.es_stats_shk_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1580) 	trace_ext4_es_shrink_scan_exit(sbi->s_sb, nr_shrunk, ret);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1581) 	return nr_shrunk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1582) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1583) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1584) int ext4_seq_es_shrinker_info_show(struct seq_file *seq, void *v)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1585) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1586) 	struct ext4_sb_info *sbi = EXT4_SB((struct super_block *) seq->private);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1587) 	struct ext4_es_stats *es_stats = &sbi->s_es_stats;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1588) 	struct ext4_inode_info *ei, *max = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1589) 	unsigned int inode_cnt = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1590) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1591) 	if (v != SEQ_START_TOKEN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1592) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1593) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1594) 	/* Find the inode with the maximum number of cached objects. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1595) 	spin_lock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1596) 	list_for_each_entry(ei, &sbi->s_es_list, i_es_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1597) 		inode_cnt++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1598) 		if (max && max->i_es_all_nr < ei->i_es_all_nr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1599) 			max = ei;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1600) 		else if (!max)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1601) 			max = ei;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1602) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1603) 	spin_unlock(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1604) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1605) 	seq_printf(seq, "stats:\n  %lld objects\n  %lld reclaimable objects\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1606) 		   percpu_counter_sum_positive(&es_stats->es_stats_all_cnt),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1607) 		   percpu_counter_sum_positive(&es_stats->es_stats_shk_cnt));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1608) 	seq_printf(seq, "  %lld/%lld cache hits/misses\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1609) 		   percpu_counter_sum_positive(&es_stats->es_stats_cache_hits),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1610) 		   percpu_counter_sum_positive(&es_stats->es_stats_cache_misses));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1611) 	if (inode_cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1612) 		seq_printf(seq, "  %d inodes on list\n", inode_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1613) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1614) 	seq_printf(seq, "average:\n  %llu us scan time\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1615) 	    div_u64(es_stats->es_stats_scan_time, 1000));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1616) 	seq_printf(seq, "  %lu shrunk objects\n", es_stats->es_stats_shrunk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1617) 	if (inode_cnt)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1618) 		seq_printf(seq,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1619) 		    "maximum:\n  %lu inode (%u objects, %u reclaimable)\n"
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1620) 		    "  %llu us max scan time\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1621) 		    max->vfs_inode.i_ino, max->i_es_all_nr, max->i_es_shk_nr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1622) 		    div_u64(es_stats->es_stats_max_scan_time, 1000));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1623) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1624) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1625) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1626) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1627) int ext4_es_register_shrinker(struct ext4_sb_info *sbi)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1628) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1629) 	int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1630) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1631) 	/* Make sure we have enough bits for physical block number */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1632) 	BUILD_BUG_ON(ES_SHIFT < 48);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1633) 	INIT_LIST_HEAD(&sbi->s_es_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1634) 	sbi->s_es_nr_inode = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1635) 	spin_lock_init(&sbi->s_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1636) 	sbi->s_es_stats.es_stats_shrunk = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1637) 	err = percpu_counter_init(&sbi->s_es_stats.es_stats_cache_hits, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1638) 				  GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1639) 	if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1640) 		return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1641) 	err = percpu_counter_init(&sbi->s_es_stats.es_stats_cache_misses, 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1642) 				  GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1643) 	if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1644) 		goto err1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1645) 	sbi->s_es_stats.es_stats_scan_time = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1646) 	sbi->s_es_stats.es_stats_max_scan_time = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1647) 	err = percpu_counter_init(&sbi->s_es_stats.es_stats_all_cnt, 0, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1648) 	if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1649) 		goto err2;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1650) 	err = percpu_counter_init(&sbi->s_es_stats.es_stats_shk_cnt, 0, GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1651) 	if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1652) 		goto err3;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1653) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1654) 	sbi->s_es_shrinker.scan_objects = ext4_es_scan;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1655) 	sbi->s_es_shrinker.count_objects = ext4_es_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1656) 	sbi->s_es_shrinker.seeks = DEFAULT_SEEKS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1657) 	err = register_shrinker(&sbi->s_es_shrinker);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1658) 	if (err)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1659) 		goto err4;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1660) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1661) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1662) err4:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1663) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_shk_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1664) err3:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1665) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_all_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1666) err2:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1667) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_cache_misses);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1668) err1:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1669) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_cache_hits);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1670) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1671) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1672) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1673) void ext4_es_unregister_shrinker(struct ext4_sb_info *sbi)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1674) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1675) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_cache_hits);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1676) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_cache_misses);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1677) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_all_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1678) 	percpu_counter_destroy(&sbi->s_es_stats.es_stats_shk_cnt);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1679) 	unregister_shrinker(&sbi->s_es_shrinker);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1680) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1681) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1682) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1683)  * Shrink extents in given inode from ei->i_es_shrink_lblk till end. Scan at
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1684)  * most *nr_to_scan extents, update *nr_to_scan accordingly.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1685)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1686)  * Return 0 if we hit end of tree / interval, 1 if we exhausted nr_to_scan.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1687)  * Increment *nr_shrunk by the number of reclaimed extents. Also update
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1688)  * ei->i_es_shrink_lblk to where we should continue scanning.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1689)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1690) static int es_do_reclaim_extents(struct ext4_inode_info *ei, ext4_lblk_t end,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1691) 				 int *nr_to_scan, int *nr_shrunk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1692) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1693) 	struct inode *inode = &ei->vfs_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1694) 	struct ext4_es_tree *tree = &ei->i_es_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1695) 	struct extent_status *es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1696) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1697) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1698) 	es = __es_tree_search(&tree->root, ei->i_es_shrink_lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1699) 	if (!es)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1700) 		goto out_wrap;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1701) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1702) 	while (*nr_to_scan > 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1703) 		if (es->es_lblk > end) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1704) 			ei->i_es_shrink_lblk = end + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1705) 			return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1706) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1707) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1708) 		(*nr_to_scan)--;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1709) 		node = rb_next(&es->rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1710) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1711) 		 * We can't reclaim delayed extents from the status tree
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1712) 		 * because fiemap, bigalloc, and seek_data/hole need them.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1713) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1714) 		if (ext4_es_is_delayed(es))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1715) 			goto next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1716) 		if (ext4_es_is_referenced(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1717) 			ext4_es_clear_referenced(es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1718) 			goto next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1719) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1720) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1721) 		rb_erase(&es->rb_node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1722) 		ext4_es_free_extent(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1723) 		(*nr_shrunk)++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1724) next:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1725) 		if (!node)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1726) 			goto out_wrap;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1727) 		es = rb_entry(node, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1728) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1729) 	ei->i_es_shrink_lblk = es->es_lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1730) 	return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1731) out_wrap:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1732) 	ei->i_es_shrink_lblk = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1733) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1734) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1735) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1736) static int es_reclaim_extents(struct ext4_inode_info *ei, int *nr_to_scan)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1737) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1738) 	struct inode *inode = &ei->vfs_inode;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1739) 	int nr_shrunk = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1740) 	ext4_lblk_t start = ei->i_es_shrink_lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1741) 	static DEFINE_RATELIMIT_STATE(_rs, DEFAULT_RATELIMIT_INTERVAL,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1742) 				      DEFAULT_RATELIMIT_BURST);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1743) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1744) 	if (ei->i_es_shk_nr == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1745) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1746) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1747) 	if (ext4_test_inode_state(inode, EXT4_STATE_EXT_PRECACHED) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1748) 	    __ratelimit(&_rs))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1749) 		ext4_warning(inode->i_sb, "forced shrink of precached extents");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1750) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1751) 	if (!es_do_reclaim_extents(ei, EXT_MAX_BLOCKS, nr_to_scan, &nr_shrunk) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1752) 	    start != 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1753) 		es_do_reclaim_extents(ei, start - 1, nr_to_scan, &nr_shrunk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1754) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1755) 	ei->i_es_tree.cache_es = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1756) 	return nr_shrunk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1757) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1758) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1759) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1760)  * Called to support EXT4_IOC_CLEAR_ES_CACHE.  We can only remove
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1761)  * discretionary entries from the extent status cache.  (Some entries
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1762)  * must be present for proper operations.)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1763)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1764) void ext4_clear_inode_es(struct inode *inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1765) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1766) 	struct ext4_inode_info *ei = EXT4_I(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1767) 	struct extent_status *es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1768) 	struct ext4_es_tree *tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1769) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1770) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1771) 	write_lock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1772) 	tree = &EXT4_I(inode)->i_es_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1773) 	tree->cache_es = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1774) 	node = rb_first(&tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1775) 	while (node) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1776) 		es = rb_entry(node, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1777) 		node = rb_next(node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1778) 		if (!ext4_es_is_delayed(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1779) 			rb_erase(&es->rb_node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1780) 			ext4_es_free_extent(inode, es);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1781) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1782) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1783) 	ext4_clear_inode_state(inode, EXT4_STATE_EXT_PRECACHED);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1784) 	write_unlock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1785) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1786) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1787) #ifdef ES_DEBUG__
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1788) static void ext4_print_pending_tree(struct inode *inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1789) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1790) 	struct ext4_pending_tree *tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1791) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1792) 	struct pending_reservation *pr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1793) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1794) 	printk(KERN_DEBUG "pending reservations for inode %lu:", inode->i_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1795) 	tree = &EXT4_I(inode)->i_pending_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1796) 	node = rb_first(&tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1797) 	while (node) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1798) 		pr = rb_entry(node, struct pending_reservation, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1799) 		printk(KERN_DEBUG " %u", pr->lclu);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1800) 		node = rb_next(node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1801) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1802) 	printk(KERN_DEBUG "\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1803) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1804) #else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1805) #define ext4_print_pending_tree(inode)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1806) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1807) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1808) int __init ext4_init_pending(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1809) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1810) 	ext4_pending_cachep = kmem_cache_create("ext4_pending_reservation",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1811) 					   sizeof(struct pending_reservation),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1812) 					   0, (SLAB_RECLAIM_ACCOUNT), NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1813) 	if (ext4_pending_cachep == NULL)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1814) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1815) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1816) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1817) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1818) void ext4_exit_pending(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1819) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1820) 	kmem_cache_destroy(ext4_pending_cachep);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1821) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1822) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1823) void ext4_init_pending_tree(struct ext4_pending_tree *tree)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1824) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1825) 	tree->root = RB_ROOT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1826) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1827) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1828) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1829)  * __get_pending - retrieve a pointer to a pending reservation
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1830)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1831)  * @inode - file containing the pending cluster reservation
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1832)  * @lclu - logical cluster of interest
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1833)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1834)  * Returns a pointer to a pending reservation if it's a member of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1835)  * the set, and NULL if not.  Must be called holding i_es_lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1836)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1837) static struct pending_reservation *__get_pending(struct inode *inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1838) 						 ext4_lblk_t lclu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1839) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1840) 	struct ext4_pending_tree *tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1841) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1842) 	struct pending_reservation *pr = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1843) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1844) 	tree = &EXT4_I(inode)->i_pending_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1845) 	node = (&tree->root)->rb_node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1846) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1847) 	while (node) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1848) 		pr = rb_entry(node, struct pending_reservation, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1849) 		if (lclu < pr->lclu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1850) 			node = node->rb_left;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1851) 		else if (lclu > pr->lclu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1852) 			node = node->rb_right;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1853) 		else if (lclu == pr->lclu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1854) 			return pr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1855) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1856) 	return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1857) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1858) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1859) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1860)  * __insert_pending - adds a pending cluster reservation to the set of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1861)  *                    pending reservations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1862)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1863)  * @inode - file containing the cluster
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1864)  * @lblk - logical block in the cluster to be added
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1865)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1866)  * Returns 0 on successful insertion and -ENOMEM on failure.  If the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1867)  * pending reservation is already in the set, returns successfully.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1868)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1869) static int __insert_pending(struct inode *inode, ext4_lblk_t lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1870) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1871) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1872) 	struct ext4_pending_tree *tree = &EXT4_I(inode)->i_pending_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1873) 	struct rb_node **p = &tree->root.rb_node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1874) 	struct rb_node *parent = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1875) 	struct pending_reservation *pr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1876) 	ext4_lblk_t lclu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1877) 	int ret = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1878) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1879) 	lclu = EXT4_B2C(sbi, lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1880) 	/* search to find parent for insertion */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1881) 	while (*p) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1882) 		parent = *p;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1883) 		pr = rb_entry(parent, struct pending_reservation, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1884) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1885) 		if (lclu < pr->lclu) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1886) 			p = &(*p)->rb_left;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1887) 		} else if (lclu > pr->lclu) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1888) 			p = &(*p)->rb_right;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1889) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1890) 			/* pending reservation already inserted */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1891) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1892) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1893) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1894) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1895) 	pr = kmem_cache_alloc(ext4_pending_cachep, GFP_ATOMIC);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1896) 	if (pr == NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1897) 		ret = -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1898) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1899) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1900) 	pr->lclu = lclu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1901) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1902) 	rb_link_node(&pr->rb_node, parent, p);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1903) 	rb_insert_color(&pr->rb_node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1904) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1905) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1906) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1907) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1908) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1909) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1910)  * __remove_pending - removes a pending cluster reservation from the set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1911)  *                    of pending reservations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1912)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1913)  * @inode - file containing the cluster
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1914)  * @lblk - logical block in the pending cluster reservation to be removed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1915)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1916)  * Does nothing if the pending reservation is not a member of the set.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1917)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1918) static void __remove_pending(struct inode *inode, ext4_lblk_t lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1919) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1920) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1921) 	struct pending_reservation *pr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1922) 	struct ext4_pending_tree *tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1923) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1924) 	pr = __get_pending(inode, EXT4_B2C(sbi, lblk));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1925) 	if (pr != NULL) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1926) 		tree = &EXT4_I(inode)->i_pending_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1927) 		rb_erase(&pr->rb_node, &tree->root);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1928) 		kmem_cache_free(ext4_pending_cachep, pr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1929) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1930) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1931) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1932) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1933)  * ext4_remove_pending - removes a pending cluster reservation from the set
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1934)  *                       of pending reservations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1935)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1936)  * @inode - file containing the cluster
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1937)  * @lblk - logical block in the pending cluster reservation to be removed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1938)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1939)  * Locking for external use of __remove_pending.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1940)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1941) void ext4_remove_pending(struct inode *inode, ext4_lblk_t lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1942) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1943) 	struct ext4_inode_info *ei = EXT4_I(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1944) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1945) 	write_lock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1946) 	__remove_pending(inode, lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1947) 	write_unlock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1948) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1949) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1950) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1951)  * ext4_is_pending - determine whether a cluster has a pending reservation
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1952)  *                   on it
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1953)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1954)  * @inode - file containing the cluster
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1955)  * @lblk - logical block in the cluster
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1956)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1957)  * Returns true if there's a pending reservation for the cluster in the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1958)  * set of pending reservations, and false if not.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1959)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1960) bool ext4_is_pending(struct inode *inode, ext4_lblk_t lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1961) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1962) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1963) 	struct ext4_inode_info *ei = EXT4_I(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1964) 	bool ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1965) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1966) 	read_lock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1967) 	ret = (bool)(__get_pending(inode, EXT4_B2C(sbi, lblk)) != NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1968) 	read_unlock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1969) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1970) 	return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1971) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1972) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1973) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1974)  * ext4_es_insert_delayed_block - adds a delayed block to the extents status
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1975)  *                                tree, adding a pending reservation where
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1976)  *                                needed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1977)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1978)  * @inode - file containing the newly added block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1979)  * @lblk - logical block to be added
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1980)  * @allocated - indicates whether a physical cluster has been allocated for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1981)  *              the logical cluster that contains the block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1982)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1983)  * Returns 0 on success, negative error code on failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1984)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1985) int ext4_es_insert_delayed_block(struct inode *inode, ext4_lblk_t lblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1986) 				 bool allocated)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1987) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1988) 	struct extent_status newes;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1989) 	int err = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1990) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1991) 	if (EXT4_SB(inode->i_sb)->s_mount_state & EXT4_FC_REPLAY)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1992) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1993) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1994) 	es_debug("add [%u/1) delayed to extent status tree of inode %lu\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1995) 		 lblk, inode->i_ino);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1996) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1997) 	newes.es_lblk = lblk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1998) 	newes.es_len = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1999) 	ext4_es_store_pblock_status(&newes, ~0, EXTENT_STATUS_DELAYED);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2000) 	trace_ext4_es_insert_delayed_block(inode, &newes, allocated);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2001) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2002) 	ext4_es_insert_extent_check(inode, &newes);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2003) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2004) 	write_lock(&EXT4_I(inode)->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2005) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2006) 	err = __es_remove_extent(inode, lblk, lblk, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2007) 	if (err != 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2008) 		goto error;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2009) retry:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2010) 	err = __es_insert_extent(inode, &newes);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2011) 	if (err == -ENOMEM && __es_shrink(EXT4_SB(inode->i_sb),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2012) 					  128, EXT4_I(inode)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2013) 		goto retry;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2014) 	if (err != 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2015) 		goto error;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2016) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2017) 	if (allocated)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2018) 		__insert_pending(inode, lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2019) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2020) error:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2021) 	write_unlock(&EXT4_I(inode)->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2022) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2023) 	ext4_es_print_tree(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2024) 	ext4_print_pending_tree(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2025) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2026) 	return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2027) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2028) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2029) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2030)  * __es_delayed_clu - count number of clusters containing blocks that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2031)  *                    are delayed only
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2032)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2033)  * @inode - file containing block range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2034)  * @start - logical block defining start of range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2035)  * @end - logical block defining end of range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2036)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2037)  * Returns the number of clusters containing only delayed (not delayed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2038)  * and unwritten) blocks in the range specified by @start and @end.  Any
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2039)  * cluster or part of a cluster within the range and containing a delayed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2040)  * and not unwritten block within the range is counted as a whole cluster.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2041)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2042) static unsigned int __es_delayed_clu(struct inode *inode, ext4_lblk_t start,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2043) 				     ext4_lblk_t end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2044) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2045) 	struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2046) 	struct extent_status *es;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2047) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2048) 	struct rb_node *node;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2049) 	ext4_lblk_t first_lclu, last_lclu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2050) 	unsigned long long last_counted_lclu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2051) 	unsigned int n = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2052) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2053) 	/* guaranteed to be unequal to any ext4_lblk_t value */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2054) 	last_counted_lclu = ~0ULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2055) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2056) 	es = __es_tree_search(&tree->root, start);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2057) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2058) 	while (es && (es->es_lblk <= end)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2059) 		if (ext4_es_is_delonly(es)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2060) 			if (es->es_lblk <= start)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2061) 				first_lclu = EXT4_B2C(sbi, start);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2062) 			else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2063) 				first_lclu = EXT4_B2C(sbi, es->es_lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2064) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2065) 			if (ext4_es_end(es) >= end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2066) 				last_lclu = EXT4_B2C(sbi, end);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2067) 			else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2068) 				last_lclu = EXT4_B2C(sbi, ext4_es_end(es));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2069) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2070) 			if (first_lclu == last_counted_lclu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2071) 				n += last_lclu - first_lclu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2072) 			else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2073) 				n += last_lclu - first_lclu + 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2074) 			last_counted_lclu = last_lclu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2075) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2076) 		node = rb_next(&es->rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2077) 		if (!node)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2078) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2079) 		es = rb_entry(node, struct extent_status, rb_node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2080) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2081) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2082) 	return n;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2083) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2084) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2085) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2086)  * ext4_es_delayed_clu - count number of clusters containing blocks that
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2087)  *                       are delayed and not unwritten
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2088)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2089)  * @inode - file containing block range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2090)  * @lblk - logical block defining start of range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2091)  * @len - number of blocks in range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2092)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2093)  * Locking for external use of __es_delayed_clu().
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2094)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2095) unsigned int ext4_es_delayed_clu(struct inode *inode, ext4_lblk_t lblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2096) 				 ext4_lblk_t len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2097) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2098) 	struct ext4_inode_info *ei = EXT4_I(inode);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2099) 	ext4_lblk_t end;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2100) 	unsigned int n;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2101) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2102) 	if (len == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2103) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2104) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2105) 	end = lblk + len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2106) 	WARN_ON(end < lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2107) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2108) 	read_lock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2109) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2110) 	n = __es_delayed_clu(inode, lblk, end);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2111) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2112) 	read_unlock(&ei->i_es_lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2113) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2114) 	return n;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2115) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2116) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2117) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2118)  * __revise_pending - makes, cancels, or leaves unchanged pending cluster
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2119)  *                    reservations for a specified block range depending
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2120)  *                    upon the presence or absence of delayed blocks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2121)  *                    outside the range within clusters at the ends of the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2122)  *                    range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2123)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2124)  * @inode - file containing the range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2125)  * @lblk - logical block defining the start of range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2126)  * @len  - length of range in blocks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2127)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2128)  * Used after a newly allocated extent is added to the extents status tree.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2129)  * Requires that the extents in the range have either written or unwritten
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2130)  * status.  Must be called while holding i_es_lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2131)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2132) static void __revise_pending(struct inode *inode, ext4_lblk_t lblk,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2133) 			     ext4_lblk_t len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2134) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2135) 	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2136) 	ext4_lblk_t end = lblk + len - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2137) 	ext4_lblk_t first, last;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2138) 	bool f_del = false, l_del = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2139) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2140) 	if (len == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2141) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2142) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2143) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2144) 	 * Two cases - block range within single cluster and block range
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2145) 	 * spanning two or more clusters.  Note that a cluster belonging
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2146) 	 * to a range starting and/or ending on a cluster boundary is treated
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2147) 	 * as if it does not contain a delayed extent.  The new range may
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2148) 	 * have allocated space for previously delayed blocks out to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2149) 	 * cluster boundary, requiring that any pre-existing pending
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2150) 	 * reservation be canceled.  Because this code only looks at blocks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2151) 	 * outside the range, it should revise pending reservations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2152) 	 * correctly even if the extent represented by the range can't be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2153) 	 * inserted in the extents status tree due to ENOSPC.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2154) 	 */
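	/*
	 * Illustrative example (editorial addition, not part of the
	 * original source): on a bigalloc file system with
	 * sbi->s_cluster_ratio == 4, a call with lblk == 6 and len == 3
	 * gives end == 8.  EXT4_B2C() maps block 6 to cluster 1 and
	 * block 8 to cluster 2, so the range spans two clusters and the
	 * second branch below runs: first == EXT4_LBLK_CMASK(sbi, 6)
	 * == 4, so blocks [4, 5] are scanned for delayed-only extents
	 * to decide the pending reservation for the first cluster, and
	 * last == EXT4_LBLK_CMASK(sbi, 8) + 4 - 1 == 11, so blocks
	 * [9, 11] are scanned for the last cluster.
	 */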
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2155) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2156) 	if (EXT4_B2C(sbi, lblk) == EXT4_B2C(sbi, end)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2157) 		first = EXT4_LBLK_CMASK(sbi, lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2158) 		if (first != lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2159) 			f_del = __es_scan_range(inode, &ext4_es_is_delonly,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2160) 						first, lblk - 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2161) 		if (f_del) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2162) 			__insert_pending(inode, first);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2163) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2164) 			last = EXT4_LBLK_CMASK(sbi, end) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2165) 			       sbi->s_cluster_ratio - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2166) 			if (last != end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2167) 				l_del = __es_scan_range(inode,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2168) 							&ext4_es_is_delonly,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2169) 							end + 1, last);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2170) 			if (l_del)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2171) 				__insert_pending(inode, last);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2172) 			else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2173) 				__remove_pending(inode, last);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2174) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2175) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2176) 		first = EXT4_LBLK_CMASK(sbi, lblk);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2177) 		if (first != lblk)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2178) 			f_del = __es_scan_range(inode, &ext4_es_is_delonly,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2179) 						first, lblk - 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2180) 		if (f_del)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2181) 			__insert_pending(inode, first);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2182) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2183) 			__remove_pending(inode, first);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2184) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2185) 		last = EXT4_LBLK_CMASK(sbi, end) + sbi->s_cluster_ratio - 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2186) 		if (last != end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2187) 			l_del = __es_scan_range(inode, &ext4_es_is_delonly,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2188) 						end + 1, last);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2189) 		if (l_del)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2190) 			__insert_pending(inode, last);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2191) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2192) 			__remove_pending(inode, last);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2193) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2194) }