Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0
/*
 * SLOB Allocator: Simple List Of Blocks
 *
 * Matt Mackall <mpm@selenic.com> 12/30/03
 *
 * NUMA support by Paul Mundt, 2007.
 *
 * How SLOB works:
 *
 * The core of SLOB is a traditional K&R style heap allocator, with
 * support for returning aligned objects. The granularity of this
 * allocator is as little as 2 bytes, though most architectures will
 * typically require 4 bytes on 32-bit and 8 bytes on 64-bit.
 *
 * The slob heap is a set of linked lists of pages from alloc_pages(),
 * and within each page, there is a singly-linked list of free blocks
 * (slob_t). The heap is grown on demand. To reduce fragmentation,
 * heap pages are segregated into three lists, with objects less than
 * 256 bytes, objects less than 1024 bytes, and all other objects.
 *
 * Allocation from the heap involves first searching for a page with
 * sufficient free blocks (using a next-fit-like approach) followed by
 * a first-fit scan of the page. Deallocation inserts objects back
 * into the free list in address order, so this is effectively an
 * address-ordered first fit.
 *
 * Above this is an implementation of kmalloc/kfree. Blocks returned
 * from kmalloc are prepended with a minimum-alignment-sized header
 * recording the kmalloc size.
 * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
 * alloc_pages() directly, allocating compound pages so the page order
 * does not have to be separately tracked.
 * These objects are detected in kfree() because PageSlab()
 * is false for them.
 *
 * SLAB is emulated on top of SLOB by simply calling constructors and
 * destructors for every SLAB allocation. Objects are returned with
 * 4-byte alignment unless the SLAB_HWCACHE_ALIGN flag is set, in which
 * case the low-level allocator will fragment blocks to create the proper
 * alignment. Again, objects of page-size or greater are allocated by
 * calling alloc_pages(). As SLAB objects know their size, no separate
 * size bookkeeping is necessary and there is essentially no allocation
 * space overhead, and compound pages aren't needed for multi-page
 * allocations.
 *
 * NUMA support in SLOB is fairly simplistic, pushing most of the real
 * logic down to the page allocator, and simply doing the node accounting
 * on the upper levels. In the event that a node id is explicitly
 * provided, __alloc_pages_node() with the specified node id is used
 * instead. The common case (or when the node id isn't explicitly provided)
 * will default to the current node, as per numa_node_id().
 *
 * Node aware pages are still inserted into the global freelist, and
 * these are scanned for by matching against the node id encoded in the
 * page flags. As a result, block allocations that can be satisfied from
 * the freelist will only be served from pages residing on the same node,
 * in order to prevent random node placement.
 */
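/*
 * Illustrative walk-through of the layering described above (a sketch;
 * assumes a 4K PAGE_SIZE and an 8-byte kmalloc minimum alignment):
 *
 *   kmalloc(100): slob_alloc(100 + 8, ...) carves 108 bytes out of a
 *   page on the free_slob_small list (108 < SLOB_BREAK1), stores the
 *   requested size 100 in the 8-byte header and hands header + 8 back
 *   to the caller.
 *
 *   kmalloc(8192): the request is PAGE_SIZE or larger, so slob is
 *   bypassed and a compound page of order get_order(8192) == 1 comes
 *   straight from alloc_pages(); kfree() spots this case because
 *   PageSlab() is false for it.
 */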

#include <linux/kernel.h>
#include <linux/slab.h>

#include <linux/mm.h>
#include <linux/swap.h> /* struct reclaim_state */
#include <linux/cache.h>
#include <linux/init.h>
#include <linux/export.h>
#include <linux/rcupdate.h>
#include <linux/list.h>
#include <linux/kmemleak.h>

#include <trace/events/kmem.h>

#include <linux/atomic.h>

#include "slab.h"
/*
 * slob_block has a field 'units', which indicates size of block if +ve,
 * or offset of next block if -ve (in SLOB_UNITs).
 *
 * Free blocks of size 1 unit simply contain the offset of the next block.
 * Those with larger size contain their size in the first SLOB_UNIT of
 * memory, and the offset of the next free block in the second SLOB_UNIT.
 */
#if PAGE_SIZE <= (32767 * 2)
typedef s16 slobidx_t;
#else
typedef s32 slobidx_t;
#endif

struct slob_block {
	slobidx_t units;
};
typedef struct slob_block slob_t;
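/*
 * Worked example of the free-block encoding above (a sketch; assumes a
 * 4K PAGE_SIZE, so slobidx_t is s16 and SLOB_UNIT == 2 bytes):
 *
 *   A free block of 3 units starting 10 units into the page, whose next
 *   free block starts 50 units into the page, is encoded as
 *   s[0].units = 3, s[1].units = 50.
 *
 *   A 1-unit free block with the same successor is just s[0].units = -50.
 *   slob_next() recovers the successor as page_base + 50 in either case,
 *   and slob_last() reports true when the encoded successor lands on a
 *   page boundary (offset 0 within the page), as it does for the final
 *   free block of a page.
 */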

/*
 * All partially free slob pages go on these lists.
 */
#define SLOB_BREAK1 256
#define SLOB_BREAK2 1024
static LIST_HEAD(free_slob_small);
static LIST_HEAD(free_slob_medium);
static LIST_HEAD(free_slob_large);

/*
 * slob_page_free: true for pages on one of the free_slob_* lists.
 */
static inline int slob_page_free(struct page *sp)
{
	return PageSlobFree(sp);
}

static void set_slob_page_free(struct page *sp, struct list_head *list)
{
	list_add(&sp->slab_list, list);
	__SetPageSlobFree(sp);
}

static inline void clear_slob_page_free(struct page *sp)
{
	list_del(&sp->slab_list);
	__ClearPageSlobFree(sp);
}

#define SLOB_UNIT sizeof(slob_t)
#define SLOB_UNITS(size) DIV_ROUND_UP(size, SLOB_UNIT)

/*
 * struct slob_rcu is inserted at the tail of allocated slob blocks, which
 * were created with a SLAB_TYPESAFE_BY_RCU slab. slob_rcu is used to free
 * the block using call_rcu.
 */
struct slob_rcu {
	struct rcu_head head;
	int size;
};
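/*
 * Layout sketch for a SLAB_TYPESAFE_BY_RCU object (illustrative; the
 * numbers assume c->size starts at 40 bytes on a 64-bit build, where
 * sizeof(struct slob_rcu) == 24):
 *
 *   c->size grows to 40 + 24 = 64 in __kmem_cache_create(), so every
 *   allocation has room for the rcu footer:
 *
 *       b                     b + 40            b + 64
 *       | object payload .... | struct slob_rcu |
 *
 *   kmem_cache_free() fills the footer and call_rcu()s it; kmem_rcu_free()
 *   later recovers b as (void *)slob_rcu - (size - sizeof(struct slob_rcu)).
 */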

/*
 * slob_lock protects all slob allocator structures.
 */
static DEFINE_SPINLOCK(slob_lock);

/*
 * Encode the given size and next info into a free slob block s.
 */
static void set_slob(slob_t *s, slobidx_t size, slob_t *next)
{
	slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
	slobidx_t offset = next - base;

	if (size > 1) {
		s[0].units = size;
		s[1].units = offset;
	} else
		s[0].units = -offset;
}

/*
 * Return the size of a slob block.
 */
static slobidx_t slob_units(slob_t *s)
{
	if (s->units > 0)
		return s->units;
	return 1;
}

/*
 * Return the next free slob block pointer after this one.
 */
static slob_t *slob_next(slob_t *s)
{
	slob_t *base = (slob_t *)((unsigned long)s & PAGE_MASK);
	slobidx_t next;

	if (s[0].units < 0)
		next = -s[0].units;
	else
		next = s[1].units;
	return base+next;
}

/*
 * Returns true if s is the last free block in its page.
 */
static int slob_last(slob_t *s)
{
	return !((unsigned long)slob_next(s) & ~PAGE_MASK);
}

static void *slob_new_pages(gfp_t gfp, int order, int node)
{
	struct page *page;

#ifdef CONFIG_NUMA
	if (node != NUMA_NO_NODE)
		page = __alloc_pages_node(node, gfp, order);
	else
#endif
		page = alloc_pages(gfp, order);

	if (!page)
		return NULL;

	mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
			    PAGE_SIZE << order);
	return page_address(page);
}

static void slob_free_pages(void *b, int order)
{
	struct page *sp = virt_to_page(b);

	if (current->reclaim_state)
		current->reclaim_state->reclaimed_slab += 1 << order;

	mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
			    -(PAGE_SIZE << order));
	__free_pages(sp, order);
}

/*
 * slob_page_alloc() - Allocate a slob block within a given slob_page sp.
 * @sp: Page to look in.
 * @size: Size of the allocation.
 * @align: Allocation alignment.
 * @align_offset: Offset in the allocated block that will be aligned.
 * @page_removed_from_list: Return parameter.
 *
 * Tries to find a chunk of memory at least @size bytes big within @sp.
 *
 * Return: Pointer to memory if allocated, %NULL otherwise.  If the
 *         allocation fills up @sp then the page is removed from the
 *         freelist; in this case @page_removed_from_list will be set to
 *         true (set to false otherwise).
 */
static void *slob_page_alloc(struct page *sp, size_t size, int align,
			      int align_offset, bool *page_removed_from_list)
{
	slob_t *prev, *cur, *aligned = NULL;
	int delta = 0, units = SLOB_UNITS(size);

	*page_removed_from_list = false;
	for (prev = NULL, cur = sp->freelist; ; prev = cur, cur = slob_next(cur)) {
		slobidx_t avail = slob_units(cur);

		/*
		 * 'aligned' will hold the address of the slob block so that the
		 * address 'aligned'+'align_offset' is aligned according to the
		 * 'align' parameter. This is for kmalloc() which prepends the
		 * allocated block with its size, so that the block itself is
		 * aligned when needed.
		 */
		if (align) {
			aligned = (slob_t *)
				(ALIGN((unsigned long)cur + align_offset, align)
				 - align_offset);
			delta = aligned - cur;
		}
		if (avail >= units + delta) { /* room enough? */
			slob_t *next;

			if (delta) { /* need to fragment head to align? */
				next = slob_next(cur);
				set_slob(aligned, avail - delta, next);
				set_slob(cur, delta, aligned);
				prev = cur;
				cur = aligned;
				avail = slob_units(cur);
			}

			next = slob_next(cur);
			if (avail == units) { /* exact fit? unlink. */
				if (prev)
					set_slob(prev, slob_units(prev), next);
				else
					sp->freelist = next;
			} else { /* fragment */
				if (prev)
					set_slob(prev, slob_units(prev), cur + units);
				else
					sp->freelist = cur + units;
				set_slob(cur + units, avail - units, next);
			}

			sp->units -= units;
			if (!sp->units) {
				clear_slob_page_free(sp);
				*page_removed_from_list = true;
			}
			return cur;
		}
		if (slob_last(cur))
			return NULL;
	}
}
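/*
 * Alignment sketch for the head-fragmenting case above (illustrative;
 * assumes SLOB_UNIT == 2, align == 64 and align_offset == 8, i.e. a
 * power-of-two kmalloc() with an 8-byte size header):
 *
 *   cur sits 48 bytes into the page.  ALIGN(48 + 8, 64) == 64, so
 *   aligned = page_base + 56 bytes and delta = (56 - 48) / SLOB_UNIT = 4
 *   units.  The first 4 units stay on the freelist as a small head
 *   fragment, the object is carved starting at offset 56, and the
 *   caller's payload at offset 56 + 8 = 64 ends up 64-byte aligned.
 */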

/*
 * slob_alloc: entry point into the slob allocator.
 */
static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
							int align_offset)
{
	struct page *sp;
	struct list_head *slob_list;
	slob_t *b = NULL;
	unsigned long flags;
	bool _unused;

	if (size < SLOB_BREAK1)
		slob_list = &free_slob_small;
	else if (size < SLOB_BREAK2)
		slob_list = &free_slob_medium;
	else
		slob_list = &free_slob_large;

	spin_lock_irqsave(&slob_lock, flags);
	/* Iterate through each partially free page, try to find room */
	list_for_each_entry(sp, slob_list, slab_list) {
		bool page_removed_from_list = false;
#ifdef CONFIG_NUMA
		/*
		 * If there's a node specification, search for a partial
		 * page with a matching node id in the freelist.
		 */
		if (node != NUMA_NO_NODE && page_to_nid(sp) != node)
			continue;
#endif
		/* Enough room on this page? */
		if (sp->units < SLOB_UNITS(size))
			continue;

		b = slob_page_alloc(sp, size, align, align_offset, &page_removed_from_list);
		if (!b)
			continue;

		/*
		 * If slob_page_alloc() removed sp from the list then we
		 * cannot call list functions on sp.  If so, the allocation
		 * did not fragment the page anyway, so the optimisation
		 * below is unnecessary.
		 */
		if (!page_removed_from_list) {
			/*
			 * Improve fragment distribution and reduce our average
			 * search time by starting our next search here. (see
			 * Knuth vol 1, sec 2.5, pg 449)
			 */
			if (!list_is_first(&sp->slab_list, slob_list))
				list_rotate_to_front(&sp->slab_list, slob_list);
		}
		break;
	}
	spin_unlock_irqrestore(&slob_lock, flags);

	/* Not enough space: must allocate a new page */
	if (!b) {
		b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node);
		if (!b)
			return NULL;
		sp = virt_to_page(b);
		__SetPageSlab(sp);

		spin_lock_irqsave(&slob_lock, flags);
		sp->units = SLOB_UNITS(PAGE_SIZE);
		sp->freelist = b;
		INIT_LIST_HEAD(&sp->slab_list);
		set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
		set_slob_page_free(sp, slob_list);
		b = slob_page_alloc(sp, size, align, align_offset, &_unused);
		BUG_ON(!b);
		spin_unlock_irqrestore(&slob_lock, flags);
	}
	if (unlikely(gfp & __GFP_ZERO))
		memset(b, 0, size);
	return b;
}
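/*
 * Fresh-page sketch for the slow path above (illustrative; assumes a 4K
 * PAGE_SIZE, so SLOB_UNIT == 2 and SLOB_UNITS(PAGE_SIZE) == 2048):
 *
 *   A new page starts life as one 2048-unit free block: sp->units = 2048,
 *   sp->freelist = b, and set_slob(b, 2048, b + 2048) encodes the block's
 *   successor as the page boundary so slob_last() is true for it.  The
 *   page goes on the size-class list chosen from @size, and the very
 *   first slob_page_alloc() on it is expected to succeed, hence the
 *   BUG_ON(!b).
 */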

/*
 * slob_free: release a block back into the slob allocator.
 */
static void slob_free(void *block, int size)
{
	struct page *sp;
	slob_t *prev, *next, *b = (slob_t *)block;
	slobidx_t units;
	unsigned long flags;
	struct list_head *slob_list;

	if (unlikely(ZERO_OR_NULL_PTR(block)))
		return;
	BUG_ON(!size);

	sp = virt_to_page(block);
	units = SLOB_UNITS(size);

	spin_lock_irqsave(&slob_lock, flags);

	if (sp->units + units == SLOB_UNITS(PAGE_SIZE)) {
		/* Go directly to page allocator. Do not pass slob allocator */
		if (slob_page_free(sp))
			clear_slob_page_free(sp);
		spin_unlock_irqrestore(&slob_lock, flags);
		__ClearPageSlab(sp);
		page_mapcount_reset(sp);
		slob_free_pages(b, 0);
		return;
	}

	if (!slob_page_free(sp)) {
		/* This slob page is about to become partially free. Easy! */
		sp->units = units;
		sp->freelist = b;
		set_slob(b, units,
			(void *)((unsigned long)(b +
					SLOB_UNITS(PAGE_SIZE)) & PAGE_MASK));
		if (size < SLOB_BREAK1)
			slob_list = &free_slob_small;
		else if (size < SLOB_BREAK2)
			slob_list = &free_slob_medium;
		else
			slob_list = &free_slob_large;
		set_slob_page_free(sp, slob_list);
		goto out;
	}

	/*
	 * Otherwise the page is already partially free, so find the
	 * reinsertion point.
	 */
	sp->units += units;

	if (b < (slob_t *)sp->freelist) {
		if (b + units == sp->freelist) {
			units += slob_units(sp->freelist);
			sp->freelist = slob_next(sp->freelist);
		}
		set_slob(b, units, sp->freelist);
		sp->freelist = b;
	} else {
		prev = sp->freelist;
		next = slob_next(prev);
		while (b > next) {
			prev = next;
			next = slob_next(prev);
		}

		if (!slob_last(prev) && b + units == next) {
			units += slob_units(next);
			set_slob(b, units, slob_next(next));
		} else
			set_slob(b, units, next);

		if (prev + slob_units(prev) == b) {
			units = slob_units(b) + slob_units(prev);
			set_slob(prev, units, slob_next(b));
		} else
			set_slob(prev, slob_units(prev), b);
	}
out:
	spin_unlock_irqrestore(&slob_lock, flags);
}
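/*
 * Coalescing sketch for the address-ordered reinsertion above
 * (illustrative; offsets are in SLOB_UNITs within one page):
 *
 *   Freelist: [10, 3 units] -> [50, 6 units], and a 4-unit block at
 *   offset 46 is freed.  The walk stops with prev at 10 and next at 50;
 *   since 46 + 4 == 50 the freed block absorbs its successor (10 units
 *   total), and because prev at 10 only spans 3 units (10 + 3 != 46) it
 *   is simply relinked to point at 46.
 *   Result: [10, 3 units] -> [46, 10 units].
 */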

/*
 * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
 */

static __always_inline void *
__do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
{
	unsigned int *m;
	int minalign = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
	void *ret;

	gfp &= gfp_allowed_mask;

	fs_reclaim_acquire(gfp);
	fs_reclaim_release(gfp);

	if (size < PAGE_SIZE - minalign) {
		int align = minalign;

		/*
		 * For power of two sizes, guarantee natural alignment for
		 * kmalloc()'d objects.
		 */
		if (is_power_of_2(size))
			align = max(minalign, (int) size);

		if (!size)
			return ZERO_SIZE_PTR;

		m = slob_alloc(size + minalign, gfp, align, node, minalign);

		if (!m)
			return NULL;
		*m = size;
		ret = (void *)m + minalign;

		trace_kmalloc_node(caller, ret,
				   size, size + minalign, gfp, node);
	} else {
		unsigned int order = get_order(size);

		if (likely(order))
			gfp |= __GFP_COMP;
		ret = slob_new_pages(gfp, order, node);

		trace_kmalloc_node(caller, ret,
				   size, PAGE_SIZE << order, gfp, node);
	}

	kmemleak_alloc(ret, size, 1, gfp);
	return ret;
}
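/*
 * Worked example for the small-size path above (illustrative; assumes
 * minalign == 8, which is typical for 64-bit builds without a larger
 * ARCH_KMALLOC_MINALIGN):
 *
 *   kmalloc(64): 64 is a power of two, so align becomes max(8, 64) == 64
 *   and slob_alloc(64 + 8, gfp, 64, node, 8) is asked for 72 bytes whose
 *   address + 8 is 64-byte aligned.  The size 64 is stored in *m and
 *   m + 8 is handed back, so the payload itself sits on a 64-byte
 *   boundary while kfree()/__ksize() can still find the header at
 *   ret - minalign.
 */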

void *__kmalloc(size_t size, gfp_t gfp)
{
	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, _RET_IP_);
}
EXPORT_SYMBOL(__kmalloc);

void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
{
	return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
}
EXPORT_SYMBOL(__kmalloc_track_caller);

#ifdef CONFIG_NUMA
void *__kmalloc_node_track_caller(size_t size, gfp_t gfp,
					int node, unsigned long caller)
{
	return __do_kmalloc_node(size, gfp, node, caller);
}
EXPORT_SYMBOL(__kmalloc_node_track_caller);
#endif

void kfree(const void *block)
{
	struct page *sp;

	trace_kfree(_RET_IP_, block);

	if (unlikely(ZERO_OR_NULL_PTR(block)))
		return;
	kmemleak_free(block);

	sp = virt_to_page(block);
	if (PageSlab(sp)) {
		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
		unsigned int *m = (unsigned int *)(block - align);
		slob_free(m, *m + align);
	} else {
		unsigned int order = compound_order(sp);
		mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
				    -(PAGE_SIZE << order));
		__free_pages(sp, order);
	}
}
EXPORT_SYMBOL(kfree);

/* can't use ksize for kmem_cache_alloc memory, only kmalloc */
size_t __ksize(const void *block)
{
	struct page *sp;
	int align;
	unsigned int *m;

	BUG_ON(!block);
	if (unlikely(block == ZERO_SIZE_PTR))
		return 0;

	sp = virt_to_page(block);
	if (unlikely(!PageSlab(sp)))
		return page_size(sp);

	align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
	m = (unsigned int *)(block - align);
	return SLOB_UNITS(*m) * SLOB_UNIT;
}
EXPORT_SYMBOL(__ksize);
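/*
 * __ksize() example (illustrative; assumes a 4K PAGE_SIZE, so
 * SLOB_UNIT == 2):
 *
 *   For a kmalloc(13) block the header records *m == 13, and
 *   SLOB_UNITS(13) * SLOB_UNIT == 7 * 2 == 14 is reported, i.e. the
 *   request rounded up to whole SLOB_UNITs.  For a multi-page
 *   (non-PageSlab) allocation the whole compound page size is returned
 *   via page_size().
 */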

int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
{
	if (flags & SLAB_TYPESAFE_BY_RCU) {
		/* leave room for rcu footer at the end of object */
		c->size += sizeof(struct slob_rcu);
	}
	c->flags = flags;
	return 0;
}

static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
{
	void *b;

	flags &= gfp_allowed_mask;

	fs_reclaim_acquire(flags);
	fs_reclaim_release(flags);

	if (c->size < PAGE_SIZE) {
		b = slob_alloc(c->size, flags, c->align, node, 0);
		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
					    SLOB_UNITS(c->size) * SLOB_UNIT,
					    flags, node);
	} else {
		b = slob_new_pages(flags, get_order(c->size), node);
		trace_kmem_cache_alloc_node(_RET_IP_, b, c->object_size,
					    PAGE_SIZE << get_order(c->size),
					    flags, node);
	}

	if (b && c->ctor) {
		WARN_ON_ONCE(flags & __GFP_ZERO);
		c->ctor(b);
	}

	kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
	return b;
}

void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
{
	return slob_alloc_node(cachep, flags, NUMA_NO_NODE);
}
EXPORT_SYMBOL(kmem_cache_alloc);

#ifdef CONFIG_NUMA
void *__kmalloc_node(size_t size, gfp_t gfp, int node)
{
	return __do_kmalloc_node(size, gfp, node, _RET_IP_);
}
EXPORT_SYMBOL(__kmalloc_node);

void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
{
	return slob_alloc_node(cachep, gfp, node);
}
EXPORT_SYMBOL(kmem_cache_alloc_node);
#endif

static void __kmem_cache_free(void *b, int size)
{
	if (size < PAGE_SIZE)
		slob_free(b, size);
	else
		slob_free_pages(b, get_order(size));
}

static void kmem_rcu_free(struct rcu_head *head)
{
	struct slob_rcu *slob_rcu = (struct slob_rcu *)head;
	void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu));

	__kmem_cache_free(b, slob_rcu->size);
}

void kmem_cache_free(struct kmem_cache *c, void *b)
{
	kmemleak_free_recursive(b, c->flags);
	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
		struct slob_rcu *slob_rcu;
		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
		slob_rcu->size = c->size;
		call_rcu(&slob_rcu->head, kmem_rcu_free);
	} else {
		__kmem_cache_free(b, c->size);
	}

	trace_kmem_cache_free(_RET_IP_, b);
}
EXPORT_SYMBOL(kmem_cache_free);

void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
{
	__kmem_cache_free_bulk(s, size, p);
}
EXPORT_SYMBOL(kmem_cache_free_bulk);

int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
								void **p)
{
	return __kmem_cache_alloc_bulk(s, flags, size, p);
}
EXPORT_SYMBOL(kmem_cache_alloc_bulk);

int __kmem_cache_shutdown(struct kmem_cache *c)
{
	/* No way to check for remaining objects */
	return 0;
}

void __kmem_cache_release(struct kmem_cache *c)
{
}

int __kmem_cache_shrink(struct kmem_cache *d)
{
	return 0;
}

struct kmem_cache kmem_cache_boot = {
	.name = "kmem_cache",
	.size = sizeof(struct kmem_cache),
	.flags = SLAB_PANIC,
	.align = ARCH_KMALLOC_MINALIGN,
};

void __init kmem_cache_init(void)
{
	kmem_cache = &kmem_cache_boot;
	slab_state = UP;
}

void __init kmem_cache_init_late(void)
{
	slab_state = FULL;
}