Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0-only
/*
 * mm/kmemleak.c
 *
 * Copyright (C) 2008 ARM Limited
 * Written by Catalin Marinas <catalin.marinas@arm.com>
 *
 * For more information on the algorithm and kmemleak usage, please see
 * Documentation/dev-tools/kmemleak.rst.
 *
 * Notes on locking
 * ----------------
 *
 * The following locks and mutexes are used by kmemleak:
 *
 * - kmemleak_lock (raw_spinlock_t): protects the object_list modifications and
 *   accesses to the object_tree_root. The object_list is the main list
 *   holding the metadata (struct kmemleak_object) for the allocated memory
 *   blocks. The object_tree_root is a red black tree used to look up
 *   metadata based on a pointer to the corresponding memory block. The
 *   kmemleak_object structures are added to the object_list and
 *   object_tree_root in the create_object() function called from the
 *   kmemleak_alloc() callback and removed in delete_object() called from the
 *   kmemleak_free() callback.
 * - kmemleak_object.lock (raw_spinlock_t): protects a kmemleak_object.
 *   Accesses to the metadata (e.g. count) are protected by this lock. Note
 *   that some members of this structure may be protected by other means
 *   (atomic or kmemleak_lock). This lock is also held when scanning the
 *   corresponding memory block to avoid the kernel freeing it via the
 *   kmemleak_free() callback. This is less heavyweight than holding a global
 *   lock like kmemleak_lock during scanning.
 * - scan_mutex (mutex): ensures that only one thread may scan the memory for
 *   unreferenced objects at a time. The gray_list contains the objects which
 *   are already referenced or marked as false positives and need to be
 *   scanned. This list is only modified during a scanning episode when the
 *   scan_mutex is held. At the end of a scan, the gray_list is always empty.
 *   Note that the kmemleak_object.use_count is incremented when an object is
 *   added to the gray_list and therefore cannot be freed. This mutex also
 *   prevents multiple users of the "kmemleak" debugfs file from racing with
 *   modifications to the memory scanning parameters, including the
 *   scan_thread pointer.
 *
 * Locks and mutexes are acquired/nested in the following order:
 *
 *   scan_mutex [-> object->lock] -> kmemleak_lock -> other_object->lock (SINGLE_DEPTH_NESTING)
 *
 * No kmemleak_lock and object->lock nesting is allowed outside scan_mutex
 * regions.
 *
 * The kmemleak_object structures have a use_count incremented or decremented
 * using the get_object()/put_object() functions. When the use_count becomes
 * 0, this count can no longer be incremented and put_object() schedules the
 * kmemleak_object freeing via an RCU callback. All calls to the get_object()
 * function must be protected by rcu_read_lock() to avoid accessing a freed
 * structure.
 */
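
/*
 * A rough, illustrative sketch of the allowed nesting above (not an actual
 * code path in this file; "object" and "other_object" are hypothetical and
 * only mirror what a scan does when it follows a pointer from one object
 * into another):
 *
 *	mutex_lock(&scan_mutex);
 *	raw_spin_lock_irqsave(&object->lock, flags);	// object being scanned
 *	raw_spin_lock_irqsave(&kmemleak_lock, flags2);	// look up a pointee
 *	raw_spin_lock_nested(&other_object->lock, SINGLE_DEPTH_NESTING);
 *	...
 *	// released in the reverse order
 */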

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/sched/signal.h>
#include <linux/sched/task.h>
#include <linux/sched/task_stack.h>
#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/export.h>
#include <linux/kthread.h>
#include <linux/rbtree.h>
#include <linux/fs.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/cpumask.h>
#include <linux/spinlock.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/stacktrace.h>
#include <linux/cache.h>
#include <linux/percpu.h>
#include <linux/memblock.h>
#include <linux/pfn.h>
#include <linux/mmzone.h>
#include <linux/slab.h>
#include <linux/thread_info.h>
#include <linux/err.h>
#include <linux/uaccess.h>
#include <linux/string.h>
#include <linux/nodemask.h>
#include <linux/mm.h>
#include <linux/workqueue.h>
#include <linux/crc32.h>

#include <asm/sections.h>
#include <asm/processor.h>
#include <linux/atomic.h>

#include <linux/kasan.h>
#include <linux/kfence.h>
#include <linux/kmemleak.h>
#include <linux/memory_hotplug.h>

/*
 * Kmemleak configuration and common defines.
 */
#define MAX_TRACE		16	/* stack trace length */
#define MSECS_MIN_AGE		5000	/* minimum object age for reporting */
#define SECS_FIRST_SCAN		60	/* delay before the first scan */
#define SECS_SCAN_WAIT		600	/* subsequent auto scanning delay */
#define MAX_SCAN_SIZE		4096	/* maximum size of a scanned block */

#define BYTES_PER_POINTER	sizeof(void *)

/* GFP bitmask for kmemleak internal allocations */
#define gfp_kmemleak_mask(gfp)	(((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \
				 __GFP_NORETRY | __GFP_NOMEMALLOC | \
				 __GFP_NOWARN)
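
/*
 * Example (illustrative): gfp_kmemleak_mask(GFP_KERNEL | __GFP_HIGHMEM)
 * keeps only the GFP_KERNEL bits of the caller's flags and ORs in
 * __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN, so kmemleak's own
 * metadata allocations never retry hard, never dip into emergency
 * reserves and never warn on failure.
 */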

/* scanning area inside a memory block */
struct kmemleak_scan_area {
	struct hlist_node node;
	unsigned long start;
	size_t size;
};

#define KMEMLEAK_GREY	0
#define KMEMLEAK_BLACK	-1

/*
 * Structure holding the metadata for each allocated memory block.
 * Modifications to such objects should be made while holding the
 * object->lock. Insertions or deletions from object_list, gray_list or
 * rb_node are already protected by the corresponding locks or mutex (see
 * the notes on locking above). These objects are reference-counted
 * (use_count) and freed using the RCU mechanism.
 */
struct kmemleak_object {
	raw_spinlock_t lock;
	unsigned int flags;		/* object status flags */
	struct list_head object_list;
	struct list_head gray_list;
	struct rb_node rb_node;
	struct rcu_head rcu;		/* object_list lockless traversal */
	/* object usage count; object freed when use_count == 0 */
	atomic_t use_count;
	unsigned long pointer;
	size_t size;
	/* pass surplus references to this pointer */
	unsigned long excess_ref;
	/* minimum number of pointers found before it is considered a leak */
	int min_count;
	/* the total number of pointers found pointing to this object */
	int count;
	/* checksum for detecting modified objects */
	u32 checksum;
	/* memory ranges to be scanned inside an object (empty for all) */
	struct hlist_head area_list;
	unsigned long trace[MAX_TRACE];
	unsigned int trace_len;
	unsigned long jiffies;		/* creation timestamp */
	pid_t pid;			/* pid of the current task */
	char comm[TASK_COMM_LEN];	/* executable name */
};

/* flag representing the memory block allocation status */
#define OBJECT_ALLOCATED	(1 << 0)
/* flag set after the first reporting of an unreferenced object */
#define OBJECT_REPORTED		(1 << 1)
/* flag set to not scan the object */
#define OBJECT_NO_SCAN		(1 << 2)
/* flag set to fully scan the object when scan_area allocation failed */
#define OBJECT_FULL_SCAN	(1 << 3)

#define HEX_PREFIX		"    "
/* number of bytes to print per line; must be 16 or 32 */
#define HEX_ROW_SIZE		16
/* number of bytes to print at a time (1, 2, 4, 8) */
#define HEX_GROUP_SIZE		1
/* include ASCII after the hex output */
#define HEX_ASCII		1
/* max number of lines to be printed */
#define HEX_MAX_LINES		2
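
/*
 * With the defaults above, a leak report therefore dumps at most
 * HEX_MAX_LINES * HEX_ROW_SIZE == 32 bytes of an object's contents
 * (see hex_dump_object() below).
 */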

/* the list of all allocated objects */
static LIST_HEAD(object_list);
/* the list of gray-colored objects (see color_gray comment below) */
static LIST_HEAD(gray_list);
/* memory pool allocation */
static struct kmemleak_object mem_pool[CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE];
static int mem_pool_free_count = ARRAY_SIZE(mem_pool);
static LIST_HEAD(mem_pool_free_list);
/* search tree for object boundaries */
static struct rb_root object_tree_root = RB_ROOT;
/* protecting the access to object_list and object_tree_root */
static DEFINE_RAW_SPINLOCK(kmemleak_lock);

/* allocation caches for kmemleak internal data */
static struct kmem_cache *object_cache;
static struct kmem_cache *scan_area_cache;

/* set if tracing memory operations is enabled */
static int kmemleak_enabled = 1;
/* same as above but only for the kmemleak_free() callback */
static int kmemleak_free_enabled = 1;
/* set in the late_initcall if there were no errors */
static int kmemleak_initialized;
/* set if a kmemleak warning was issued */
static int kmemleak_warning;
/* set if a fatal kmemleak error has occurred */
static int kmemleak_error;

/* minimum and maximum address that may be valid pointers */
static unsigned long min_addr = ULONG_MAX;
static unsigned long max_addr;

static struct task_struct *scan_thread;
/* used to avoid reporting of recently allocated objects */
static unsigned long jiffies_min_age;
static unsigned long jiffies_last_scan;
/* delay between automatic memory scannings */
static signed long jiffies_scan_wait;
/* enables or disables the task stacks scanning */
static int kmemleak_stack_scan = 1;
/* protects the memory scanning, parameters and debug/kmemleak file access */
static DEFINE_MUTEX(scan_mutex);
/* setting kmemleak=on will set this var, skipping the disable */
static int kmemleak_skip_disable;
/* If there are leaks that can be reported */
static bool kmemleak_found_leaks;

static bool kmemleak_verbose;
module_param_named(verbose, kmemleak_verbose, bool, 0600);

static void kmemleak_disable(void);

/*
 * Print a warning and dump the stack trace.
 */
#define kmemleak_warn(x...)	do {		\
	pr_warn(x);				\
	dump_stack();				\
	kmemleak_warning = 1;			\
} while (0)

/*
 * Macro invoked when a serious kmemleak condition has occurred and cannot be
 * recovered from. Kmemleak will be disabled and further allocation/freeing
 * tracing is no longer available.
 */
#define kmemleak_stop(x...)	do {	\
	kmemleak_warn(x);		\
	kmemleak_disable();		\
} while (0)

#define warn_or_seq_printf(seq, fmt, ...)	do {	\
	if (seq)					\
		seq_printf(seq, fmt, ##__VA_ARGS__);	\
	else						\
		pr_warn(fmt, ##__VA_ARGS__);		\
} while (0)

static void warn_or_seq_hex_dump(struct seq_file *seq, int prefix_type,
				 int rowsize, int groupsize, const void *buf,
				 size_t len, bool ascii)
{
	if (seq)
		seq_hex_dump(seq, HEX_PREFIX, prefix_type, rowsize, groupsize,
			     buf, len, ascii);
	else
		print_hex_dump(KERN_WARNING, pr_fmt(HEX_PREFIX), prefix_type,
			       rowsize, groupsize, buf, len, ascii);
}

/*
 * Printing of the object's hex dump to the seq file. The number of lines to
 * be printed is limited to HEX_MAX_LINES to prevent seq file spamming. The
 * actual number of printed bytes depends on HEX_ROW_SIZE. It must be called
 * with the object->lock held.
 */
static void hex_dump_object(struct seq_file *seq,
			    struct kmemleak_object *object)
{
	const u8 *ptr = (const u8 *)object->pointer;
	size_t len;

	/* limit the number of lines to HEX_MAX_LINES */
	len = min_t(size_t, object->size, HEX_MAX_LINES * HEX_ROW_SIZE);

	warn_or_seq_printf(seq, "  hex dump (first %zu bytes):\n", len);
	kasan_disable_current();
	warn_or_seq_hex_dump(seq, DUMP_PREFIX_NONE, HEX_ROW_SIZE,
			     HEX_GROUP_SIZE, kasan_reset_tag((void *)ptr), len, HEX_ASCII);
	kasan_enable_current();
}

/*
 * Object colors, encoded with count and min_count:
 * - white - orphan object, not enough references to it (count < min_count)
 * - gray  - not orphan, not marked as false positive (min_count == 0) or
 *		sufficient references to it (count >= min_count)
 * - black - ignore, it doesn't contain references (e.g. text section)
 *		(min_count == -1). No function defined for this color.
 * Newly created objects don't have any color assigned (object->count == -1)
 * before the next memory scan when they become white.
 */
static bool color_white(const struct kmemleak_object *object)
{
	return object->count != KMEMLEAK_BLACK &&
		object->count < object->min_count;
}

static bool color_gray(const struct kmemleak_object *object)
{
	return object->min_count != KMEMLEAK_BLACK &&
		object->count >= object->min_count;
}
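
/*
 * Worked examples of the predicates above (illustrative values):
 *   min_count == -1 (KMEMLEAK_BLACK)	-> black, never scanned or reported
 *   min_count == 0, count == 0		-> gray, treated as always referenced
 *   min_count == 1, count == 0		-> white, a candidate leak
 *   min_count == 1, count >= 1		-> gray, sufficiently referenced
 */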

/*
 * Objects are considered unreferenced only if their color is white, they have
 * not been deleted and have a minimum age to avoid false positives caused by
 * pointers temporarily stored in CPU registers.
 */
static bool unreferenced_object(struct kmemleak_object *object)
{
	return (color_white(object) && object->flags & OBJECT_ALLOCATED) &&
		time_before_eq(object->jiffies + jiffies_min_age,
			       jiffies_last_scan);
}
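
/*
 * Example: assuming jiffies_min_age is derived from MSECS_MIN_AGE (5000 ms)
 * during initialization, an object allocated less than five seconds before
 * the last scan started is not reported even if it is currently white.
 */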

/*
 * Printing of the unreferenced object's information to the seq file. The
 * print_unreferenced function must be called with the object->lock held.
 */
static void print_unreferenced(struct seq_file *seq,
			       struct kmemleak_object *object)
{
	int i;
	unsigned int msecs_age = jiffies_to_msecs(jiffies - object->jiffies);

	warn_or_seq_printf(seq, "unreferenced object 0x%08lx (size %zu):\n",
		   object->pointer, object->size);
	warn_or_seq_printf(seq, "  comm \"%s\", pid %d, jiffies %lu (age %d.%03ds)\n",
		   object->comm, object->pid, object->jiffies,
		   msecs_age / 1000, msecs_age % 1000);
	hex_dump_object(seq, object);
	warn_or_seq_printf(seq, "  backtrace:\n");

	for (i = 0; i < object->trace_len; i++) {
		void *ptr = (void *)object->trace[i];
		warn_or_seq_printf(seq, "    [<%p>] %pS\n", ptr, ptr);
	}
}

/*
 * Print the kmemleak_object information. This function is used mainly for
 * debugging special cases of kmemleak operations. It must be called with
 * the object->lock held.
 */
static void dump_object_info(struct kmemleak_object *object)
{
	pr_notice("Object 0x%08lx (size %zu):\n",
		  object->pointer, object->size);
	pr_notice("  comm \"%s\", pid %d, jiffies %lu\n",
		  object->comm, object->pid, object->jiffies);
	pr_notice("  min_count = %d\n", object->min_count);
	pr_notice("  count = %d\n", object->count);
	pr_notice("  flags = 0x%x\n", object->flags);
	pr_notice("  checksum = %u\n", object->checksum);
	pr_notice("  backtrace:\n");
	stack_trace_print(object->trace, object->trace_len, 4);
}

/*
 * Look up a memory block's metadata (kmemleak_object) in the object search
 * tree based on a pointer value. If alias is 0, only values pointing to the
 * beginning of the memory block are allowed. The kmemleak_lock must be held
 * when calling this function.
 */
static struct kmemleak_object *lookup_object(unsigned long ptr, int alias)
{
	struct rb_node *rb = object_tree_root.rb_node;

	while (rb) {
		struct kmemleak_object *object =
			rb_entry(rb, struct kmemleak_object, rb_node);
		if (ptr < object->pointer)
			rb = object->rb_node.rb_left;
		else if (object->pointer + object->size <= ptr)
			rb = object->rb_node.rb_right;
		else if (object->pointer == ptr || alias)
			return object;
		else {
			kmemleak_warn("Found object by alias at 0x%08lx\n",
				      ptr);
			dump_object_info(object);
			break;
		}
	}
	return NULL;
}

/*
 * Increment the object use_count. Return 1 if successful or 0 otherwise. Note
 * that once an object's use_count has reached 0, the RCU freeing has already
 * been registered and the object should no longer be used. This function must
 * be called under the protection of rcu_read_lock().
 */
static int get_object(struct kmemleak_object *object)
{
	return atomic_inc_not_zero(&object->use_count);
}
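
/*
 * Typical usage (sketch only; find_and_get_object() below is the real
 * pattern):
 *
 *	rcu_read_lock();
 *	object = lookup_object(ptr, alias);	// under kmemleak_lock
 *	if (object && !get_object(object))
 *		object = NULL;			// already queued for RCU freeing
 *	rcu_read_unlock();
 */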

/*
 * Memory pool allocation and freeing. kmemleak_lock must not be held.
 */
static struct kmemleak_object *mem_pool_alloc(gfp_t gfp)
{
	unsigned long flags;
	struct kmemleak_object *object;

	/* try the slab allocator first */
	if (object_cache) {
		object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
		if (object)
			return object;
	}

	/* slab allocation failed, try the memory pool */
	raw_spin_lock_irqsave(&kmemleak_lock, flags);
	object = list_first_entry_or_null(&mem_pool_free_list,
					  typeof(*object), object_list);
	if (object)
		list_del(&object->object_list);
	else if (mem_pool_free_count)
		object = &mem_pool[--mem_pool_free_count];
	else
		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);

	return object;
}
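
/*
 * Note: the static mem_pool is not only an emergency fallback. While
 * object_cache is still NULL (early boot, presumably before the slab
 * caches are created during kmemleak initialization), every metadata
 * allocation is served from the pool.
 */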

/*
 * Return the object to either the slab allocator or the memory pool.
 */
static void mem_pool_free(struct kmemleak_object *object)
{
	unsigned long flags;

	if (object < mem_pool || object >= mem_pool + ARRAY_SIZE(mem_pool)) {
		kmem_cache_free(object_cache, object);
		return;
	}

	/* add the object to the memory pool free list */
	raw_spin_lock_irqsave(&kmemleak_lock, flags);
	list_add(&object->object_list, &mem_pool_free_list);
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
}

/*
 * RCU callback to free a kmemleak_object.
 */
static void free_object_rcu(struct rcu_head *rcu)
{
	struct hlist_node *tmp;
	struct kmemleak_scan_area *area;
	struct kmemleak_object *object =
		container_of(rcu, struct kmemleak_object, rcu);

	/*
	 * Once use_count is 0 (guaranteed by put_object), there is no other
	 * code accessing this object, hence no need for locking.
	 */
	hlist_for_each_entry_safe(area, tmp, &object->area_list, node) {
		hlist_del(&area->node);
		kmem_cache_free(scan_area_cache, area);
	}
	mem_pool_free(object);
}

/*
 * Decrement the object use_count. Once the count is 0, free the object using
 * an RCU callback. Since put_object() may be called via the kmemleak_free() ->
 * delete_object() path, the delayed RCU freeing ensures that there is no
 * recursive call to the kernel allocator. Lock-less RCU object_list traversal
 * is also possible.
 */
static void put_object(struct kmemleak_object *object)
{
	if (!atomic_dec_and_test(&object->use_count))
		return;

	/* should only get here after delete_object was called */
	WARN_ON(object->flags & OBJECT_ALLOCATED);

	/*
	 * It may be too early for the RCU callbacks, however, there is no
	 * concurrent object_list traversal when !object_cache and all objects
	 * came from the memory pool. Free the object directly.
	 */
	if (object_cache)
		call_rcu(&object->rcu, free_object_rcu);
	else
		free_object_rcu(&object->rcu);
}

/*
 * Look up an object in the object search tree and increase its use_count.
 */
static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
{
	unsigned long flags;
	struct kmemleak_object *object;

	rcu_read_lock();
	raw_spin_lock_irqsave(&kmemleak_lock, flags);
	object = lookup_object(ptr, alias);
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);

	/* check whether the object is still available */
	if (object && !get_object(object))
		object = NULL;
	rcu_read_unlock();

	return object;
}

/*
 * Remove an object from the object_tree_root and object_list. Must be called
 * with the kmemleak_lock held _if_ kmemleak is still enabled.
 */
static void __remove_object(struct kmemleak_object *object)
{
	rb_erase(&object->rb_node, &object_tree_root);
	list_del_rcu(&object->object_list);
}

/*
 * Look up an object in the object search tree and remove it from both
 * object_tree_root and object_list. The returned object's use_count should be
 * at least 1, as initially set by create_object().
 */
static struct kmemleak_object *find_and_remove_object(unsigned long ptr, int alias)
{
	unsigned long flags;
	struct kmemleak_object *object;

	raw_spin_lock_irqsave(&kmemleak_lock, flags);
	object = lookup_object(ptr, alias);
	if (object)
		__remove_object(object);
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);

	return object;
}

/*
 * Save a stack trace to the given array of MAX_TRACE size. The skip count
 * of 2 omits this helper and its immediate caller from the saved trace.
 */
static int __save_stack_trace(unsigned long *trace)
{
	return stack_trace_save(trace, MAX_TRACE, 2);
}

/*
 * Create the metadata (struct kmemleak_object) corresponding to an allocated
 * memory block and add it to the object_list and object_tree_root.
 */
static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
					     int min_count, gfp_t gfp)
{
	unsigned long flags;
	struct kmemleak_object *object, *parent;
	struct rb_node **link, *rb_parent;
	unsigned long untagged_ptr;

	object = mem_pool_alloc(gfp);
	if (!object) {
		pr_warn("Cannot allocate a kmemleak_object structure\n");
		kmemleak_disable();
		return NULL;
	}

	INIT_LIST_HEAD(&object->object_list);
	INIT_LIST_HEAD(&object->gray_list);
	INIT_HLIST_HEAD(&object->area_list);
	raw_spin_lock_init(&object->lock);
	atomic_set(&object->use_count, 1);
	object->flags = OBJECT_ALLOCATED;
	object->pointer = ptr;
	object->size = kfence_ksize((void *)ptr) ?: size;
	object->excess_ref = 0;
	object->min_count = min_count;
	object->count = 0;			/* white color initially */
	object->jiffies = jiffies;
	object->checksum = 0;

	/* task information */
	if (in_irq()) {
		object->pid = 0;
		strncpy(object->comm, "hardirq", sizeof(object->comm));
	} else if (in_serving_softirq()) {
		object->pid = 0;
		strncpy(object->comm, "softirq", sizeof(object->comm));
	} else {
		object->pid = current->pid;
		/*
		 * There is a small chance of a race with set_task_comm(),
		 * however using get_task_comm() here may cause locking
		 * dependency issues with current->alloc_lock. In the worst
		 * case, the recorded comm is not correct.
		 */
		strncpy(object->comm, current->comm, sizeof(object->comm));
	}

	/* kernel backtrace */
	object->trace_len = __save_stack_trace(object->trace);

	raw_spin_lock_irqsave(&kmemleak_lock, flags);

	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
	min_addr = min(min_addr, untagged_ptr);
	max_addr = max(max_addr, untagged_ptr + size);
	link = &object_tree_root.rb_node;
	rb_parent = NULL;
	while (*link) {
		rb_parent = *link;
		parent = rb_entry(rb_parent, struct kmemleak_object, rb_node);
		if (ptr + size <= parent->pointer)
			link = &parent->rb_node.rb_left;
		else if (parent->pointer + parent->size <= ptr)
			link = &parent->rb_node.rb_right;
		else {
			kmemleak_stop("Cannot insert 0x%lx into the object search tree (overlaps existing)\n",
				      ptr);
			/*
			 * No need for parent->lock here since "parent" cannot
			 * be freed while the kmemleak_lock is held.
			 */
			dump_object_info(parent);
			kmem_cache_free(object_cache, object);
			object = NULL;
			goto out;
		}
	}
	rb_link_node(&object->rb_node, rb_parent, link);
	rb_insert_color(&object->rb_node, &object_tree_root);

	list_add_tail_rcu(&object->object_list, &object_list);
out:
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
	return object;
}

/*
 * Mark the object as not allocated and schedule RCU freeing via put_object().
 */
static void __delete_object(struct kmemleak_object *object)
{
	unsigned long flags;

	WARN_ON(!(object->flags & OBJECT_ALLOCATED));
	WARN_ON(atomic_read(&object->use_count) < 1);

	/*
	 * Locking here also ensures that the corresponding memory block
	 * cannot be freed when it is being scanned.
	 */
	raw_spin_lock_irqsave(&object->lock, flags);
	object->flags &= ~OBJECT_ALLOCATED;
	raw_spin_unlock_irqrestore(&object->lock, flags);
	put_object(object);
}

/*
 * Look up the metadata (struct kmemleak_object) corresponding to ptr and
 * delete it.
 */
static void delete_object_full(unsigned long ptr)
{
	struct kmemleak_object *object;

	object = find_and_remove_object(ptr, 0);
	if (!object) {
#ifdef DEBUG
		kmemleak_warn("Freeing unknown object at 0x%08lx\n",
			      ptr);
#endif
		return;
	}
	__delete_object(object);
}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  695) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  696) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  697)  * Look up the metadata (struct kmemleak_object) corresponding to ptr and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  698)  * delete it. If the memory block is partially freed, the function may create
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  699)  * additional metadata for the remaining parts of the block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  700)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  701) static void delete_object_part(unsigned long ptr, size_t size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  702) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  703) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  704) 	unsigned long start, end;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  705) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  706) 	object = find_and_remove_object(ptr, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  707) 	if (!object) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  708) #ifdef DEBUG
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  709) 		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  710) 			      ptr, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  711) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  712) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  713) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  714) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  715) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  716) 	 * Create one or two objects that may result from the memory block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  717) 	 * split. Note that partial freeing is only done by free_bootmem() and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  718) 	 * this happens before kmemleak_init() is called.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  719) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  720) 	start = object->pointer;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  721) 	end = object->pointer + object->size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  722) 	if (ptr > start)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  723) 		create_object(start, ptr - start, object->min_count,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  724) 			      GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  725) 	if (ptr + size < end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  726) 		create_object(ptr + size, end - ptr - size, object->min_count,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  727) 			      GFP_KERNEL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  728) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  729) 	__delete_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  730) }
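
/*
 * Worked example (illustrative addresses, not from this file): if an
 * object is tracked at [0x1000, 0x1400) and 0x100 bytes are freed at
 * 0x1100, the code above re-creates metadata for [0x1000, 0x1100) and
 * [0x1200, 0x1400), then deletes the object covering the whole range.
 */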
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  731) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  732) static void __paint_it(struct kmemleak_object *object, int color)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  733) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  734) 	object->min_count = color;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  735) 	if (color == KMEMLEAK_BLACK)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  736) 		object->flags |= OBJECT_NO_SCAN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  737) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  738) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  739) static void paint_it(struct kmemleak_object *object, int color)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  740) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  741) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  742) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  743) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  744) 	__paint_it(object, color);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  745) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  746) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  747) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  748) static void paint_ptr(unsigned long ptr, int color)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  749) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  750) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  751) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  752) 	object = find_and_get_object(ptr, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  753) 	if (!object) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  754) 		kmemleak_warn("Trying to color unknown object at 0x%08lx as %s\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  755) 			      ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  756) 			      (color == KMEMLEAK_GREY) ? "Grey" :
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  757) 			      (color == KMEMLEAK_BLACK) ? "Black" : "Unknown");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  758) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  759) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  760) 	paint_it(object, color);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  761) 	put_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  762) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  763) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  764) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  765)  * Mark an object permanently as gray-colored so that it can no longer be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  766)  * reported as a leak. This is generally used to mark a false positive.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  767)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  768) static void make_gray_object(unsigned long ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  769) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  770) 	paint_ptr(ptr, KMEMLEAK_GREY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  771) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  772) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  773) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  774)  * Mark the object as black-colored so that it is ignored from scans and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  775)  * reporting.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  776)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  777) static void make_black_object(unsigned long ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  778) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  779) 	paint_ptr(ptr, KMEMLEAK_BLACK);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  780) }
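
/*
 * Note on the color encoding used by the painting helpers above: the
 * color is stored directly in object->min_count. Gray (KMEMLEAK_GREY,
 * defined as 0 earlier in this file) means no references are required
 * for the object to count as referenced, so it is never reported; black
 * (KMEMLEAK_BLACK, -1) additionally sets OBJECT_NO_SCAN so the block's
 * contents are never scanned either.
 */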
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  781) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  782) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  783)  * Add a scanning area to the object. If at least one such area is added,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  784)  * kmemleak will only scan these ranges rather than the whole memory block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  785)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  786) static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  787) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  788) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  789) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  790) 	struct kmemleak_scan_area *area = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  791) 	unsigned long untagged_ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  792) 	unsigned long untagged_objp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  793) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  794) 	object = find_and_get_object(ptr, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  795) 	if (!object) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  796) 		kmemleak_warn("Adding scan area to unknown object at 0x%08lx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  797) 			      ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  798) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  799) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  800) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  801) 	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  802) 	untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  803) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  804) 	if (scan_area_cache)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  805) 		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  806) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  807) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  808) 	if (!area) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  809) 		pr_warn_once("Cannot allocate a scan area, scanning the full object\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  810) 		/* mark the object for full scan to avoid false positives */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  811) 		object->flags |= OBJECT_FULL_SCAN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  812) 		goto out_unlock;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  813) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  814) 	if (size == SIZE_MAX) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  815) 		size = untagged_objp + object->size - untagged_ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  816) 	} else if (untagged_ptr + size > untagged_objp + object->size) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  817) 		kmemleak_warn("Scan area larger than object 0x%08lx\n", ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  818) 		dump_object_info(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  819) 		kmem_cache_free(scan_area_cache, area);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  820) 		goto out_unlock;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  821) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  822) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  823) 	INIT_HLIST_NODE(&area->node);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  824) 	area->start = ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  825) 	area->size = size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  826) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  827) 	hlist_add_head(&area->node, &object->area_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  828) out_unlock:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  829) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  830) 	put_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  831) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  832) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  833) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  834)  * Any surplus references (object already gray) to 'ptr' are passed to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  835)  * 'excess_ref'. This is used in the vmalloc() case where a pointer to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  836)  * vm_struct may be used as an alternative reference to the vmalloc'ed object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  837)  * (see free_thread_stack()).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  838)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  839) static void object_set_excess_ref(unsigned long ptr, unsigned long excess_ref)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  840) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  841) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  842) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  843) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  844) 	object = find_and_get_object(ptr, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  845) 	if (!object) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  846) 		kmemleak_warn("Setting excess_ref on unknown object at 0x%08lx\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  847) 			      ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  848) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  849) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  850) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  851) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  852) 	object->excess_ref = excess_ref;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  853) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  854) 	put_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  855) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  856) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  857) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  858)  * Set the OBJECT_NO_SCAN flag for the object corresponding to the given
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  859)  * pointer. Such an object will not be scanned by kmemleak, but references
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  860)  * to it are still searched for.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  861)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  862) static void object_no_scan(unsigned long ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  863) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  864) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  865) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  866) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  867) 	object = find_and_get_object(ptr, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  868) 	if (!object) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  869) 		kmemleak_warn("Not scanning unknown object at 0x%08lx\n", ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  870) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  871) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  872) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  873) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  874) 	object->flags |= OBJECT_NO_SCAN;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  875) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  876) 	put_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  877) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  878) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  879) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  880)  * kmemleak_alloc - register a newly allocated object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  881)  * @ptr:	pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  882)  * @size:	size of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  883)  * @min_count:	minimum number of references to this object. If during memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  884)  *		scanning a number of references less than @min_count is found,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  885)  *		the object is reported as a memory leak. If @min_count is 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  886)  *		the object is never reported as a leak. If @min_count is -1,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  887)  *		the object is ignored (not scanned and not reported as a leak)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  888)  * @gfp:	kmalloc() flags used for kmemleak internal memory allocations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  889)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  890)  * This function is called from the kernel allocators when a new object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  891)  * (memory block) is allocated (kmem_cache_alloc, kmalloc etc.).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  892)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  893) void __ref kmemleak_alloc(const void *ptr, size_t size, int min_count,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  894) 			  gfp_t gfp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  895) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  896) 	pr_debug("%s(0x%p, %zu, %d)\n", __func__, ptr, size, min_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  897) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  898) 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  899) 		create_object((unsigned long)ptr, size, min_count, gfp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  900) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  901) EXPORT_SYMBOL_GPL(kmemleak_alloc);
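
/*
 * Usage sketch (hypothetical allocator, not part of this file): code
 * that hands out memory invisible to the slab/vmalloc hooks can register
 * its blocks explicitly. get_block() is an assumed backend helper.
 *
 *	void *my_alloc(size_t size)
 *	{
 *		void *p = get_block(size);
 *
 *		if (p)
 *			kmemleak_alloc(p, size, 1, GFP_KERNEL);
 *		return p;
 *	}
 *
 * With min_count == 1 the block is reported as a leak whenever a scan
 * finds no pointer to it.
 */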
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  902) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  903) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  904)  * kmemleak_alloc_percpu - register a newly allocated __percpu object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  905)  * @ptr:	__percpu pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  906)  * @size:	size of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  907)  * @gfp:	flags used for kmemleak internal memory allocations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  908)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  909)  * This function is called from the kernel percpu allocator when a new object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  910)  * (memory block) is allocated (alloc_percpu).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  911)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  912) void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  913) 				 gfp_t gfp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  914) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  915) 	unsigned int cpu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  916) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  917) 	pr_debug("%s(0x%p, %zu)\n", __func__, ptr, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  918) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  919) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  920) 	 * Percpu allocations are only scanned and not reported as leaks
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  921) 	 * (min_count is set to 0).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  922) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  923) 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  924) 		for_each_possible_cpu(cpu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  925) 			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  926) 				      size, 0, gfp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  927) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  928) EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu);
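
/*
 * For example, alloc_percpu(struct foo) ends up here and one
 * kmemleak_object is created per possible CPU, covering
 * per_cpu_ptr(ptr, cpu) for each of them; with min_count == 0 none of
 * these objects is ever reported as a leak.
 */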
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  929) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  930) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  931)  * kmemleak_vmalloc - register a newly vmalloc'ed object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  932)  * @area:	pointer to vm_struct
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  933)  * @size:	size of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  934)  * @gfp:	__vmalloc() flags used for kmemleak internal memory allocations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  935)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  936)  * This function is called from the vmalloc() kernel allocator when a new
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  937)  * object (memory block) is allocated.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  938)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  939) void __ref kmemleak_vmalloc(const struct vm_struct *area, size_t size, gfp_t gfp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  940) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  941) 	pr_debug("%s(0x%p, %zu)\n", __func__, area, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  942) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  943) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  944) 	 * A min_count = 2 is needed because vm_struct contains a reference to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  945) 	 * the virtual address of the vmalloc'ed block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  946) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  947) 	if (kmemleak_enabled) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  948) 		create_object((unsigned long)area->addr, size, 2, gfp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  949) 		object_set_excess_ref((unsigned long)area,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  950) 				      (unsigned long)area->addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  951) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  952) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  953) EXPORT_SYMBOL_GPL(kmemleak_vmalloc);
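
/*
 * The excess_ref link set above covers the case where the only live
 * pointer is to the vm_struct rather than to the mapped range: when
 * scanning later finds a reference to 'area', it is propagated to
 * area->addr as well (see the excess_ref handling in scan_block()).
 */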
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  954) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  955) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  956)  * kmemleak_free - unregister a previously registered object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  957)  * @ptr:	pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  958)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  959)  * This function is called from the kernel allocators when an object (memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  960)  * block) is freed (kmem_cache_free, kfree, vfree etc.).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  961)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  962) void __ref kmemleak_free(const void *ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  963) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  964) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  965) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  966) 	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  967) 		delete_object_full((unsigned long)ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  968) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  969) EXPORT_SYMBOL_GPL(kmemleak_free);
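
/*
 * The matching teardown for the my_alloc() sketch above (put_block() is
 * again an assumed backend helper):
 *
 *	void my_free(void *p)
 *	{
 *		kmemleak_free(p);
 *		put_block(p);
 *	}
 */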
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  970) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  971) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  972)  * kmemleak_free_part - partially unregister a previously registered object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  973)  * @ptr:	pointer to the beginning of or inside the object. This also
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  974)  *		represents the start of the range to be freed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  975)  * @size:	size to be unregistered
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  976)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  977)  * This function is called when only a part of a memory block is freed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  978)  * (usually from the bootmem allocator).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  979)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  980) void __ref kmemleak_free_part(const void *ptr, size_t size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  981) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  982) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  983) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  984) 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  985) 		delete_object_part((unsigned long)ptr, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  986) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  987) EXPORT_SYMBOL_GPL(kmemleak_free_part);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  988) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  989) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  990)  * kmemleak_free_percpu - unregister a previously registered __percpu object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  991)  * @ptr:	__percpu pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  992)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  993)  * This function is called from the kernel percpu allocator when an object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  994)  * (memory block) is freed (free_percpu).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  995)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  996) void __ref kmemleak_free_percpu(const void __percpu *ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  997) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  998) 	unsigned int cpu;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  999) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1000) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1001) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1002) 	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1003) 		for_each_possible_cpu(cpu)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1004) 			delete_object_full((unsigned long)per_cpu_ptr(ptr,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1005) 								      cpu));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1006) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1007) EXPORT_SYMBOL_GPL(kmemleak_free_percpu);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1008) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1009) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1010)  * kmemleak_update_trace - update object allocation stack trace
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1011)  * @ptr:	pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1012)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1013)  * Override the object allocation stack trace in cases where the actual
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1014)  * allocation site is not useful.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1015)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1016) void __ref kmemleak_update_trace(const void *ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1017) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1018) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1019) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1020) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1021) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1022) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1023) 	if (!kmemleak_enabled || IS_ERR_OR_NULL(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1024) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1025) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1026) 	object = find_and_get_object((unsigned long)ptr, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1027) 	if (!object) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1028) #ifdef DEBUG
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1029) 		kmemleak_warn("Updating stack trace for unknown object at %p\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1030) 			      ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1031) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1032) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1033) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1034) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1035) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1036) 	object->trace_len = __save_stack_trace(object->trace);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1037) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1038) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1039) 	put_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1040) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1041) EXPORT_SYMBOL(kmemleak_update_trace);
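
/*
 * Example (hypothetical pool helper): objects allocated in one central
 * place can be re-stamped at the hand-out site so that leak reports
 * point at the real owner rather than the pool:
 *
 *	obj = fetch_from_pool();	// allocated long ago
 *	kmemleak_update_trace(obj);	// blame this call site instead
 */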
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1042) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1043) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1044)  * kmemleak_not_leak - mark an allocated object as false positive
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1045)  * @ptr:	pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1046)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1047)  * Calling this function on an object will cause the memory block to no longer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1048)  * be reported as a leak and to always be scanned.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1049)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1050) void __ref kmemleak_not_leak(const void *ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1051) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1052) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1053) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1054) 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1055) 		make_gray_object((unsigned long)ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1056) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1057) EXPORT_SYMBOL(kmemleak_not_leak);
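
/*
 * Example (illustrative): a pointer kept only in mangled form cannot be
 * found by the scanner and would be reported as a leak, so it is
 * annotated as a known false positive. OBFUSCATE() is an assumed macro.
 *
 *	ptr = kmalloc(size, GFP_KERNEL);
 *	table->cookie = OBFUSCATE(ptr);	// no plain pointer kept anywhere
 *	kmemleak_not_leak(ptr);
 */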
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1058) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1059) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1060)  * kmemleak_ignore - ignore an allocated object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1061)  * @ptr:	pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1062)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1063)  * Calling this function on an object will cause the memory block to be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1064)  * ignored (not scanned and not reported as a leak). This is usually done when
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1065)  * it is known that the corresponding block is not a leak and does not contain
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1066)  * any references to other allocated memory blocks.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1067)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1068) void __ref kmemleak_ignore(const void *ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1069) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1070) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1071) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1072) 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1073) 		make_black_object((unsigned long)ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1074) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1075) EXPORT_SYMBOL(kmemleak_ignore);
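
/*
 * Unlike kmemleak_not_leak() above, painting the object black also stops
 * it from being scanned, which suits blocks holding raw data (e.g. a
 * firmware image) that cannot contain pointers worth following.
 */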
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1076) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1077) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1078)  * kmemleak_scan_area - limit the range to be scanned in an allocated object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1079)  * @ptr:	pointer to the beginning of or inside the object. This also
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1080)  *		represents the start of the scan area
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1081)  * @size:	size of the scan area
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1082)  * @gfp:	kmalloc() flags used for kmemleak internal memory allocations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1083)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1084)  * This function is used when it is known that only certain parts of an object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1085)  * contain references to other objects. Kmemleak will only scan these areas,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1086)  * reducing the number of false negatives.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1087)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1088) void __ref kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1089) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1090) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1091) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1092) 	if (kmemleak_enabled && ptr && size && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1093) 		add_scan_area((unsigned long)ptr, size, gfp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1094) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1095) EXPORT_SYMBOL(kmemleak_scan_area);
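
/*
 * Example (hypothetical layout): a page-sized buffer where only a small
 * header holds kernel pointers; limiting the scan keeps random payload
 * bytes from being misread as references. 'struct hdr' is assumed.
 *
 *	buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
 *	kmemleak_scan_area(buf, sizeof(struct hdr), GFP_KERNEL);
 */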
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1096) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1097) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1098)  * kmemleak_no_scan - do not scan an allocated object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1099)  * @ptr:	pointer to beginning of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1100)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1101)  * This function notifies kmemleak not to scan the given memory block. Useful
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1102)  * in situations where it is known that the given object does not contain any
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1103)  * references to other objects. Kmemleak will not scan such objects, reducing
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1104)  * the number of false negatives.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1105)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1106) void __ref kmemleak_no_scan(const void *ptr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1107) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1108) 	pr_debug("%s(0x%p)\n", __func__, ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1109) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1110) 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1111) 		object_no_scan((unsigned long)ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1112) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1113) EXPORT_SYMBOL(kmemleak_no_scan);
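
/*
 * Example (illustrative): a ring buffer that only ever holds DMA/bus
 * addresses written by hardware has nothing for the scanner to follow.
 * RING_BYTES is an assumed constant.
 *
 *	ring = kmalloc(RING_BYTES, GFP_KERNEL);
 *	kmemleak_no_scan(ring);
 */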
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1114) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1115) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1116)  * kmemleak_alloc_phys - similar to kmemleak_alloc but taking a physical
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1117)  *			 address argument
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1118)  * @phys:	physical address of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1119)  * @size:	size of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1120)  * @min_count:	minimum number of references to this object.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1121)  *              See kmemleak_alloc()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1122)  * @gfp:	kmalloc() flags used for kmemleak internal memory allocations
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1123)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1124) void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1125) 			       gfp_t gfp)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1126) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1127) 	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1128) 		kmemleak_alloc(__va(phys), size, min_count, gfp);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1129) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1130) EXPORT_SYMBOL(kmemleak_alloc_phys);
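
/*
 * The lowmem check above (repeated in the other *_phys variants below)
 * is what makes the __va() translation safe: only physical addresses
 * backed by the direct mapping are passed on; highmem pages are skipped.
 */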
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1131) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1132) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1133)  * kmemleak_free_part_phys - similar to kmemleak_free_part but taking a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1134)  *			     physical address argument
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1135)  * @phys:	physical address of the beginning of or inside an object. This
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1136)  *		also represents the start of the range to be freed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1137)  * @size:	size to be unregistered
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1138)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1139) void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1140) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1141) 	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1142) 		kmemleak_free_part(__va(phys), size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1143) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1144) EXPORT_SYMBOL(kmemleak_free_part_phys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1145) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1146) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1147)  * kmemleak_not_leak_phys - similar to kmemleak_not_leak but taking a physical
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1148)  *			    address argument
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1149)  * @phys:	physical address of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1150)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1151) void __ref kmemleak_not_leak_phys(phys_addr_t phys)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1152) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1153) 	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1154) 		kmemleak_not_leak(__va(phys));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1155) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1156) EXPORT_SYMBOL(kmemleak_not_leak_phys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1157) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1158) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1159)  * kmemleak_ignore_phys - similar to kmemleak_ignore but taking a physical
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1160)  *			  address argument
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1161)  * @phys:	physical address of the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1162)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1163) void __ref kmemleak_ignore_phys(phys_addr_t phys)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1164) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1165) 	if (!IS_ENABLED(CONFIG_HIGHMEM) || PHYS_PFN(phys) < max_low_pfn)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1166) 		kmemleak_ignore(__va(phys));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1167) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1168) EXPORT_SYMBOL(kmemleak_ignore_phys);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1169) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1170) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1171)  * Update an object's checksum and return true if it was modified.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1172)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1173) static bool update_checksum(struct kmemleak_object *object)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1174) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1175) 	u32 old_csum = object->checksum;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1176) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1177) 	kasan_disable_current();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1178) 	kcsan_disable_current();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1179) 	object->checksum = crc32(0, kasan_reset_tag((void *)object->pointer), object->size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1180) 	kasan_enable_current();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1181) 	kcsan_enable_current();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1182) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1183) 	return object->checksum != old_csum;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1184) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1185) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1186) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1187)  * Update an object's references. object->lock must be held by the caller.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1188)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1189) static void update_refs(struct kmemleak_object *object)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1190) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1191) 	if (!color_white(object)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1192) 		/* non-orphan, ignored or new */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1193) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1194) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1195) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1196) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1197) 	 * Increase the object's reference count (number of pointers to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1198) 	 * memory block). If this count reaches the required minimum, the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1199) 	 * object's color will become gray and it will be added to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1200) 	 * gray_list.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1201) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1202) 	object->count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1203) 	if (color_gray(object)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1204) 		/* put_object() called when removing from gray_list */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1205) 		WARN_ON(!get_object(object));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1206) 		list_add_tail(&object->gray_list, &gray_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1207) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1208) }
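
/*
 * Worked example: an object created with min_count == 1 starts each scan
 * white (its count is reset to 0 in kmemleak_scan()). The first pointer
 * found to it raises count to 1, color_gray() becomes true and the
 * object joins gray_list so its own memory is scanned in turn; further
 * references merely increase count.
 */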
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1209) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1210) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1211)  * Memory scanning is a long process and it needs to be interruptible. This
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1212)  * function checks whether such an interrupt condition has occurred.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1213)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1214) static int scan_should_stop(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1215) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1216) 	if (!kmemleak_enabled)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1217) 		return 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1218) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1219) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1220) 	 * This function may be called from either process or kthread context,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1221) 	 * hence the need to check for both stop conditions.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1222) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1223) 	if (current->mm)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1224) 		return signal_pending(current);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1225) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1226) 		return kthread_should_stop();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1229) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1230) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1231) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1232)  * Scan a memory block (exclusive range) for valid pointers and add those
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1233)  * found to the gray list.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1234)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1235) static void scan_block(void *_start, void *_end,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1236) 		       struct kmemleak_object *scanned)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1237) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1238) 	unsigned long *ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1239) 	unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1240) 	unsigned long *end = _end - (BYTES_PER_POINTER - 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1241) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1242) 	unsigned long untagged_ptr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1243) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1244) 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1245) 	for (ptr = start; ptr < end; ptr++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1246) 		struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1247) 		unsigned long pointer;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1248) 		unsigned long excess_ref;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1249) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1250) 		if (scan_should_stop())
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1251) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1252) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1253) 		kasan_disable_current();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1254) 		pointer = *(unsigned long *)kasan_reset_tag((void *)ptr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1255) 		kasan_enable_current();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1256) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1257) 		untagged_ptr = (unsigned long)kasan_reset_tag((void *)pointer);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1258) 		if (untagged_ptr < min_addr || untagged_ptr >= max_addr)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1259) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1260) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1261) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1262) 		 * No need for get_object() here since we hold kmemleak_lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1263) 		 * object->use_count cannot be dropped to 0 while the object
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1264) 		 * is still present in object_tree_root and object_list
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1265) 		 * (with updates protected by kmemleak_lock).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1266) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1267) 		object = lookup_object(pointer, 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1268) 		if (!object)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1269) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1270) 		if (object == scanned)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1271) 			/* self referenced, ignore */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1272) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1273) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1274) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1275) 		 * Avoid the lockdep recursive warning on object->lock being
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1276) 		 * previously acquired in scan_object(). These locks are
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1277) 		 * enclosed by scan_mutex.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1278) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1279) 		raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1280) 		/* only pass surplus references (object already gray) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1281) 		if (color_gray(object)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1282) 			excess_ref = object->excess_ref;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1283) 			/* no need for update_refs() if object already gray */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1284) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1285) 			excess_ref = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1286) 			update_refs(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1287) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1288) 		raw_spin_unlock(&object->lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1289) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1290) 		if (excess_ref) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1291) 			object = lookup_object(excess_ref, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1292) 			if (!object)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1293) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1294) 			if (object == scanned)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1295) 				/* circular reference, ignore */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1296) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1297) 			raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1298) 			update_refs(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1299) 			raw_spin_unlock(&object->lock);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1300) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1301) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1302) 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1303) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1304) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1305) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1306)  * Scan a large memory block in MAX_SCAN_SIZE chunks to reduce the latency.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1307)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1308) #ifdef CONFIG_SMP
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1309) static void scan_large_block(void *start, void *end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1310) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1311) 	void *next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1312) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1313) 	while (start < end) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1314) 		next = min(start + MAX_SCAN_SIZE, end);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1315) 		scan_block(start, next, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1316) 		start = next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1317) 		cond_resched();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1318) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1319) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1320) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1321) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1322) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1323)  * Scan a memory block corresponding to a kmemleak_object. The caller must
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1324)  * ensure that object->use_count >= 1.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1325)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1326) static void scan_object(struct kmemleak_object *object)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1327) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1328) 	struct kmemleak_scan_area *area;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1329) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1330) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1331) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1332) 	 * Once the object->lock is acquired, the corresponding memory block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1333) 	 * cannot be freed (the same lock is acquired in delete_object).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1334) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1335) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1336) 	if (object->flags & OBJECT_NO_SCAN)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1337) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1338) 	if (!(object->flags & OBJECT_ALLOCATED))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1339) 		/* already freed object */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1340) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1341) 	if (hlist_empty(&object->area_list) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1342) 	    object->flags & OBJECT_FULL_SCAN) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1343) 		void *start = (void *)object->pointer;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1344) 		void *end = (void *)(object->pointer + object->size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1345) 		void *next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1346) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1347) 		do {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1348) 			next = min(start + MAX_SCAN_SIZE, end);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1349) 			scan_block(start, next, object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1350) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1351) 			start = next;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1352) 			if (start >= end)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1353) 				break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1354) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1355) 			raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1356) 			cond_resched();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1357) 			raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1358) 		} while (object->flags & OBJECT_ALLOCATED);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1359) 	} else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1360) 		hlist_for_each_entry(area, &object->area_list, node)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1361) 			scan_block((void *)area->start,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1362) 				   (void *)(area->start + area->size),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1363) 				   object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1364) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1365) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1366) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1367) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1368) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1369)  * Scan the objects already referenced (gray objects). More objects will be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1370)  * referenced and, if there are no memory leaks, all the objects are scanned.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1371)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1372) static void scan_gray_list(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1373) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1374) 	struct kmemleak_object *object, *tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1375) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1376) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1377) 	 * The list traversal is safe for both tail additions and removals
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1378) 	 * from inside the loop. The kmemleak objects cannot be freed from
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1379) 	 * outside the loop because their use_count was incremented.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1380) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1381) 	object = list_entry(gray_list.next, typeof(*object), gray_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1382) 	while (&object->gray_list != &gray_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1383) 		cond_resched();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1384) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1385) 		/* may add new objects to the list */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1386) 		if (!scan_should_stop())
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1387) 			scan_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1388) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1389) 		tmp = list_entry(object->gray_list.next, typeof(*object),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1390) 				 gray_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1391) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1392) 		/* remove the object from the list and release it */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1393) 		list_del(&object->gray_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1394) 		put_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1395) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1396) 		object = tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1397) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1398) 	WARN_ON(!list_empty(&gray_list));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1399) }
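
/*
 * Taken together, scan_block(), scan_object() and scan_gray_list()
 * implement a breadth-first marking pass: gray_list is the work queue,
 * update_refs() colors newly reachable objects gray, and whatever is
 * still white once the list drains is a leak candidate.
 */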
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1400) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1401) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1402)  * Scan data sections and all the referenced memory blocks allocated via the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1403)  * kernel's standard allocators. This function must be called with the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1404)  * scan_mutex held.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1405)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1406) static void kmemleak_scan(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1407) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1408) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1409) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1410) 	struct zone *zone;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1411) 	int __maybe_unused i;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1412) 	int new_leaks = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1413) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1414) 	jiffies_last_scan = jiffies;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1415) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1416) 	/* prepare the kmemleak_objects */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1417) 	rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1418) 	list_for_each_entry_rcu(object, &object_list, object_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1419) 		raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1420) #ifdef DEBUG
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1421) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1422) 		 * With a few exceptions there should be a maximum of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1423) 		 * 1 reference to any object at this point.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1424) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1425) 		if (atomic_read(&object->use_count) > 1) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1426) 			pr_debug("object->use_count = %d\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1427) 				 atomic_read(&object->use_count));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1428) 			dump_object_info(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1429) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1430) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1431) 		/* reset the reference count (whiten the object) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1432) 		object->count = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1433) 		if (color_gray(object) && get_object(object))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1434) 			list_add_tail(&object->gray_list, &gray_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1435) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1436) 		raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1437) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1438) 	rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1439) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1440) #ifdef CONFIG_SMP
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1441) 	/* per-cpu sections scanning */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1442) 	for_each_possible_cpu(i)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1443) 		scan_large_block(__per_cpu_start + per_cpu_offset(i),
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1444) 				 __per_cpu_end + per_cpu_offset(i));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1445) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1446) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1447) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1448) 	 * Struct page scanning for each node.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1449) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1450) 	get_online_mems();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1451) 	for_each_populated_zone(zone) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1452) 		unsigned long start_pfn = zone->zone_start_pfn;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1453) 		unsigned long end_pfn = zone_end_pfn(zone);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1454) 		unsigned long pfn;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1455) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1456) 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1457) 			struct page *page = pfn_to_online_page(pfn);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1458) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1459) 			if (!page)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1460) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1461) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1462) 			/* only scan pages belonging to this zone */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1463) 			if (page_zone(page) != zone)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1464) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1465) 			/* only scan if page is in use */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1466) 			if (page_count(page) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1467) 				continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1468) 			scan_block(page, page + 1, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1469) 			if (!(pfn & 63))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1470) 				cond_resched();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1471) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1472) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1473) 	put_online_mems();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1474) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1475) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1476) 	 * Scanning the task stacks: stale stack values can mask real leaks (false negatives).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1477) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1478) 	if (kmemleak_stack_scan) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1479) 		struct task_struct *p, *g;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1480) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1481) 		rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1482) 		for_each_process_thread(g, p) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1483) 			void *stack = try_get_task_stack(p);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1484) 			if (stack) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1485) 				scan_block(stack, stack + THREAD_SIZE, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1486) 				put_task_stack(p);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1487) 			}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1488) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1489) 		rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1490) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1491) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1492) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1493) 	 * Scan the objects already referenced from the sections scanned
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1494) 	 * above.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1495) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1496) 	scan_gray_list();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1497) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1498) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1499) 	 * Check for new or unreferenced objects modified since the previous
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1500) 	 * scan and color them gray until the next scan.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1501) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1502) 	rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1503) 	list_for_each_entry_rcu(object, &object_list, object_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1504) 		raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1505) 		if (color_white(object) && (object->flags & OBJECT_ALLOCATED)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1506) 		    && update_checksum(object) && get_object(object)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1507) 			/* color it gray temporarily */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1508) 			object->count = object->min_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1509) 			list_add_tail(&object->gray_list, &gray_list);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1510) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1511) 		raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1512) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1513) 	rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1514) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1515) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1516) 	 * Re-scan the gray list for modified unreferenced objects.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1517) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1518) 	scan_gray_list();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1519) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1520) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1521) 	 * If scanning was stopped, do not report any new unreferenced objects.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1522) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1523) 	if (scan_should_stop())
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1524) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1525) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1526) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1527) 	 * Scanning result reporting.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1528) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1529) 	rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1530) 	list_for_each_entry_rcu(object, &object_list, object_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1531) 		raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1532) 		if (unreferenced_object(object) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1533) 		    !(object->flags & OBJECT_REPORTED)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1534) 			object->flags |= OBJECT_REPORTED;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1535) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1536) 			if (kmemleak_verbose)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1537) 				print_unreferenced(NULL, object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1538) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1539) 			new_leaks++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1540) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1541) 		raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1542) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1543) 	rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1544) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1545) 	if (new_leaks) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1546) 		kmemleak_found_leaks = true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1547) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1548) 		pr_info("%d new suspected memory leaks (see /sys/kernel/debug/kmemleak)\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1549) 			new_leaks);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1550) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1551) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1552) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1553) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1554) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1555)  * Thread function performing automatic memory scanning. Unreferenced objects
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1556)  * found at the end of a scan are reported, but each object only once.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1557)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1558) static int kmemleak_scan_thread(void *arg)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1559) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1560) 	static int first_run = IS_ENABLED(CONFIG_DEBUG_KMEMLEAK_AUTO_SCAN);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1561) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1562) 	pr_info("Automatic memory scanning thread started\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1563) 	set_user_nice(current, 10);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1564) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1565) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1566) 	 * Wait before the first scan to allow the system to fully initialize.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1567) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1568) 	if (first_run) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1569) 		signed long timeout = msecs_to_jiffies(SECS_FIRST_SCAN * 1000);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1570) 		first_run = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1571) 		while (timeout && !kthread_should_stop())
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1572) 			timeout = schedule_timeout_interruptible(timeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1573) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1574) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1575) 	while (!kthread_should_stop()) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1576) 		signed long timeout = jiffies_scan_wait;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1577) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1578) 		mutex_lock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1579) 		kmemleak_scan();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1580) 		mutex_unlock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1581) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1582) 		/* wait before the next scan */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1583) 		while (timeout && !kthread_should_stop())
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1584) 			timeout = schedule_timeout_interruptible(timeout);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1585) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1586) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1587) 	pr_info("Automatic memory scanning thread ended\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1588) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1589) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1590) }
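/*
 * Illustrative sketch (not part of kmemleak): the wait loop above, sleeping
 * in interruptible chunks and re-checking kthread_should_stop(), is the
 * standard way to make a periodic kthread stoppable without waiting out the
 * full period. A minimal standalone module using the same pattern could look
 * as follows; the module and all "demo_*" names are hypothetical.
 */
#if 0	/* illustrative only */
#include <linux/err.h>
#include <linux/jiffies.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>

static struct task_struct *demo_thread;

static int demo_thread_fn(void *arg)
{
	while (!kthread_should_stop()) {
		signed long timeout = msecs_to_jiffies(10 * 1000);

		pr_info("demo: periodic work\n");
		/* sleep in chunks so kthread_stop() wakes us immediately */
		while (timeout && !kthread_should_stop())
			timeout = schedule_timeout_interruptible(timeout);
	}
	return 0;
}

static int __init demo_init(void)
{
	demo_thread = kthread_run(demo_thread_fn, NULL, "demo_thread");
	return PTR_ERR_OR_ZERO(demo_thread);
}

static void __exit demo_exit(void)
{
	kthread_stop(demo_thread);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
#endif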
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1591) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1592) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1593)  * Start the automatic memory scanning thread. This function must be called
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1594)  * with the scan_mutex held.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1595)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1596) static void start_scan_thread(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1597) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1598) 	if (scan_thread)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1599) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1600) 	scan_thread = kthread_run(kmemleak_scan_thread, NULL, "kmemleak");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1601) 	if (IS_ERR(scan_thread)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1602) 		pr_warn("Failed to create the scan thread\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1603) 		scan_thread = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1604) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1605) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1606) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1607) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1608)  * Stop the automatic memory scanning thread.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1609)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1610) static void stop_scan_thread(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1611) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1612) 	if (scan_thread) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1613) 		kthread_stop(scan_thread);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1614) 		scan_thread = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1615) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1616) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1617) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1618) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1619)  * Iterate over the object_list and return the first valid object at or after
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1620)  * the required position with its use_count incremented. The function takes
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1621)  * scan_mutex so that a memory scan cannot run concurrently with the read.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1622)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1623) static void *kmemleak_seq_start(struct seq_file *seq, loff_t *pos)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1624) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1625) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1626) 	loff_t n = *pos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1627) 	int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1628) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1629) 	err = mutex_lock_interruptible(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1630) 	if (err < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1631) 		return ERR_PTR(err);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1632) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1633) 	rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1634) 	list_for_each_entry_rcu(object, &object_list, object_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1635) 		if (n-- > 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1636) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1637) 		if (get_object(object))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1638) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1639) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1640) 	object = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1641) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1642) 	return object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1643) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1644) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1645) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1646)  * Return the next object in the object_list. The function decrements the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1647)  * use_count of the previous object and increases that of the next one.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1648)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1649) static void *kmemleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1650) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1651) 	struct kmemleak_object *prev_obj = v;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1652) 	struct kmemleak_object *next_obj = NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1653) 	struct kmemleak_object *obj = prev_obj;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1654) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1655) 	++(*pos);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1656) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1657) 	list_for_each_entry_continue_rcu(obj, &object_list, object_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1658) 		if (get_object(obj)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1659) 			next_obj = obj;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1660) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1661) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1662) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1663) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1664) 	put_object(prev_obj);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1665) 	return next_obj;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1666) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1667) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1668) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1669)  * Decrement the use_count of the last object returned, if any.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1670)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1671) static void kmemleak_seq_stop(struct seq_file *seq, void *v)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1672) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1673) 	if (!IS_ERR(v)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1674) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1675) 		 * kmemleak_seq_start may return ERR_PTR if the scan_mutex
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1676) 		 * waiting was interrupted, so only release it if !IS_ERR.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1677) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1678) 		rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1679) 		mutex_unlock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1680) 		if (v)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1681) 			put_object(v);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1682) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1683) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1684) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1685) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1686)  * Print the information for an unreferenced object to the seq file.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1687)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1688) static int kmemleak_seq_show(struct seq_file *seq, void *v)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1689) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1690) 	struct kmemleak_object *object = v;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1691) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1692) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1693) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1694) 	if ((object->flags & OBJECT_REPORTED) && unreferenced_object(object))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1695) 		print_unreferenced(seq, object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1696) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1697) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1698) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1699) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1700) static const struct seq_operations kmemleak_seq_ops = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1701) 	.start = kmemleak_seq_start,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1702) 	.next  = kmemleak_seq_next,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1703) 	.stop  = kmemleak_seq_stop,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1704) 	.show  = kmemleak_seq_show,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1705) };
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1706) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1707) static int kmemleak_open(struct inode *inode, struct file *file)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1708) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1709) 	return seq_open(file, &kmemleak_seq_ops);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1710) }
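/*
 * Illustrative sketch (not part of kmemleak): kmemleak_seq_start/next/stop/
 * show above implement the standard seq_file iterator protocol. A minimal
 * version of the same protocol over a static array could look as follows;
 * all "demo_*" names are hypothetical.
 */
#if 0	/* illustrative only */
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>

static const char *const demo_items[] = { "alpha", "beta", "gamma" };

/* return the element at *pos, or NULL to end the sequence */
static void *demo_seq_start(struct seq_file *seq, loff_t *pos)
{
	if (*pos >= ARRAY_SIZE(demo_items))
		return NULL;
	return (void *)&demo_items[*pos];
}

/* advance *pos and return the next element (NULL at the end) */
static void *demo_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	++(*pos);
	return demo_seq_start(seq, pos);
}

/* release resources taken in ->start; nothing to do in this sketch */
static void demo_seq_stop(struct seq_file *seq, void *v)
{
}

/* format one element into the output buffer */
static int demo_seq_show(struct seq_file *seq, void *v)
{
	seq_printf(seq, "%s\n", *(const char *const *)v);
	return 0;
}

static const struct seq_operations demo_seq_ops = {
	.start = demo_seq_start,
	.next  = demo_seq_next,
	.stop  = demo_seq_stop,
	.show  = demo_seq_show,
};

static int demo_open(struct inode *inode, struct file *file)
{
	return seq_open(file, &demo_seq_ops);
}
#endif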
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1711) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1712) static int dump_str_object_info(const char *str)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1713) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1714) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1715) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1716) 	unsigned long addr;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1717) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1718) 	if (kstrtoul(str, 0, &addr))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1719) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1720) 	object = find_and_get_object(addr, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1721) 	if (!object) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1722) 		pr_info("Unknown object at 0x%08lx\n", addr);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1723) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1724) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1725) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1726) 	raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1727) 	dump_object_info(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1728) 	raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1729) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1730) 	put_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1731) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1732) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1733) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1734) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1735)  * We use grey instead of black to ensure we can do future scans on the same
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1736)  * objects. If we did not scan these objects again, they could hold the only
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1737)  * references to objects allocated later, and those newer objects would then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1738)  * be reported as false positives.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1739)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1740) static void kmemleak_clear(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1741) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1742) 	struct kmemleak_object *object;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1743) 	unsigned long flags;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1744) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1745) 	rcu_read_lock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1746) 	list_for_each_entry_rcu(object, &object_list, object_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1747) 		raw_spin_lock_irqsave(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1748) 		if ((object->flags & OBJECT_REPORTED) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1749) 		    unreferenced_object(object))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1750) 			__paint_it(object, KMEMLEAK_GREY);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1751) 		raw_spin_unlock_irqrestore(&object->lock, flags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1752) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1753) 	rcu_read_unlock();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1754) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1755) 	kmemleak_found_leaks = false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1756) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1757) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1758) static void __kmemleak_do_cleanup(void);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1759) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1760) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1761)  * File write operation to configure kmemleak at run-time. The following
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1762)  * commands can be written to the /sys/kernel/debug/kmemleak file:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1763)  *   off	- disable kmemleak (irreversible)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1764)  *   stack=on	- enable the task stacks scanning
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1765)  *   stack=off	- disable the task stacks scanning
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1766)  *   scan=on	- start the automatic memory scanning thread
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1767)  *   scan=off	- stop the automatic memory scanning thread
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1768)  *   scan=...	- set the automatic memory scanning period in seconds (0 to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1769)  *		  disable it)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1770)  *   scan	- trigger a memory scan
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1771)  *   clear	- mark all current reported unreferenced kmemleak objects as
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1772)  *		  grey to ignore printing them, or free all kmemleak objects
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1773)  *		  if kmemleak has been disabled.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1774)  *   dump=...	- dump information about the object found at the given address
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1775)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1776) static ssize_t kmemleak_write(struct file *file, const char __user *user_buf,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1777) 			      size_t size, loff_t *ppos)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1778) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1779) 	char buf[64];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1780) 	int buf_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1781) 	int ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1782) 
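	/* leave room for the NUL terminator added below */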
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1783) 	buf_size = min(size, (sizeof(buf) - 1));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1784) 	if (strncpy_from_user(buf, user_buf, buf_size) < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1785) 		return -EFAULT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1786) 	buf[buf_size] = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1787) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1788) 	ret = mutex_lock_interruptible(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1789) 	if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1790) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1791) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1792) 	if (strncmp(buf, "clear", 5) == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1793) 		if (kmemleak_enabled)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1794) 			kmemleak_clear();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1795) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1796) 			__kmemleak_do_cleanup();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1797) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1798) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1799) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1800) 	if (!kmemleak_enabled) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1801) 		ret = -EPERM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1802) 		goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1803) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1804) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1805) 	if (strncmp(buf, "off", 3) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1806) 		kmemleak_disable();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1807) 	else if (strncmp(buf, "stack=on", 8) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1808) 		kmemleak_stack_scan = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1809) 	else if (strncmp(buf, "stack=off", 9) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1810) 		kmemleak_stack_scan = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1811) 	else if (strncmp(buf, "scan=on", 7) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1812) 		start_scan_thread();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1813) 	else if (strncmp(buf, "scan=off", 8) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1814) 		stop_scan_thread();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1815) 	else if (strncmp(buf, "scan=", 5) == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1816) 		unsigned long secs;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1817) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1818) 		ret = kstrtoul(buf + 5, 0, &secs);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1819) 		if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1820) 			goto out;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1821) 		stop_scan_thread();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1822) 		if (secs) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1823) 			jiffies_scan_wait = msecs_to_jiffies(secs * 1000);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1824) 			start_scan_thread();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1825) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1826) 	} else if (strncmp(buf, "scan", 4) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1827) 		kmemleak_scan();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1828) 	else if (strncmp(buf, "dump=", 5) == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1829) 		ret = dump_str_object_info(buf + 5);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1830) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1831) 		ret = -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1832) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1833) out:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1834) 	mutex_unlock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1835) 	if (ret < 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1836) 		return ret;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1837) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1838) 	/* ignore the rest of the buffer, only one command at a time */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1839) 	*ppos += size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1840) 	return size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1841) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1842) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1843) static const struct file_operations kmemleak_fops = {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1844) 	.owner		= THIS_MODULE,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1845) 	.open		= kmemleak_open,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1846) 	.read		= seq_read,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1847) 	.write		= kmemleak_write,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1848) 	.llseek		= seq_lseek,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1849) 	.release	= seq_release,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1850) };
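/*
 * Illustrative sketch (not part of the kernel tree): a minimal user-space
 * program driving the interface above, one command per write() and results
 * via read(). Assumes debugfs is mounted at /sys/kernel/debug.
 */
#if 0	/* illustrative only, user-space code */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd;

	/* trigger a scan; other commands: "clear", "stack=off", "scan=600" */
	fd = open("/sys/kernel/debug/kmemleak", O_WRONLY);
	if (fd < 0 || write(fd, "scan", 4) < 0) {
		perror("kmemleak");
		return 1;
	}
	close(fd);

	/* reading the file lists the suspected leaks found so far */
	fd = open("/sys/kernel/debug/kmemleak", O_RDONLY);
	if (fd < 0) {
		perror("kmemleak");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);
	close(fd);
	return 0;
}
#endif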
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1851) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1852) static void __kmemleak_do_cleanup(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1853) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1854) 	struct kmemleak_object *object, *tmp;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1855) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1856) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1857) 	 * Kmemleak has already been disabled: no need for RCU list traversal
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1858) 	 * or for holding kmemleak_lock.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1859) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1860) 	list_for_each_entry_safe(object, tmp, &object_list, object_list) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1861) 		__remove_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1862) 		__delete_object(object);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1863) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1864) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1865) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1866) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1867)  * Stop the memory scanning thread and free the kmemleak internal objects,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1868)  * but only if no leaks were found (otherwise the object metadata still
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1869)  * carries useful information about the suspected leaks).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1870)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1871) static void kmemleak_do_cleanup(struct work_struct *work)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1872) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1873) 	stop_scan_thread();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1874) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1875) 	mutex_lock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1876) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1877) 	 * Once kmemleak_scan() is guaranteed to have stopped, it is safe to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1878) 	 * stop tracking object freeing. The ordering between stopping the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1879) 	 * scan thread and the memory accesses below is provided by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1880) 	 * kthread_stop().
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1881) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1882) 	kmemleak_free_enabled = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1883) 	mutex_unlock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1884) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1885) 	if (!kmemleak_found_leaks)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1886) 		__kmemleak_do_cleanup();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1887) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1888) 		pr_info("Kmemleak disabled without freeing internal data. Reclaim the memory with \"echo clear > /sys/kernel/debug/kmemleak\".\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1889) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1890) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1891) static DECLARE_WORK(cleanup_work, kmemleak_do_cleanup);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1892) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1893) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1894)  * Disable kmemleak. No memory allocation/freeing will be traced once this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1895)  * function is called. Disabling kmemleak is an irreversible operation.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1896)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1897) static void kmemleak_disable(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1898) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1899) 	/* atomically check whether it was already invoked */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1900) 	if (cmpxchg(&kmemleak_error, 0, 1))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1901) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1902) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1903) 	/* stop any memory operation tracing */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1904) 	kmemleak_enabled = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1905) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1906) 	/* check whether it is too early for a kernel thread */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1907) 	if (kmemleak_initialized)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1908) 		schedule_work(&cleanup_work);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1909) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1910) 		kmemleak_free_enabled = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1911) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1912) 	pr_info("Kernel memory leak detector disabled\n");
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1913) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1914) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1915) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1916)  * Allow boot-time kmemleak disabling ("kmemleak=off") or enabling ("kmemleak=on").
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1917)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1918) static int __init kmemleak_boot_config(char *str)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1919) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1920) 	if (!str)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1921) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1922) 	if (strcmp(str, "off") == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1923) 		kmemleak_disable();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1924) 	else if (strcmp(str, "on") == 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1925) 		kmemleak_skip_disable = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1926) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1927) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1928) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1929) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1930) early_param("kmemleak", kmemleak_boot_config);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1931) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1932) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1933)  * Kmemleak initialization.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1934)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1935) void __init kmemleak_init(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1936) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1937) #ifdef CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1938) 	if (!kmemleak_skip_disable) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1939) 		kmemleak_disable();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1940) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1941) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1942) #endif
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1943) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1944) 	if (kmemleak_error)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1945) 		return;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1946) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1947) 	jiffies_min_age = msecs_to_jiffies(MSECS_MIN_AGE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1948) 	jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1949) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1950) 	object_cache = KMEM_CACHE(kmemleak_object, SLAB_NOLEAKTRACE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1951) 	scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1952) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1953) 	/* register the data/bss sections */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1954) 	create_object((unsigned long)_sdata, _edata - _sdata,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1955) 		      KMEMLEAK_GREY, GFP_ATOMIC);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1956) 	create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1957) 		      KMEMLEAK_GREY, GFP_ATOMIC);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1958) 	/* only register .data..ro_after_init if not within .data */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1959) 	if (&__start_ro_after_init < &_sdata || &__end_ro_after_init > &_edata)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1960) 		create_object((unsigned long)__start_ro_after_init,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1961) 			      __end_ro_after_init - __start_ro_after_init,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1962) 			      KMEMLEAK_GREY, GFP_ATOMIC);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1963) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1964) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1965) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1966)  * Late initialization function.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1967)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1968) static int __init kmemleak_late_init(void)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1969) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1970) 	kmemleak_initialized = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1971) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1972) 	debugfs_create_file("kmemleak", 0644, NULL, NULL, &kmemleak_fops);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1973) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1974) 	if (kmemleak_error) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1975) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1976) 		 * Some error occurred and kmemleak was disabled. There is a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1977) 		 * small chance that kmemleak_disable() was called immediately
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1978) 		 * after setting kmemleak_initialized, in which case we may end up
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1979) 		 * with two clean-up threads, serialized by scan_mutex.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1980) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1981) 		schedule_work(&cleanup_work);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1982) 		return -ENOMEM;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1983) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1984) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1985) 	if (IS_ENABLED(CONFIG_DEBUG_KMEMLEAK_AUTO_SCAN)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1986) 		mutex_lock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1987) 		start_scan_thread();
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1988) 		mutex_unlock(&scan_mutex);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1989) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1990) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1991) 	pr_info("Kernel memory leak detector initialized (mem pool available: %d)\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1992) 		mem_pool_free_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1993) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1994) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1995) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1996) late_initcall(kmemleak_late_init);