.. _hugetlbfs_reserve:

=====================
Hugetlbfs Reservation
=====================

Overview
========

Huge pages as described at :ref:`hugetlbpage` are typically
preallocated for application use.  These huge pages are instantiated in a
task's address space at page fault time if the VMA indicates huge pages are
to be used.  If no huge page exists at page fault time, the task is sent
a SIGBUS and often dies an unhappy death.  Shortly after huge page support
was added, it was determined that it would be better to detect a shortage
of huge pages at mmap() time.  The idea is that if there were not enough
huge pages to cover the mapping, the mmap() would fail.  This was first
done with a simple check in the code at mmap() time to determine if there
were enough free huge pages to cover the mapping.  Like most things in the
kernel, the code has evolved over time.  However, the basic idea was to
'reserve' huge pages at mmap() time to ensure that huge pages would be
available for page faults in that mapping.  The description below attempts to
describe how huge page reserve processing is done in the v4.10 kernel.
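
For example, a userspace request like the following (an illustrative sketch,
not taken from the kernel sources) will see a huge page shortage reported as
an mmap() failure rather than as a SIGBUS at fault time::

	#include <sys/mman.h>
	#include <stdio.h>

	#define LEN	(8UL * 1024 * 1024)	/* assumes 2 MB huge pages are configured */

	int main(void)
	{
		/* Reservations are made here; with too few free huge pages
		 * the call fails (typically with ENOMEM) instead of the
		 * process faulting later. */
		void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		return 0;
	}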


Audience
========
This description is primarily targeted at kernel developers who are modifying
hugetlbfs code.


The Data Structures
===================

resv_huge_pages
	This is a global (per-hstate) count of reserved huge pages.  Reserved
	huge pages are only available to the task which reserved them.
	Therefore, the number of huge pages generally available is computed
	as (``free_huge_pages - resv_huge_pages``).
Reserve Map
	A reserve map is described by the structure::

		struct resv_map {
			struct kref refs;
			spinlock_t lock;
			struct list_head regions;
			long adds_in_progress;
			struct list_head region_cache;
			long region_cache_count;
		};

	There is one reserve map for each huge page mapping in the system.
	The regions list within the resv_map describes the regions within
	the mapping.  A region is described as::

		struct file_region {
			struct list_head link;
			long from;
			long to;
		};

	The 'from' and 'to' fields of the file region structure are huge page
	indices into the mapping.  Depending on the type of mapping, a
	region in the resv_map may indicate reservations exist for the
	range, or reservations do not exist.
Flags for MAP_PRIVATE Reservations
	These are stored in the bottom bits of the reservation map pointer
	(see the sketch following this list).

	``#define HPAGE_RESV_OWNER    (1UL << 0)``
		Indicates this task is the owner of the reservations
		associated with the mapping.
	``#define HPAGE_RESV_UNMAPPED (1UL << 1)``
		Indicates the task originally mapping this range (and creating
		reserves) has unmapped a page from this task (the child)
		due to a failed COW.
Page Flags
	The PagePrivate page flag is used to indicate that a huge page
	reservation must be restored when the huge page is freed.  More
	details will be discussed in the "Freeing Huge Pages" section.
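
The following is a hypothetical sketch of how the flag bits above can be
packed into the low bits of the reservation map pointer stored in
vma->vm_private_data.  The helper names are invented for illustration;
mm/hugetlb.c defines a similar HPAGE_RESV_MASK::

	#define HPAGE_RESV_MASK	(HPAGE_RESV_OWNER | HPAGE_RESV_UNMAPPED)

	/* Hypothetical helpers illustrating the pointer/flag encoding. */
	static struct resv_map *priv_to_resv_map(unsigned long priv)
	{
		return (struct resv_map *)(priv & ~HPAGE_RESV_MASK);
	}

	static unsigned long priv_to_resv_flags(unsigned long priv)
	{
		return priv & HPAGE_RESV_MASK;
	}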


Reservation Map Location (Private or Shared)
============================================

A huge page mapping or segment is either private or shared.  If private,
it is typically only available to a single address space (task).  If shared,
it can be mapped into multiple address spaces (tasks).  The location and
semantics of the reservation map are significantly different for the two types
of mappings.  Location differences are:

- For private mappings, the reservation map hangs off the VMA structure.
  Specifically, vma->vm_private_data.  This reserve map is created at the
  time the mapping (mmap(MAP_PRIVATE)) is created.
- For shared mappings, the reservation map hangs off the inode.  Specifically,
  inode->i_mapping->private_data.  Since shared mappings are always backed
  by files in the hugetlbfs filesystem, the hugetlbfs code ensures each inode
  contains a reservation map.  As a result, the reservation map is allocated
  when the inode is created.
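
A simplified sketch of the lookup that follows from these two cases is shown
below.  The helper name is invented for illustration; the in-tree equivalent
is vma_resv_map() in mm/hugetlb.c::

	static struct resv_map *mapping_resv_map(struct vm_area_struct *vma)
	{
		if (vma->vm_flags & VM_MAYSHARE) {
			/* Shared: the map hangs off the backing hugetlbfs inode. */
			struct inode *inode = file_inode(vma->vm_file);

			return (struct resv_map *)inode->i_mapping->private_data;
		}

		/* Private: the low bits of vm_private_data hold the flags. */
		return (struct resv_map *)((unsigned long)vma->vm_private_data &
					   ~(HPAGE_RESV_OWNER | HPAGE_RESV_UNMAPPED));
	}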


Creating Reservations
=====================
Reservations are created when a huge page backed shared memory segment is
created (shmget(SHM_HUGETLB)) or a mapping is created via mmap(MAP_HUGETLB).
These operations result in a call to the routine hugetlb_reserve_pages()::

	int hugetlb_reserve_pages(struct inode *inode,
				  long from, long to,
				  struct vm_area_struct *vma,
				  vm_flags_t vm_flags)

The first thing hugetlb_reserve_pages() does is check if the NORESERVE
flag was specified in either the shmget() or mmap() call.  If NORESERVE
was specified, then this routine returns immediately as no reservations
are desired.

The arguments 'from' and 'to' are huge page indices into the mapping or
underlying file.  For shmget(), 'from' is always 0 and 'to' corresponds to
the length of the segment/mapping.  For mmap(), the offset argument could
be used to specify the offset into the underlying file.  In such a case,
the 'from' and 'to' arguments have been adjusted by this offset.
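
For example (an illustrative calculation only), with 2 MB huge pages an
mmap() at file offset 4 MB for a length of 6 MB would use indices computed
roughly as::

	from = offset >> huge_page_shift(h);		/*  4 MB >> 21 = 2 */
	to   = (offset + length) >> huge_page_shift(h);	/* 10 MB >> 21 = 5 */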

One of the big differences between PRIVATE and SHARED mappings is the way
in which reservations are represented in the reservation map.

- For shared mappings, an entry in the reservation map indicates a reservation
  exists or did exist for the corresponding page.  As reservations are
  consumed, the reservation map is not modified.
- For private mappings, the lack of an entry in the reservation map indicates
  a reservation exists for the corresponding page.  As reservations are
  consumed, entries are added to the reservation map.  Therefore, the
  reservation map can also be used to determine which reservations have
  been consumed.

For private mappings, hugetlb_reserve_pages() creates the reservation map and
hangs it off the VMA structure.  In addition, the HPAGE_RESV_OWNER flag is set
to indicate this VMA owns the reservations.

The reservation map is consulted to determine how many huge page reservations
are needed for the current mapping/segment.  For private mappings, this is
always the value (to - from).  However, for shared mappings it is possible that
some reservations may already exist within the range (to - from).  See the
section :ref:`Reservation Map Modifications <resv_map_modifications>`
for details on how this is accomplished.

The mapping may be associated with a subpool.  If so, the subpool is consulted
to ensure there is sufficient space for the mapping.  It is possible that the
subpool has set aside reservations that can be used for the mapping.  See the
section :ref:`Subpool Reservations <sub_pool_resv>` for more details.

After consulting the reservation map and subpool, the number of needed new
reservations is known.  The routine hugetlb_acct_memory() is called to check
for and take the requested number of reservations.  hugetlb_acct_memory()
calls into routines that potentially allocate and adjust surplus page counts.
However, within those routines the code is simply checking to ensure there
are enough free huge pages to accommodate the reservation.  If there are,
the global reservation count resv_huge_pages is adjusted something like the
following::

	if (resv_needed <= (free_huge_pages - resv_huge_pages))
		resv_huge_pages += resv_needed;

Note that the global lock hugetlb_lock is held when checking and adjusting
these counters.

If there were enough free huge pages and the global count resv_huge_pages
was adjusted, then the reservation map associated with the mapping is
modified to reflect the reservations.  In the case of a shared mapping, a
file_region will exist that includes the range 'from' - 'to'.  For private
mappings, no modifications are made to the reservation map as lack of an
entry indicates a reservation exists.

If hugetlb_reserve_pages() was successful, the global reservation count and
reservation map associated with the mapping will be modified as required to
ensure reservations exist for the range 'from' - 'to'.
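
A hedged sketch of the shared-mapping path described above (variable names
and error handling are simplified relative to mm/hugetlb.c) is::

	chg = region_chg(resv_map, from, to);		/* new reservations needed */
	gbl = hugepage_subpool_get_pages(spool, chg);	/* part may come from subpool reserves */
	if (gbl < 0)
		return -ENOSPC;
	if (hugetlb_acct_memory(h, gbl))		/* take gbl pages from the global pool */
		return -ENOSPC;
	region_add(resv_map, from, to);			/* record the reserved range */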

.. _consume_resv:

Consuming Reservations/Allocating a Huge Page
=============================================

Reservations are consumed when huge pages associated with the reservations
are allocated and instantiated in the corresponding mapping.  The allocation
is performed within the routine alloc_huge_page()::

	struct page *alloc_huge_page(struct vm_area_struct *vma,
				     unsigned long addr, int avoid_reserve)

alloc_huge_page is passed a VMA pointer and a virtual address, so it can
consult the reservation map to determine if a reservation exists.  In addition,
alloc_huge_page takes the argument avoid_reserve which indicates reserves
should not be used even if it appears they have been set aside for the
specified address.  The avoid_reserve argument is most often used in the case
of Copy on Write and Page Migration where additional copies of an existing
page are being allocated.

The helper routine vma_needs_reservation() is called to determine if a
reservation exists for the address within the mapping (vma).  See the section
:ref:`Reservation Map Helper Routines <resv_map_helpers>` for detailed
information on what this routine does.
The value returned from vma_needs_reservation() is generally
0 or 1: 0 if a reservation exists for the address, 1 if no reservation exists.
If a reservation does not exist, and there is a subpool associated with the
mapping, the subpool is consulted to determine if it contains reservations.
If the subpool contains reservations, one can be used for this allocation.
However, in every case the avoid_reserve argument overrides the use of
a reservation for the allocation.  After determining whether a reservation
exists and can be used for the allocation, the routine dequeue_huge_page_vma()
is called.  This routine takes two arguments related to reservations:

- avoid_reserve, this is the same value/argument passed to alloc_huge_page()
- chg, even though this argument is of type long, only the values 0 or 1 are
  passed to dequeue_huge_page_vma.  If the value is 0, it indicates a
  reservation exists (see the section "Reservations and Memory Policy" for
  possible issues).  If the value is 1, it indicates a reservation does not
  exist and the page must be taken from the global free pool if possible.
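
A hedged sketch of the sequence of checks described above (simplified from
the actual alloc_huge_page() code) is::

	chg = vma_needs_reservation(h, vma, addr);	/* 0: reservation exists, 1: it does not */
	if (chg < 0)
		return ERR_PTR(-ENOMEM);

	if (chg || avoid_reserve) {
		/* No usable reservation: charge the subpool's 'used page' count. */
		if (hugepage_subpool_get_pages(spool, 1) < 0)
			return ERR_PTR(-ENOSPC);
	}

	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, chg);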

The free lists associated with the memory policy of the VMA are searched for
a free page.  If a page is found, the value free_huge_pages is decremented
when the page is removed from the free list.  If there was a reservation
associated with the page, the following adjustments are made::

	SetPagePrivate(page);	/* Indicates allocating this page consumed
				 * a reservation, and if an error is
				 * encountered such that the page must be
				 * freed, the reservation will be restored. */
	resv_huge_pages--;	/* Decrement the global reservation count */

Note, if no huge page can be found that satisfies the VMA's memory policy
an attempt will be made to allocate one using the buddy allocator.  This
brings up the issue of surplus huge pages and overcommit which is beyond
the scope of reservations.  Even if a surplus page is allocated, the same
reservation based adjustments as above will be made: SetPagePrivate(page) and
resv_huge_pages--.

After obtaining a new huge page, (page)->private is set to the value of
the subpool associated with the page if it exists.  This will be used for
subpool accounting when the page is freed.

The routine vma_commit_reservation() is then called to adjust the reserve
map based on the consumption of the reservation.  In general, this involves
ensuring the page is represented within a file_region structure of the region
map.  For shared mappings where the reservation was present, an entry
in the reserve map already existed so no change is made.  However, if there
was no reservation in a shared mapping or this was a private mapping a new
entry must be created.

It is possible that the reserve map could have been changed between the call
to vma_needs_reservation() at the beginning of alloc_huge_page() and the
call to vma_commit_reservation() after the page was allocated.  This would
be possible if hugetlb_reserve_pages was called for the same page in a shared
mapping.  In such cases, the reservation count and subpool free page count
will be off by one.  This rare condition can be identified by comparing the
return value from vma_needs_reservation and vma_commit_reservation.  If such
a race is detected, the subpool and global reserve counts are adjusted to
compensate.  See the section
:ref:`Reservation Map Helper Routines <resv_map_helpers>` for more
information on these routines.


Instantiate Huge Pages
======================

After huge page allocation, the page is typically added to the page tables
of the allocating task.  Before this, pages in a shared mapping are added
to the page cache and pages in private mappings are added to an anonymous
reverse mapping.  In both cases, the PagePrivate flag is cleared.  Therefore,
when a huge page that has been instantiated is freed no adjustment is made
to the global reservation count (resv_huge_pages).


Freeing Huge Pages
==================

Huge page freeing is performed by the routine free_huge_page().  This routine
is the destructor for hugetlbfs compound pages.  As a result, it is only
passed a pointer to the page struct.  When a huge page is freed, reservation
accounting may need to be performed.  This would be the case if the page was
associated with a subpool that contained reserves, or the page is being freed
on an error path where a global reserve count must be restored.

The page->private field points to any subpool associated with the page.
If the PagePrivate flag is set, it indicates the global reserve count should
be adjusted (see the section
:ref:`Consuming Reservations/Allocating a Huge Page <consume_resv>`
for information on how these are set).

The routine first calls hugepage_subpool_put_pages() for the page.  If this
routine returns a value of 0 (rather than the value of 1 that was passed), it
indicates reserves are associated with the subpool, and this newly free page
must be used to keep the number of subpool reserves above the minimum size.
Therefore, the global resv_huge_pages counter is incremented in this case.

If the PagePrivate flag was set in the page, the global resv_huge_pages counter
will always be incremented.
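
A hedged sketch of these two accounting decisions (simplified from the
free_huge_page() logic) is::

	bool restore_reserve = PagePrivate(page);	/* allocation consumed a reserve */

	ClearPagePrivate(page);
	if (hugepage_subpool_put_pages(spool, 1) == 0)
		restore_reserve = true;			/* page kept to satisfy the subpool minimum */

	if (restore_reserve)
		h->resv_huge_pages++;			/* give the reservation back to the global pool */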

.. _sub_pool_resv:

Subpool Reservations
====================

There is a struct hstate associated with each huge page size.  The hstate
tracks all huge pages of the specified size.  A subpool represents a subset
of pages within a hstate that is associated with a mounted hugetlbfs
filesystem.

When a hugetlbfs filesystem is mounted a min_size option can be specified
which indicates the minimum number of huge pages required by the filesystem.
If this option is specified, the number of huge pages corresponding to
min_size are reserved for use by the filesystem.  This number is tracked in
the min_hpages field of a struct hugepage_subpool.  At mount time,
hugetlb_acct_memory(min_hpages) is called to reserve the specified number of
huge pages.  If they can not be reserved, the mount fails.

The routines hugepage_subpool_get/put_pages() are called when pages are
obtained from or released back to a subpool.  They perform all subpool
accounting, and track any reservations associated with the subpool.
hugepage_subpool_get/put_pages are passed the number of huge pages by which
to adjust the subpool 'used page' count (down for get, up for put).  Normally,
they return the same value that was passed or an error if not enough pages
exist in the subpool.

However, if reserves are associated with the subpool a return value less
than the passed value may be returned.  This return value indicates the
number of additional global pool adjustments which must be made.  For example,
suppose a subpool contains 3 reserved huge pages and someone asks for 5.
The 3 reserved pages associated with the subpool can be used to satisfy part
of the request.  But, 2 pages must be obtained from the global pools.  To
relay this information to the caller, the value 2 is returned.  The caller
is then responsible for attempting to obtain the additional two pages from
the global pools.
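
The arithmetic behind that example can be sketched as follows (a hypothetical
helper, not kernel code)::

	/* requested = 5, subpool_reserves = 3  ->  returns 2 */
	static long still_needed_from_global(long requested, long subpool_reserves)
	{
		long from_reserves = requested < subpool_reserves ?
					requested : subpool_reserves;

		return requested - from_reserves;
	}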


COW and Reservations
====================

Since shared mappings all point to and use the same underlying pages, the
biggest reservation concern for COW is private mappings.  In this case,
two tasks can be pointing at the same previously allocated page.  One task
attempts to write to the page, so a new page must be allocated so that each
task points to its own page.

When the page was originally allocated, the reservation for that page was
consumed.  When an attempt to allocate a new page is made as a result of
COW, it is possible that no huge pages are free and the allocation
will fail.

When the private mapping was originally created, the owner of the mapping
was noted by setting the HPAGE_RESV_OWNER bit in the pointer to the reservation
map of the owner.  Since the owner created the mapping, the owner owns all
the reservations associated with the mapping.  Therefore, when a write fault
occurs and there is no page available, different action is taken for the owner
and non-owner of the reservation.

In the case where the faulting task is not the owner, the fault will fail and
the task will typically receive a SIGBUS.

If the owner is the faulting task, we want it to succeed since it owned the
original reservation.  To accomplish this, the page is unmapped from the
non-owning task.  In this way, the only reference is from the owning task.
In addition, the HPAGE_RESV_UNMAPPED bit is set in the reservation map pointer
of the non-owning task.  The non-owning task may receive a SIGBUS if it later
faults on a non-present page.  But, the original owner of the
mapping/reservation will behave as expected.


.. _resv_map_modifications:

Reservation Map Modifications
=============================

The following low level routines are used to make modifications to a
reservation map.  Typically, these routines are not called directly.  Rather,
a reservation map helper routine is called which calls one of these low level
routines.  These low level routines are fairly well documented in the source
code (mm/hugetlb.c).  These routines are::

	long region_chg(struct resv_map *resv, long f, long t);
	long region_add(struct resv_map *resv, long f, long t);
	void region_abort(struct resv_map *resv, long f, long t);
	long region_count(struct resv_map *resv, long f, long t);

Operations on the reservation map typically involve two operations:

1) region_chg() is called to examine the reserve map and determine how
   many pages in the specified range [f, t) are NOT currently represented.

   The calling code performs global checks and allocations to determine if
   there are enough huge pages for the operation to succeed.

2)
  a) If the operation can succeed, region_add() is called to actually modify
     the reservation map for the same range [f, t) previously passed to
     region_chg().
  b) If the operation can not succeed, region_abort is called for the same
     range [f, t) to abort the operation.

Note that this is a two step process where region_add() and region_abort()
are guaranteed to succeed after a prior call to region_chg() for the same
range.  region_chg() is responsible for pre-allocating any data structures
necessary to ensure the subsequent operations (specifically region_add())
will succeed.
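
A hedged sketch of this two step pattern (the global check is a placeholder,
not a real kernel function) is::

	chg = region_chg(resv, f, t);			/* step 1: pages not yet represented */
	if (chg < 0)
		return chg;

	if (global_checks_and_allocations(chg) < 0) {	/* placeholder for the caller's checks */
		region_abort(resv, f, t);		/* step 2b: abort the operation */
		return -ENOMEM;
	}

	add = region_add(resv, f, t);			/* step 2a: commit; usually add == chg */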

As mentioned above, region_chg() determines the number of pages in the range
which are NOT currently represented in the map.  This number is returned to
the caller.  region_add() returns the number of pages in the range added to
the map.  In most cases, the return value of region_add() is the same as the
return value of region_chg().  However, in the case of shared mappings it is
possible for changes to the reservation map to be made between the calls to
region_chg() and region_add().  In this case, the return value of region_add()
will not match the return value of region_chg().  It is likely that in such
cases global counts and subpool accounting will be incorrect and in need of
adjustment.  It is the responsibility of the caller to check for this condition
and make the appropriate adjustments.

The routine region_del() is called to remove regions from a reservation map.
It is typically called in the following situations:

- When a file in the hugetlbfs filesystem is being removed, the inode will
  be released and the reservation map freed.  Before freeing the reservation
  map, all the individual file_region structures must be freed.  In this case
  region_del is passed the range [0, LONG_MAX).
- When a hugetlbfs file is being truncated.  In this case, all allocated pages
  after the new file size must be freed.  In addition, any file_region entries
  in the reservation map past the new end of file must be deleted.  In this
  case, region_del is passed the range [new_end_of_file, LONG_MAX).
- When a hole is being punched in a hugetlbfs file.  In this case, huge pages
  are removed from the middle of the file one at a time.  As the pages are
  removed, region_del() is called to remove the corresponding entry from the
  reservation map.  In this case, region_del is passed the range
  [page_idx, page_idx + 1).

In every case, region_del() will return the number of pages removed from the
reservation map.  In VERY rare cases, region_del() can fail.  This can only
happen in the hole punch case where it has to split an existing file_region
entry and can not allocate a new structure.  In this error case, region_del()
will return -ENOMEM.  The problem here is that the reservation map will
indicate that there is a reservation for the page.  However, the subpool and
global reservation counts will not reflect the reservation.  To handle this
situation, the routine hugetlb_fix_reserve_counts() is called to adjust the
counters so that they correspond with the reservation map entry that could
not be deleted.

region_count() is called when unmapping a private huge page mapping.  In
private mappings, the lack of an entry in the reservation map indicates that
a reservation exists.  Therefore, by counting the number of entries in the
reservation map we know how many reservations were consumed and how many are
outstanding (outstanding = (end - start) - region_count(resv, start, end)).
Since the mapping is going away, the subpool and global reservation counts
are decremented by the number of outstanding reservations.
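
For example (numbers chosen purely for illustration), unmapping a 10 huge page
private mapping whose reserve map holds 3 entries releases::

	outstanding = (end - start) - region_count(resv, start, end)
		    = (10 - 0) - 3
		    = 7 reservations returned to the subpool and global pools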

.. _resv_map_helpers:

Reservation Map Helper Routines
===============================

Several helper routines exist to query and modify the reservation maps.
These routines are only interested in reservations for a specific huge
page, so they just pass in an address instead of a range.  In addition,
they pass in the associated VMA.  From the VMA, the type of mapping (private
or shared) and the location of the reservation map (inode or VMA) can be
determined.  These routines simply call the underlying routines described
in the section "Reservation Map Modifications".  However, they do take into
account the 'opposite' meaning of reservation map entries for private and
shared mappings and hide this detail from the caller::

	long vma_needs_reservation(struct hstate *h,
				   struct vm_area_struct *vma,
				   unsigned long addr)

This routine calls region_chg() for the specified page.  If no reservation
exists, 1 is returned.  If a reservation exists, 0 is returned::

	long vma_commit_reservation(struct hstate *h,
				    struct vm_area_struct *vma,
				    unsigned long addr)

This calls region_add() for the specified page.  As in the case of region_chg
and region_add, this routine is to be called after a previous call to
vma_needs_reservation.  It will add a reservation entry for the page.  It
returns 1 if the reservation was added and 0 if not.  The return value should
be compared with the return value of the previous call to
vma_needs_reservation.  An unexpected difference indicates the reservation
map was modified between calls::

	void vma_end_reservation(struct hstate *h,
				 struct vm_area_struct *vma,
				 unsigned long addr)

This calls region_abort() for the specified page.  As in the case of region_chg
and region_abort, this routine is to be called after a previous call to
vma_needs_reservation.  It will abort/end the in-progress reservation add
operation::

	long vma_add_reservation(struct hstate *h,
				 struct vm_area_struct *vma,
				 unsigned long addr)

This is a special wrapper routine to help facilitate reservation cleanup
on error paths.  It is only called from the routine restore_reserve_on_error().
This routine is used in conjunction with vma_needs_reservation in an attempt
to add a reservation to the reservation map.  It takes into account the
different reservation map semantics for private and shared mappings.  Hence,
region_add is called for shared mappings (as an entry present in the map
indicates a reservation), and region_del is called for private mappings (as
the absence of an entry in the map indicates a reservation).  See the section
"Reservation Cleanup in Error Paths" for more information on what needs to
be done on error paths.


Reservation Cleanup in Error Paths
==================================

As mentioned in the section
:ref:`Reservation Map Helper Routines <resv_map_helpers>`, reservation
map modifications are performed in two steps.  First vma_needs_reservation
is called before a page is allocated.  If the allocation is successful,
then vma_commit_reservation is called.  If not, vma_end_reservation is called.
Global and subpool reservation counts are adjusted based on success or failure
of the operation and all is well.
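
A schematic view of that ordering (not a literal excerpt; in the kernel these
calls are spread across alloc_huge_page() and its callers) is::

	needs = vma_needs_reservation(h, vma, addr);	/* query/prepare the map entry */
	page = alloc_huge_page(vma, addr, 0);
	if (!IS_ERR(page))
		vma_commit_reservation(h, vma, addr);	/* success: commit the map entry */
	else
		vma_end_reservation(h, vma, addr);	/* failure: abort the pending add */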

Additionally, after a huge page is instantiated the PagePrivate flag is
cleared so that accounting when the page is ultimately freed is correct.

However, there are several instances where errors are encountered after a huge
page is allocated but before it is instantiated.  In this case, the page
allocation has consumed the reservation and made the appropriate subpool,
reservation map and global count adjustments.  If the page is freed at this
time (before instantiation and clearing of PagePrivate), then free_huge_page
will increment the global reservation count.  However, the reservation map
indicates the reservation was consumed.  This resulting inconsistent state
will cause the 'leak' of a reserved huge page.  The global reserve count will
be higher than it should be and prevent allocation of a pre-allocated page.

The routine restore_reserve_on_error() attempts to handle this situation.  It
is fairly well documented.  The intention of this routine is to restore
the reservation map to the way it was before the page allocation.  In this
way, the state of the reservation map will correspond to the global reservation
count after the page is freed.

The routine restore_reserve_on_error itself may encounter errors while
attempting to restore the reservation map entry.  In this case, it will
simply clear the PagePrivate flag of the page.  In this way, the global
reserve count will not be incremented when the page is freed.  However, the
reservation map will continue to look as though the reservation was consumed.
A page can still be allocated for the address, but it will not use a reserved
page as originally intended.

There is some code (most notably userfaultfd) which can not call
restore_reserve_on_error.  In this case, it simply modifies the PagePrivate
flag so that a reservation will not be leaked when the huge page is freed.


Reservations and Memory Policy
==============================
Per-node huge page lists existed in struct hstate when git was first used
to manage Linux code.  The concept of reservations was added some time later.
When reservations were added, no attempt was made to take memory policy
into account.  While cpusets are not exactly the same as memory policy, this
comment in hugetlb_acct_memory sums up the interaction between reservations
and cpusets/memory policy::

	/*
	 * When cpuset is configured, it breaks the strict hugetlb page
	 * reservation as the accounting is done on a global variable. Such
	 * reservation is completely rubbish in the presence of cpuset because
	 * the reservation is not checked against page availability for the
	 * current cpuset. Application can still potentially OOM'ed by kernel
	 * with lack of free htlb page in cpuset that the task is in.
	 * Attempt to enforce strict accounting with cpuset is almost
	 * impossible (or too ugly) because cpuset is too fluid that
	 * task or memory node can be dynamically moved between cpusets.
	 *
	 * The change of semantics for shared hugetlb mapping with cpuset is
	 * undesirable. However, in order to preserve some of the semantics,
	 * we fall back to check against current free page availability as
	 * a best attempt and hopefully to minimize the impact of changing
	 * semantics that cpuset has.
	 */

Huge page reservations were added to prevent unexpected page allocation
failures (OOM) at page fault time.  However, if an application makes use
of cpusets or memory policy there is no guarantee that huge pages will be
available on the required nodes.  This is true even if there are a sufficient
number of global reservations.

Hugetlbfs regression testing
============================

The most complete set of hugetlb tests are in the libhugetlbfs repository.
If you modify any hugetlb related code, use the libhugetlbfs test suite
to check for regressions.  In addition, if you add any new hugetlb
functionality, please add appropriate tests to libhugetlbfs.

--
Mike Kravetz, 7 April 2017