.. _page_migration:

==============
Page migration
==============

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

Also see :ref:`Heterogeneous Memory Management (HMM) <hmm>`
for migrating pages to or from device private memory.

The main intent of page migration is to reduce the latency of memory accesses
by moving pages near to the processor where the process accessing that memory
is running.

Page migration allows a process to manually relocate its pages to other
nodes by passing the MPOL_MF_MOVE or MPOL_MF_MOVE_ALL flag while setting
a new memory policy via mbind(). The pages of a process can also be relocated
from another process using the sys_migrate_pages() function call. The
migrate_pages() function call takes two sets of nodes and moves the pages of a
process that are located on the source nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which offers an interface for page migration similar to its other NUMA
functionality. Running ``cat /proc/<pid>/numa_maps`` allows an easy review of
where the pages of a process are located. See also the numa_maps documentation
in the proc(5) man page.

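As a purely illustrative sketch of these userspace interfaces (the node
numbers and the choice of flags are assumptions made for this example; link
with -lnuma), a process can move its own pages with mbind() and another
process's pages with the migrate_pages() wrapper from libnuma::

	#include <numaif.h>		/* mbind(), migrate_pages(), MPOL_* */
	#include <unistd.h>

	/* Move an already populated region of the calling process to node 1
	 * by installing a bind policy and asking for existing pages to be
	 * migrated as well. */
	static int move_region_to_node1(void *addr, size_t len)
	{
		unsigned long nodemask = 1UL << 1;	/* node 1 */

		return mbind(addr, len, MPOL_BIND, &nodemask,
			     8 * sizeof(nodemask), MPOL_MF_MOVE);
	}

	/* Move all pages of another process from node 0 to node 1
	 * (privileges required, see migrate_pages(2)). */
	static long move_task_pages(pid_t pid)
	{
		unsigned long from = 1UL << 0;	/* source: node 0 */
		unsigned long to   = 1UL << 1;	/* destination: node 1 */

		return migrate_pages(pid, 8 * sizeof(from), &from, &to);
	}
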
Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. A special function call
"move_pages" allows the moving of individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.

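A minimal sketch of the move_pages(2) system call is shown below; the target
node and the assumption that the page should land on node 1 are illustrative
only::

	#include <numaif.h>	/* move_pages(), MPOL_MF_MOVE */
	#include <stdio.h>

	/* Ask the kernel to move one page of the calling process (pid 0) to
	 * node 1 and report where it ended up. */
	static int move_one_page(void *addr)
	{
		void *pages[1] = { addr };
		int nodes[1]   = { 1 };		/* desired target node */
		int status[1]  = { -1 };

		if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE))
			return -1;

		printf("page is now on node %d (negative values are errors)\n",
		       status[0]);
		return 0;
	}
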
Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see
:ref:`CPUSETS <cpusets>`).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset, all of its pages are moved with it so that the
performance of the process does not drop dramatically. The pages
of processes in a cpuset are also moved if the allowed memory nodes of the
cpuset are changed.
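
This relies on the cpuset's memory_migrate flag. The fragment below is a
hedged sketch only: the cgroup-v1 mount point and the cpuset name "batch"
are assumptions made for illustration::

	#include <stdio.h>
	#include <unistd.h>

	/* Enable page migration for the hypothetical cpuset "batch", then
	 * move a task into it; its pages will follow to the memory nodes
	 * allowed by that cpuset. */
	static int attach_with_migration(pid_t pid)
	{
		FILE *f;

		f = fopen("/sys/fs/cgroup/cpuset/batch/cpuset.memory_migrate", "w");
		if (!f)
			return -1;
		fprintf(f, "1\n");
		fclose(f);

		f = fopen("/sys/fs/cgroup/cpuset/batch/tasks", "w");
		if (!f)
			return -1;
		fprintf(f, "%ld\n", (long)pid);
		fclose(f);
		return 0;
	}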

Page migration preserves the relative location of pages within a group of
nodes for all of these migration techniques: the particular memory allocation
pattern a process has generated is maintained even after the process has been
migrated. This is necessary in order to preserve the memory latencies.
Processes will run with similar performance after migration.

Page migration occurs in several steps. First comes a high level
description for those trying to use migrate_pages() from the kernel
(for userspace usage see Andi Kleen's numactl package mentioned above),
followed by a low level description of how the low level details work.

In kernel use of migrate_pages()
================================

1. Remove pages from the LRU.

   Lists of pages to be migrated are generated by scanning over
   pages and moving them into lists. This is done by
   calling isolate_lru_page().
   Calling isolate_lru_page() increases the references to the page
   so that it cannot vanish while the page migration occurs.
   It also prevents the swapper or other scans from encountering
   the page.

2. We need to have a function of type new_page_t that can be
   passed to migrate_pages(). This function should figure out
   how to allocate the correct new page given the old page.

3. The migrate_pages() function is called, which attempts
   to do the migration. It will call the function to allocate
   the new page for each page that is considered for
   moving. A minimal sketch of these three steps follows this list.

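The sketch below is not kernel code; it is a hedged illustration of the three
steps above against the 5.10-era API. The callback name, the choice of
migration reason and the assumption that the caller already holds a reference
on the page are all illustrative::

	#include <linux/migrate.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>
	/* isolate_lru_page() is assumed reachable here; it is declared in
	 * mm/internal.h rather than a public header. */

	/* Step 2: a new_page_t callback; here the target node is passed via
	 * the 'private' argument (an assumption made for this example). */
	static struct page *alloc_dst_page(struct page *page, unsigned long private)
	{
		return __alloc_pages_node((int)private, GFP_HIGHUSER_MOVABLE, 0);
	}

	/* The caller is assumed to hold a reference on @page already. */
	static int migrate_one_page_to_node(struct page *page, int nid)
	{
		LIST_HEAD(pagelist);
		int ret;

		/* Step 1: take the page off the LRU so it cannot vanish. */
		if (isolate_lru_page(page))
			return -EBUSY;
		list_add_tail(&page->lru, &pagelist);

		/* Step 3: attempt the migration; pages that could not be
		 * migrated are still on the list and must be put back. */
		ret = migrate_pages(&pagelist, alloc_dst_page, NULL,
				    (unsigned long)nid, MIGRATE_SYNC,
				    MR_SYSCALL);
		if (ret)
			putback_movable_pages(&pagelist);
		return ret;
	}
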
How migrate_pages() works
=========================

migrate_pages() does several passes over its list of pages. A page is moved
if all references to a page are removable at the time. The page has
already been removed from the LRU via isolate_lru_page() and the refcount
is increased so that the page cannot be freed while page migration occurs.
A condensed sketch of this sequence follows the numbered steps below.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses to
   this (not yet up-to-date) page immediately block while the move is in progress.

4. All the page table references to the page are converted to migration
   entries. This decreases the mapcount of a page. If the resulting
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock
   or wait for the migration page table entry to be removed.

5. The i_pages lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain.
   Otherwise, we know that we are the only one referencing this page.

7. The radix tree is checked and if it does not contain the pointer to this
   page then we back out because someone else modified the radix tree.

8. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

9. The radix tree is changed to point to the new page.

10. The reference count of the old page is dropped because the address space
    reference is gone. A reference to the new page is established because
    the new page is referenced by the address space.

11. The i_pages lock is dropped. With that lookups in the mapping
    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page does
    not provide any information anymore.

15. Queued up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace them
    with real ptes. Doing so will enable access for user space processes not
    already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper,
    etc. again.

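The fragment below is a greatly condensed, hedged paraphrase of the steps
above using 5.10-era helpers; it is not the kernel's actual unmap_and_move()
code, several of these helpers are internal to mm/ and not exported to
modules, and locking subtleties, error handling and huge page cases are
omitted::

	#include <linux/migrate.h>
	#include <linux/pagemap.h>
	#include <linux/rmap.h>
	#include <linux/mm.h>

	/* Sketch only: assumes both pages are already off the LRU and that the
	 * caller set everything up as migrate_pages() would. */
	static int sketch_move_one_page(struct address_space *mapping,
					struct page *newpage, struct page *page)
	{
		int rc = -EAGAIN;

		lock_page(page);			/* step 1 */
		wait_on_page_writeback(page);		/* step 2 */
		lock_page(newpage);			/* step 3 */

		/* step 4: replace all ptes mapping the page with migration entries */
		try_to_unmap(page, TTU_MIGRATION | TTU_IGNORE_MLOCK);
		if (page_mapped(page))			/* mapcount not zero: bail out */
			goto out;

		/* steps 5-11: performed under the i_pages lock, which checks the
		 * refcount and repoints the mapping to the new page */
		rc = migrate_page_move_mapping(mapping, newpage, page, 0);
		if (rc == MIGRATEPAGE_SUCCESS)
			migrate_page_copy(newpage, page);	/* steps 12-15 */

	out:
		/* step 16: turn migration entries back into real ptes (pointing at
		 * the new page on success, at the old page on failure) */
		remove_migration_ptes(page,
				      rc == MIGRATEPAGE_SUCCESS ? newpage : page,
				      false);

		unlock_page(newpage);			/* step 17 */
		unlock_page(page);
		return rc;		/* step 18 (LRU putback) is left to the caller */
	}
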
Non-LRU page migration
======================

Although migration originally aimed for reducing the latency of memory accesses
for NUMA, compaction also uses migration to create high-order pages.

The current problem with the implementation is that it is designed to migrate
only *LRU* pages. However, there are potential non-LRU pages which can be
migrated in drivers, for example, zsmalloc and virtio-balloon pages.

For virtio-balloon pages, some parts of the migration code path have been
hooked up and virtio-balloon specific functions added to intercept the
migration logic. This is too specific to one driver, so other drivers that
want to make their pages movable would have to add their own specific hooks
in the migration path.

To overcome the problem, the VM supports non-LRU page migration, which
provides generic functions for non-LRU movable pages without driver specific
hooks in the migration path.

If a driver wants to make its pages movable, it should define three functions
which are function pointers of struct address_space_operations; a skeleton
example follows at the end of this list.

1. ``bool (*isolate_page) (struct page *page, isolate_mode_t mode);``

   What VM expects from the driver's isolate_page() function is to return *true*
   if the driver isolates the page successfully. On returning true, VM marks the
   page as PG_isolated so that concurrent isolation on several CPUs skips the
   page for isolation. If a driver cannot isolate the page, it should return
   *false*.

   Once the page is successfully isolated, VM uses the page.lru fields, so the
   driver shouldn't expect to preserve values in those fields.

2. ``int (*migratepage) (struct address_space *mapping,``
|	``struct page *newpage, struct page *oldpage, enum migrate_mode);``

   After isolation, VM calls the driver's migratepage() with the isolated page.
   The role of migratepage() is to move the contents of the old page to the
   new page and to set up the fields of struct page newpage. Keep in mind that
   you should indicate to the VM that the oldpage is no longer movable via
   __ClearPageMovable() under page_lock if you migrated the oldpage successfully
   and returned MIGRATEPAGE_SUCCESS. If the driver cannot migrate the page at
   the moment, it can return -EAGAIN. On -EAGAIN, VM will retry page migration
   in a short time because VM interprets -EAGAIN as "temporary migration
   failure". On returning any error except -EAGAIN, VM will give up the page
   migration without retrying.

   The driver shouldn't touch the page.lru field while in the migratepage()
   function.

3. ``void (*putback_page)(struct page *);``

   If migration fails on the isolated page, VM should return the isolated page
   to the driver so VM calls the driver's putback_page() with the isolated page.
   In this function, the driver should put the isolated page back into its own data
   structure.

4. non-LRU movable page flags

   There are two page flags for supporting non-LRU movable pages.

   * PG_movable

     The driver should use the function below to make a page movable under
     page_lock::

	void __SetPageMovable(struct page *page, struct address_space *mapping)

     It takes an address_space argument for registering the migration
     family of functions which will be called by VM. Strictly speaking,
     PG_movable is not a real flag of struct page. Rather, VM
     reuses the lower bits of page->mapping to represent it::

	#define PAGE_MAPPING_MOVABLE 0x2
	page->mapping = page->mapping | PAGE_MAPPING_MOVABLE;

     so the driver shouldn't access page->mapping directly. Instead, the driver
     should use page_mapping(), which masks off the low two bits of page->mapping
     under page lock so it can get the right struct address_space.

     For testing non-LRU movable pages, VM provides the __PageMovable() function.
     However, it doesn't guarantee to identify non-LRU movable pages because
     the page->mapping field is unified with other variables in struct page.
     If the driver releases the page after isolation by VM, page->mapping
     doesn't have a stable value although it has PAGE_MAPPING_MOVABLE set
     (look at __ClearPageMovable). But __PageMovable() is cheap to call whether
     the page is LRU or non-LRU movable once the page has been isolated because
     LRU pages can never have PAGE_MAPPING_MOVABLE set in page->mapping. It is
     also good for just peeking to test for non-LRU movable pages before more
     expensive checking with lock_page() in pfn scanning to select a victim.

     To reliably identify a non-LRU movable page, VM provides the PageMovable()
     function. Unlike __PageMovable(), PageMovable() validates page->mapping and
     mapping->a_ops->isolate_page under lock_page(). The lock_page() prevents
     page->mapping from being destroyed suddenly.

     Drivers using __SetPageMovable() should clear the flag via
     __ClearPageMovable() under page_lock() before releasing the page.

   * PG_isolated

     To prevent concurrent isolation among several CPUs, VM marks an isolated
     page as PG_isolated under lock_page(). So if a CPU encounters a PG_isolated
     non-LRU movable page, it can skip it. The driver doesn't need to manipulate
     the flag because VM will set/clear it automatically. Keep in mind that if the
     driver sees a PG_isolated page, it means the page has been isolated by the
     VM so it shouldn't touch the page.lru field.
     The PG_isolated flag is aliased with the PG_reclaim flag so drivers
     shouldn't use PG_isolated for their own purposes.

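Putting the pieces above together, the skeleton below shows the driver side
of the interface. All my_* names are hypothetical; only the
address_space_operations entries and the flag helpers come from the interface
described above, and the bodies are placeholders rather than a working
driver::

	#include <linux/fs.h>
	#include <linux/migrate.h>
	#include <linux/pagemap.h>
	#include <linux/mm.h>

	static bool my_isolate_page(struct page *page, isolate_mode_t mode)
	{
		/* Detach the page from the driver's own lists and return true
		 * only if it is now safe for the VM to use page.lru. */
		return true;
	}

	static int my_migratepage(struct address_space *mapping,
				  struct page *newpage, struct page *page,
				  enum migrate_mode mode)
	{
		/* Copy contents and driver metadata from page to newpage here.
		 * Return -EAGAIN instead if the page cannot be migrated now. */

		/* The old page is locked by the VM at this point, so the flag
		 * can be cleared as required on success. */
		__ClearPageMovable(page);
		return MIGRATEPAGE_SUCCESS;
	}

	static void my_putback_page(struct page *page)
	{
		/* Migration failed: put the isolated page back onto the
		 * driver's own lists. */
	}

	static const struct address_space_operations my_movable_aops = {
		.isolate_page	= my_isolate_page,
		.migratepage	= my_migratepage,
		.putback_page	= my_putback_page,
	};

	/* Called when the driver allocates a page it wants to be movable;
	 * 'mapping' must have its a_ops pointing at my_movable_aops. */
	static void my_mark_movable(struct page *page, struct address_space *mapping)
	{
		lock_page(page);
		__SetPageMovable(page, mapping);
		unlock_page(page);
	}
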
Monitoring Migration
====================

The following events (counters) can be used to monitor page migration; a
small example that reads them follows the list.

1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a
   page was migrated. If the page was a non-THP page, then this counter is
   increased by one. If the page was a THP, then this counter is increased by
   the number of THP subpages. For example, migration of a single 2MB THP that
   has 4KB-size base pages (subpages) will cause this counter to increase by
   512.

2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
   PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
   if it was a THP.

3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.

4. THP_MIGRATION_FAIL: A THP could not be migrated nor could it be split.

5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP had
   to be split. After splitting, a migration retry was used for its sub-pages.

THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.

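These counters are exported in lower case via /proc/vmstat (assuming a kernel
built with migration and THP support); the small userspace sketch below simply
prints them::

	#include <stdio.h>
	#include <string.h>

	/* Print the migration related counters from /proc/vmstat. */
	int main(void)
	{
		static const char * const names[] = {
			"pgmigrate_success", "pgmigrate_fail",
			"thp_migration_success", "thp_migration_fail",
			"thp_migration_split",
		};
		char line[128];
		FILE *f = fopen("/proc/vmstat", "r");
		size_t i;

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
				if (!strncmp(line, names[i], strlen(names[i])))
					fputs(line, stdout);
		fclose(f);
		return 0;
	}
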
Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.