=====================================================
Memory Resource Controller(Memcg) Implementation Memo
=====================================================

Last Updated: 2010/2

Base Kernel Version: based on 2.6.33-rc7-mm (candidate for 2.6.34).

Because the VM is getting complex (one of the reasons being memcg
itself), memcg's behavior is complex too. This is a document for
memcg's internal behavior. Please note that implementation details
can change.

(*) Topics on the API should be in Documentation/admin-guide/cgroup-v1/memory.rst

0. How to record usage ?
========================

Two objects are used.

page_cgroup ....an object per page.

	Allocated at boot or memory hotplug. Freed at memory hot removal.

swap_cgroup ... an entry per swp_entry.

	Allocated at swapon(). Freed at swapoff().

The page_cgroup has a USED bit, so double counting against a page_cgroup
never occurs. swap_cgroup is used only when a charged page is swapped out.
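
For orientation, a rough sketch of the two bookkeeping objects. The
field names and layout here are assumptions based on the 2.6.33 era;
the real definitions live in include/linux/page_cgroup.h and
mm/page_cgroup.c and have changed over time::

	/* Rough sketch only -- consult the tree for the real definitions. */
	struct page_cgroup {
		unsigned long flags;		/* includes the USED bit */
		struct mem_cgroup *mem_cgroup;	/* who is charged */
		struct page *page;		/* the page this entry describes */
		struct list_head lru;		/* per-memcg LRU list */
	};

	struct swap_cgroup {
		unsigned short id;	/* mem_cgroup ID charged for this swp_entry */
	};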

1. Charge
=========

A page/swp_entry may be charged (usage += PAGE_SIZE) at

	mem_cgroup_try_charge()

2. Uncharge
===========

A page/swp_entry may be uncharged (usage -= PAGE_SIZE) by

	mem_cgroup_uncharge()
	  Called when a page's refcount goes down to 0.

	mem_cgroup_uncharge_swap()
	  Called when swp_entry's refcnt goes down to 0. A charge against swap
	  disappears.

3. charge-commit-cancel
=======================

Memcg pages are charged in two steps:

	- mem_cgroup_try_charge()
	- mem_cgroup_commit_charge() or mem_cgroup_cancel_charge()

At try_charge(), there are no flags to say "this page is charged".
At this point, usage += PAGE_SIZE.

At commit(), the page is associated with the memcg.

At cancel(), simply usage -= PAGE_SIZE.
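
The caller-side pattern looks roughly like the following. This is a
minimal sketch: the exact signatures differ between kernel versions
(see mm/memcontrol.c for the real ones), and insert_page_somewhere()
is a hypothetical stand-in for whatever operation can fail between the
two steps::

	struct mem_cgroup *memcg;

	if (mem_cgroup_try_charge(page, mm, gfp_mask, &memcg))
		return -ENOMEM;			/* usage += PAGE_SIZE was refused */

	if (insert_page_somewhere(page)) {	/* hypothetical failure point */
		mem_cgroup_cancel_charge(page, memcg);	/* usage -= PAGE_SIZE */
		return -ENOMEM;
	}
	mem_cgroup_commit_charge(page, memcg, false);	/* bind the page to memcg */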

In the explanation below, we assume CONFIG_MEM_RES_CTRL_SWAP=y.

4. Anonymous
============

An anonymous page is newly allocated at
	- a page fault into a MAP_ANONYMOUS mapping.
	- Copy-On-Write.

4.1 Swap-in.
At swap-in, the page is taken from swap-cache. There are 2 cases.

(a) If the SwapCache is newly allocated and read, it has no charges.
(b) If the SwapCache has been mapped by processes, it has been
    charged already.

4.2 Swap-out.
At swap-out, the typical state transition is below.

(a) add to swap cache. (marked as SwapCache)
    swp_entry's refcnt += 1.
(b) fully unmapped.
    swp_entry's refcnt += # of ptes.
(c) write back to swap.
(d) delete from swap cache. (remove from SwapCache)
    swp_entry's refcnt -= 1.


Finally, at task exit,

(e) zap_pte() is called and swp_entry's refcnt -= 1 -> 0.
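
For a concrete instance, assume the page is mapped by two ptes: (a)
takes the refcnt to 1, (b) adds 2 for a total of 3, and (d) subtracts
1, leaving 2, so the entry stays alive because the two swap ptes still
reference it. Only when the task exits and (e) zaps both ptes does the
refcnt drop 2 -> 1 -> 0, at which point mem_cgroup_uncharge_swap() is
called and the swap charge disappears.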

5. Page Cache
=============

Page Cache is charged at
	- add_to_page_cache_locked().

The logic is very clear. (For migration, see below.)

Note:
	__remove_from_page_cache() is called by remove_from_page_cache()
	and __remove_mapping().

6. Shmem(tmpfs) Page Cache
==========================

The best way to understand shmem's page state transition is to read
mm/shmem.c.

But a brief explanation of memcg's behavior around shmem will be
helpful for understanding the logic.

A shmem page (just the leaf page, not direct/indirect blocks) can be on

	- the radix-tree of the shmem inode.
	- SwapCache.
	- both the radix-tree and SwapCache. This happens at swap-in
	  and swap-out.

It's charged when...

	- A new page is added to shmem's radix-tree.
	- A swapped-out page is read back in. (This moves a charge from
	  swap_cgroup to page_cgroup.)

7. Page Migration
=================

	mem_cgroup_migrate()

8. LRU
======

Each memcg has its own private LRU. Its handling is now under the
global VM's control (meaning that it's handled under the global
pgdat->lru_lock). Almost all routines around memcg's LRU are called by
the global LRU's list management functions under pgdat->lru_lock.

A special function is mem_cgroup_isolate_pages(). This scans
memcg's private LRU and calls __isolate_lru_page() to extract a page
from the LRU.

(By __isolate_lru_page(), the page is removed from both the global and
the private LRU.)

9. Typical Tests.
=================

Tests for racy cases.

9.1 Small limit to memcg.
-------------------------

When testing racy cases, it's better to set the memcg's limit very
small, rather than in GB. Many races have been found in tests under
limits of a few KB or a few tens of MB.

(Memory behavior under a GB limit and under an MB limit shows very
different situations.)
9.2 Shmem
---------

Historically, memcg's shmem handling was poor and we saw a fair amount
of trouble here. This is because shmem is page cache but can also be
SwapCache. Testing with shmem/tmpfs is always a good test.

9.3 Migration
-------------

For NUMA, migration is another special case. Cpusets are useful for
easy testing. The following is a sample script to do migration::

	mount -t cgroup -o cpuset none /opt/cpuset

	mkdir /opt/cpuset/01
	echo 1 > /opt/cpuset/01/cpuset.cpus
	echo 0 > /opt/cpuset/01/cpuset.mems
	echo 1 > /opt/cpuset/01/cpuset.memory_migrate
	mkdir /opt/cpuset/02
	echo 1 > /opt/cpuset/02/cpuset.cpus
	echo 1 > /opt/cpuset/02/cpuset.mems
	echo 1 > /opt/cpuset/02/cpuset.memory_migrate

With the above setup, when you move a task from 01 to 02, page
migration from node 0 to node 1 will occur. The following is a script
to move all tasks under one cpuset to another, where ${G1} and ${G2}
are the paths of the source and destination cgroup directories::

	--
	move_task()
	{
	for pid in $1
	do
		/bin/echo $pid >$2/tasks 2>/dev/null
		echo -n $pid
		echo -n " "
	done
	echo END
	}

	G1_TASK=`cat ${G1}/tasks`
	G2_TASK=`cat ${G2}/tasks`
	move_task "${G1_TASK}" ${G2} &
	--

9.4 Memory hotplug
------------------

Memory hotplug testing is another good test.

To offline memory, do the following::

	# echo offline > /sys/devices/system/memory/memoryXXX/state

(XXX is the number of the memory block.)

This is an easy way to test page migration, too.

9.5 mkdir/rmdir
---------------

When using hierarchy, the mkdir/rmdir test should be done.
Use tests like the following::

	echo 1 >/opt/cgroup/01/memory/use_hierarchy
	mkdir /opt/cgroup/01/child_a
	mkdir /opt/cgroup/01/child_b

	set limit to 01.
	add limit to 01/child_b
	run jobs under child_a and child_b

Create/delete the following groups at random while the jobs are running::

	/opt/cgroup/01/child_a/child_aa
	/opt/cgroup/01/child_b/child_bb
	/opt/cgroup/01/child_c

Running new jobs in a new group is also good.

9.6 Mount with other subsystems
-------------------------------

Mounting with other subsystems is a good test because there are races
and lock dependencies with other cgroup subsystems.

example::

	# mount -t cgroup none /cgroup -o cpuset,memory,cpu,devices

and do task move, mkdir, rmdir, etc. under this.

9.7 swapoff
-----------

Besides the fact that swap management is one of the most complicated
parts of memcg, the call path of swap-in at swapoff is not the same as
the usual swap-in path. It's worth testing explicitly.

For example, a test like the following is good:

(Shell-A)::

	# mount -t cgroup none /cgroup -o memory
	# mkdir /cgroup/test
	# echo 40M > /cgroup/test/memory.limit_in_bytes
	# echo 0 > /cgroup/test/tasks

Run a malloc(100M) program under this. You'll see 60M of swap usage.
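
A minimal sketch of such a program (a hypothetical helper, not part of
the tree) that allocates the memory and touches every page so the pages
are actually charged; the same program with SIZE set to 51M serves for
the OOM tests in 9.8::

	/* touch-malloc.c -- hypothetical test helper, not part of the tree. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define SIZE	(100UL << 20)	/* 100M; set to 51M for the tests in 9.8 */

	int main(void)
	{
		long pagesize = sysconf(_SC_PAGESIZE);
		char *buf = malloc(SIZE);
		unsigned long off;

		if (!buf) {
			perror("malloc");
			return 1;
		}
		for (off = 0; off < SIZE; off += pagesize)
			buf[off] = 1;	/* fault each page in, charging it to the memcg */
		pause();		/* stay resident until killed from Shell-B */
		return 0;
	}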

(Shell-B)::

	# move all tasks in /cgroup/test to /cgroup
	# /sbin/swapoff -a
	# rmdir /cgroup/test
	# kill malloc task.

Of course, the tmpfs vs. swapoff case should be tested, too.

9.8 OOM-Killer
--------------

Out-of-memory caused by a memcg's limit will kill tasks under
the memcg. When hierarchy is used, a task under the hierarchy
will be killed by the kernel.

In this case, panic_on_oom shouldn't be invoked and tasks
in other groups shouldn't be killed.

It's not difficult to cause OOM under memcg, as follows.

Case A) when you can run swapoff::

	#swapoff -a
	#echo 50M > /memory.limit_in_bytes

run 51M of malloc

Case B) when you use the mem+swap limitation::

	#echo 50M > memory.limit_in_bytes
	#echo 50M > memory.memsw.limit_in_bytes

run 51M of malloc

9.9 Move charges at task migration
----------------------------------

Charges associated with a task can be moved along with task migration.

(Shell-A)::

	#mkdir /cgroup/A
	#echo $$ >/cgroup/A/tasks

Run some programs which use some amount of memory in /cgroup/A.

(Shell-B)::

	#mkdir /cgroup/B
	#echo 1 >/cgroup/B/memory.move_charge_at_immigrate
	#echo "pid of the program running in group A" >/cgroup/B/tasks

You can see that the charges have been moved by reading
``*.usage_in_bytes`` or memory.stat of both A and B.

See section 8.2 of Documentation/admin-guide/cgroup-v1/memory.rst for
the value that should be written to move_charge_at_immigrate.

9.10 Memory thresholds
----------------------

The memory controller implements memory thresholds using the cgroups
notification API. You can use tools/cgroup/cgroup_event_listener.c to
test it.

(Shell-A) Create a cgroup and run the event listener::

	# mkdir /cgroup/A
	# ./cgroup_event_listener /cgroup/A/memory.usage_in_bytes 5M

(Shell-B) Add a task to the cgroup and try to allocate and free memory::

	# echo $$ >/cgroup/A/tasks
	# a="$(dd if=/dev/zero bs=1M count=10)"
	# a=

You will see a message from cgroup_event_listener every time you cross
a threshold.

Use /cgroup/A/memory.memsw.usage_in_bytes to test memsw thresholds.
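
If the tool is not at hand, the listener side can be sketched in a few
lines of C. This is a minimal sketch assuming the cgroup-v1 eventfd
interface (register by writing ``<event_fd> <fd of a control file>
<threshold>`` to cgroup.event_control); tools/cgroup/cgroup_event_listener.c
is the canonical version::

	/* mini-listener.c -- hypothetical stand-in for cgroup_event_listener.
	 * usage: mini-listener <cgroup dir> <threshold in bytes>
	 * Error handling trimmed for brevity. */
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/eventfd.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		char path[512], line[64];
		int cfd, ecfd, efd;
		uint64_t ticks;

		if (argc != 3)
			return 1;
		snprintf(path, sizeof(path), "%s/memory.usage_in_bytes", argv[1]);
		cfd = open(path, O_RDONLY);
		snprintf(path, sizeof(path), "%s/cgroup.event_control", argv[1]);
		ecfd = open(path, O_WRONLY);
		efd = eventfd(0, 0);
		if (cfd < 0 || ecfd < 0 || efd < 0)
			return 1;
		/* register: "<event fd> <control file fd> <threshold>" */
		snprintf(line, sizeof(line), "%d %d %s", efd, cfd, argv[2]);
		if (write(ecfd, line, strlen(line)) < 0)
			return 1;
		while (read(efd, &ticks, sizeof(ticks)) == sizeof(ticks))
			printf("threshold crossed on %s\n", argv[1]);
		return 0;
	}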

It's a good idea to test the root cgroup as well.