Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

============================
A block layer cache (bcache)
============================

Say you've got a big slow raid 6, and an ssd or three. Wouldn't it be
nice if you could use them as cache... Hence bcache.

The bcache wiki can be found at:
  https://bcache.evilpiepirate.org

This is the git repository of bcache-tools:
  https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/

The latest bcache kernel code can be found in the mainline Linux kernel:
  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/

It's designed around the performance characteristics of SSDs - it only allocates
in erase block sized buckets, and it uses a hybrid btree/log to track cached
extents (which can be anywhere from a single sector to the bucket size). It's
designed to avoid random writes at all costs; it fills up an erase block
sequentially, then issues a discard before reusing it.

Both writethrough and writeback caching are supported. Writeback defaults to
off, but can be switched on and off arbitrarily at runtime. Bcache goes to
great lengths to protect your data - it reliably handles unclean shutdown. (It
doesn't even have a notion of a clean shutdown; bcache simply doesn't return
writes as completed until they're on stable storage).

Writeback caching can use most of the cache for buffering writes - writing
dirty data to the backing device is always done sequentially, scanning from the
start to the end of the index.

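That index-order scan is what makes writeback sequential: however the dirty
extents arrived, flushing them in index (offset) order turns the backing-device
writes into an in-order pass. A toy illustration - the offsets below are
invented for the example, not bcache internals:

```shell
# Toy sketch: dirty extents arrive in arbitrary order, but writeback visits
# them sorted by offset on the backing device, so the flush is sequential.
dirty_offsets="931840 4096 523264 65536"
flush_order=$(printf '%s\n' $dirty_offsets | sort -n | xargs)
echo "$flush_order"   # 4096 65536 523264 931840
```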
Since random IO is what SSDs excel at, there generally won't be much benefit
to caching large sequential IO. Bcache detects sequential IO and skips it;
it also keeps a rolling average of the IO sizes per task, and as long as the
average is above the cutoff it will skip all IO from that task - instead of
caching the first 512k after every seek. Backups and large file copies should
thus entirely bypass the cache.

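The bypass decision can be sketched roughly as follows. The plain running
mean, the sample IO sizes, and even using the 4 MiB default cutoff directly
are illustrative simplifications, not the kernel's actual bookkeeping:

```shell
# Rough sketch: keep a running average of IO sizes per task and bypass the
# cache once the average exceeds the cutoff (4 MiB default shown).
cutoff=$((4 * 1024 * 1024))
decisions=$(printf '%s\n' 4096 8192 16777216 67108864 |
    awk -v cutoff="$cutoff" '
        { total += $1; n += 1
          print (total / n > cutoff ? "bypass" : "cache") }')
echo "$decisions"
```

Small IOs keep the average low and get cached; once a task's average crosses
the cutoff, all of its IO is bypassed.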
In the event of a data IO error on the flash, it will try to recover by reading
from disk or invalidating cache entries. For unrecoverable errors (metadata
or dirty data), caching is automatically disabled; if dirty data was present
in the cache it first disables writeback caching and waits for all dirty data
to be flushed.

Getting started:
You'll need the `bcache` utility from the bcache-tools repository. Both the
cache device and backing device must be formatted before use::

  bcache make -B /dev/sdb
  bcache make -C /dev/sdc

`bcache make` has the ability to format multiple devices at the same time - if
you format your backing devices and cache device at the same time, you won't
have to manually attach::

  bcache make -B /dev/sda /dev/sdb -C /dev/sdc

If your bcache-tools is not updated to the latest version and does not have the
unified `bcache` utility, you may use the legacy `make-bcache` utility to format
bcache devices with the same -B and -C parameters.

bcache-tools now ships udev rules, and bcache devices are known to the kernel
immediately. Without udev, you can manually register devices like this::

  echo /dev/sdb > /sys/fs/bcache/register
  echo /dev/sdc > /sys/fs/bcache/register

Registering the backing device makes the bcache device show up in /dev; you can
now format it and use it as normal. But the first time you use a new bcache
device, it'll be running in passthrough mode until you attach it to a cache.
If you are thinking about using bcache later, it is recommended to set up all
your slow devices as bcache backing devices without a cache; you can choose to
add a caching device later. See the 'Attaching' section below.

The devices show up as::

  /dev/bcache<N>

As well as (with udev)::

  /dev/bcache/by-uuid/<uuid>
  /dev/bcache/by-label/<label>

To get started::

  mkfs.ext4 /dev/bcache0
  mount /dev/bcache0 /mnt

You can control bcache devices through sysfs at /sys/block/bcache<N>/bcache/.
You can also control them through /sys/fs/bcache/<cset-uuid>/.

Cache devices are managed as sets; multiple caches per set isn't supported yet
but will allow for mirroring of metadata and dirty data in the future. Your new
cache set shows up as /sys/fs/bcache/<UUID>.

Attaching
---------

After your cache device and backing device are registered, the backing device
must be attached to your cache set to enable caching. Attaching a backing
device to a cache set is done thusly, with the UUID of the cache set in
/sys/fs/bcache::

  echo <CSET-UUID> > /sys/block/bcache0/bcache/attach

This only has to be done once. The next time you reboot, just reregister all
your bcache devices. If a backing device has data in a cache somewhere, the
/dev/bcache<N> device won't be created until the cache shows up - particularly
important if you have writeback caching turned on.

If you're booting up and your cache device is gone and never coming back, you
can force run the backing device::

  echo 1 > /sys/block/sdb/bcache/running

(You need to use /sys/block/sdb (or whatever your backing device is called), not
/sys/block/bcache0, because bcache0 doesn't exist yet. If you're using a
partition, the bcache directory would be at /sys/block/sdb/sdb2/bcache.)

The backing device will still use that cache set if it shows up in the future,
but all the cached data will be invalidated. If there was dirty data in the
cache, don't expect the filesystem to be recoverable - you will have massive
filesystem corruption, though ext4's fsck does work miracles.

Error Handling
--------------

Bcache tries to transparently handle IO errors to/from the cache device without
affecting normal operation; if it sees too many errors (the threshold is
configurable, and defaults to 0) it shuts down the cache device and switches all
the backing devices to passthrough mode.

 - For reads from the cache, if they error we just retry the read from the
   backing device.

 - For writethrough writes, if the write to the cache errors we just switch to
   invalidating the data at that lba in the cache (i.e. the same thing we do for
   a write that bypasses the cache).

 - For writeback writes, we currently pass that error back up to the
   filesystem/userspace. This could be improved - we could retry it as a write
   that skips the cache so we don't have to error the write.

 - When we detach, we first try to flush any dirty data (if we were running in
   writeback mode). It currently doesn't do anything intelligent if it fails to
   read some of the dirty data, though.


Howto/cookbook
--------------

A) Starting a bcache with a missing caching device

If registering the backing device doesn't help, it's already registered; you
just need to force it to run without the cache::

	host:~# echo /dev/sdb1 > /sys/fs/bcache/register
	[  119.844831] bcache: register_bcache() error opening /dev/sdb1: device already registered

Next, try to register your caching device if it's present. However, if it's
absent, or registration fails for some reason, you can still start your
bcache without its cache, like so::

	host:/sys/block/sdb/sdb1/bcache# echo 1 > running

Note that this may cause data loss if you were running in writeback mode.


B) Bcache does not find its cache::

	host:/sys/block/md5/bcache# echo 0226553a-37cf-41d5-b3ce-8b1e944543a8 > attach
	[ 1933.455082] bcache: bch_cached_dev_attach() Couldn't find uuid for md5 in set
	[ 1933.478179] bcache: __cached_dev_store() Can't attach 0226553a-37cf-41d5-b3ce-8b1e944543a8
	[ 1933.478179] : cache set not found

In this case, the caching device was simply not registered at boot
or disappeared and came back, and needs to be (re-)registered::

	host:/sys/block/md5/bcache# echo /dev/sdh2 > /sys/fs/bcache/register


C) Corrupt bcache crashes the kernel at device registration time:

This should never happen. If it does happen, then you have found a bug!
Please report it to the bcache development list: linux-bcache@vger.kernel.org

Be sure to provide as much information as you can, including kernel dmesg
output if available, so that we may assist.


D) Recovering data without bcache:

If bcache is not available in the kernel, a filesystem on the backing
device is still available at an 8KiB offset. Access it either via a loop
device of the backing device created with --offset 8K, or with whatever
value was given as --data-offset when you originally formatted the device
with `bcache make`.

For example::

	losetup -o 8192 /dev/loop0 /dev/your_bcache_backing_dev

This should present your unmodified backing device data in /dev/loop0.

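If you formatted with a custom --data-offset, note that it is given in
512-byte sectors, while losetup's -o option takes bytes, so the conversion
is a single multiplication. The 16 sectors below are the default, matching
the 8KiB offset used above:

```shell
# --data-offset is specified in 512-byte sectors; losetup -o expects bytes.
# 16 sectors is the default, i.e. the 8KiB offset shown above.
data_offset_sectors=16
offset_bytes=$((data_offset_sectors * 512))
echo "$offset_bytes"   # 8192
# then: losetup -o "$offset_bytes" /dev/loop0 /dev/your_bcache_backing_dev
```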
If your cache is in writethrough mode, then you can safely discard the
cache device without losing data.


E) Wiping a cache device

::

	host:~# wipefs -a /dev/sdh2
	16 bytes were erased at offset 0x1018 (bcache)
	they were: c6 85 73 f6 4e 1a 45 ca 82 65 f5 7f 48 ba 6d 81

After you boot back with bcache enabled, you recreate the cache and attach it::

	host:~# bcache make -C /dev/sdh2
	UUID:                   7be7e175-8f4c-4f99-94b2-9c904d227045
	Set UUID:               5bc072a8-ab17-446d-9744-e247949913c1
	version:                0
	nbuckets:               106874
	block_size:             1
	bucket_size:            1024
	nr_in_set:              1
	nr_this_dev:            0
	first_bucket:           1
	[  650.511912] bcache: run_cache_set() invalidating existing data
	[  650.549228] bcache: register_cache() registered cache device sdh2

Start the backing device with its cache missing::

	host:/sys/block/md5/bcache# echo 1 > running

Attach the new cache::

	host:/sys/block/md5/bcache# echo 5bc072a8-ab17-446d-9744-e247949913c1 > attach
	[  865.276616] bcache: bch_cached_dev_attach() Caching md5 as bcache0 on set 5bc072a8-ab17-446d-9744-e247949913c1


F) Remove or replace a caching device::

	host:/sys/block/sda/sda7/bcache# echo 1 > detach
	[  695.872542] bcache: cached_dev_detach_finish() Caching disabled for sda7

	host:~# wipefs -a /dev/nvme0n1p4
	wipefs: error: /dev/nvme0n1p4: probing initialization failed: Device or resource busy
	Oops, it's disabled, but not unregistered, so it's still protected.

We need to go and unregister it::

	host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# ls -l cache0
	lrwxrwxrwx 1 root root 0 Feb 25 18:33 cache0 -> ../../../devices/pci0000:00/0000:00:1d.0/0000:70:00.0/nvme/nvme0/nvme0n1/nvme0n1p4/bcache/
	host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# echo 1 > stop
	kernel: [  917.041908] bcache: cache_set_free() Cache set b7ba27a1-2398-4649-8ae3-0959f57ba128 unregistered

Now we can wipe it::

	host:~# wipefs -a /dev/nvme0n1p4
	/dev/nvme0n1p4: 16 bytes were erased at offset 0x00001018 (bcache): c6 85 73 f6 4e 1a 45 ca 82 65 f5 7f 48 ba 6d 81


G) dm-crypt and bcache

First set up bcache unencrypted and then install dm-crypt on top of
/dev/bcache<N>. This will be faster than if you dm-crypt both the backing
and caching devices and then install bcache on top. [benchmarks?]


H) Stop/free a registered bcache to wipe and/or recreate it

Suppose that you need to free up all bcache references so that you can run
fdisk and re-register a changed partition table, which won't work if there
are any active backing or caching devices left on it:

1) Is it present in /dev/bcache* ? (there are times where it won't be)

   If so, it's easy::

	host:/sys/block/bcache0/bcache# echo 1 > stop

2) But if your backing device is gone, this won't work::

	host:/sys/block/bcache0# cd bcache
	bash: cd: bcache: No such file or directory

   In this case, you may have to unregister the dmcrypt block device that
   references this bcache to free it up::

	host:~# dmsetup remove oldds1
	bcache: bcache_device_free() bcache0 stopped
	bcache: cache_set_free() Cache set 5bc072a8-ab17-446d-9744-e247949913c1 unregistered

   This causes the backing bcache to be removed from /sys/fs/bcache and
   then it can be reused. This would be true of any block device stacking
   where bcache is a lower device.

3) In other cases, you can also look in /sys/fs/bcache/::

	host:/sys/fs/bcache# ls -l */{cache?,bdev?}
	lrwxrwxrwx 1 root root 0 Mar  5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/bdev1 -> ../../../devices/virtual/block/dm-1/bcache/
	lrwxrwxrwx 1 root root 0 Mar  5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/cache0 -> ../../../devices/virtual/block/dm-4/bcache/
	lrwxrwxrwx 1 root root 0 Mar  5 09:39 5bc072a8-ab17-446d-9744-e247949913c1/cache0 -> ../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/ata10/host9/target9:0:0/9:0:0:0/block/sdl/sdl2/bcache/

   The device names show which UUID is relevant; cd into that directory
   and stop the cache::

	host:/sys/fs/bcache/5bc072a8-ab17-446d-9744-e247949913c1# echo 1 > stop

   This will free up bcache references and let you reuse the partition for
   other purposes.



Troubleshooting performance
---------------------------

Bcache has a bunch of config options and tunables. The defaults are intended to
be reasonable for typical desktop and server workloads, but they're not what you
want for getting the best possible numbers when benchmarking.

 - Backing device alignment

   The default metadata size in bcache is 8k. If your backing device is
   RAID-based, then be sure to align this by a multiple of your stride
   width using `bcache make --data-offset`. If you intend to expand your
   disk array in the future, then multiply a series of primes by your
   raid stripe size to get the disk multiples that you would like.

   For example: if you have a 64k stripe size, then the following offset
   would provide alignment for many common RAID5 data spindle counts::

	64k * 2*2*2*3*3*5*7 = 161280k

   That space is wasted, but for only 157.5MB you can grow your RAID 5
   volume to the following data-spindle counts without re-aligning::

	3,4,5,6,7,8,9,10,12,14,15,18,20,21 ...

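   A quick sanity check of those figures: the prime product 2*2*2*3*3*5*7 is
   2520, so the offset is 64k * 2520 = 161280k, and 2520 divides evenly by
   every spindle count listed, keeping the offset a whole number of full
   stripes. The shell arithmetic below only verifies the numbers above:

   ```shell
   # Verify: 64k * 2520 = 161280k, and the offset is a whole number of
   # full stripes (64k * spindles) for each listed data-spindle count.
   offset_k=$((64 * 2 * 2 * 2 * 3 * 3 * 5 * 7))
   aligned=yes
   for spindles in 3 4 5 6 7 8 9 10 12 14 15 18 20 21; do
       [ $((offset_k % (64 * spindles))) -eq 0 ] || aligned=no
   done
   echo "$offset_k $aligned"   # 161280 yes
   ```
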
 - Bad write performance

   If write performance is not what you expected, you probably wanted to be
   running in writeback mode, which isn't the default (not due to a lack of
   maturity, but simply because in writeback mode you'll lose data if something
   happens to your SSD)::

	# echo writeback > /sys/block/bcache0/bcache/cache_mode

 - Bad performance, or traffic not going to the SSD that you'd expect

   By default, bcache doesn't cache everything. It tries to skip sequential IO -
   because you really want to be caching the random IO, and if you copy a 10
   gigabyte file you probably don't want that pushing 10 gigabytes of randomly
   accessed data out of your cache.

   But if you want to benchmark reads from cache, and you start out by writing
   an 8 gigabyte test file with fio, you'll want to disable that::

	# echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

   To set it back to the default (4 MB), do::

	# echo 4M > /sys/block/bcache0/bcache/sequential_cutoff

^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 367)  - Traffic's still going to the spindle/still getting cache misses
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 368) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 369)    In the real world, SSDs don't always keep up with disks - particularly with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 370)    slower SSDs, many disks being cached by one SSD, or mostly sequential IO. So
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 371)    you want to avoid being bottlenecked by the SSD and having it slow everything
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 372)    down.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 373) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 374)    To avoid that bcache tracks latency to the cache device, and gradually
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 375)    throttles traffic if the latency exceeds a threshold (it does this by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 376)    cranking down the sequential bypass).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 377) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 378)    You can disable this if you need to by setting the thresholds to 0::
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 379) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 380) 	# echo 0 > /sys/fs/bcache/<cache set>/congested_read_threshold_us
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 381) 	# echo 0 > /sys/fs/bcache/<cache set>/congested_write_threshold_us
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 382) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 383)    The default is 2000 us (2 milliseconds) for reads, and 20000 for writes.
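   If you zeroed the thresholds and later want the defaults back, write those
   default values in again::

	# echo 2000 > /sys/fs/bcache/<cache set>/congested_read_threshold_us
	# echo 20000 > /sys/fs/bcache/<cache set>/congested_write_threshold_us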

 - Still getting cache misses, of the same data

   One last issue that sometimes trips people up is actually an old bug, due to
   the way cache coherency is handled for cache misses. If a btree node is full,
   a cache miss won't be able to insert a key for the new data and the data
   won't be written to the cache.

   In practice this isn't an issue because as soon as a write comes along it'll
   cause the btree node to be split, and it takes almost no write traffic for
   this issue to stop being noticeable (especially since bcache's btree nodes
   are huge and index large regions of the device). But when you're
   benchmarking, if you're trying to warm the cache by reading a bunch of data
   and there's no other traffic - that can be a problem.

   Solution: warm the cache by doing writes, or use the testing branch (there's
   a fix for the issue there).


Sysfs - backing device
----------------------

Available at /sys/block/<bdev>/bcache, /sys/block/bcache*/bcache and
(if attached) /sys/fs/bcache/<cset-uuid>/bdev*

attach
  Echo the UUID of a cache set to this file to enable caching.
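  For example (the UUID is the cache set's directory name under
  /sys/fs/bcache)::

	# echo <cset-uuid> > /sys/block/<bdev>/bcache/attach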

cache_mode
  Can be one of writethrough, writeback, writearound or none.
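  For example, to switch a device to writearound::

	# echo writearound > /sys/block/bcache0/bcache/cache_mode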

clear_stats
  Writing to this file resets the running total stats (not the day/hour/5 minute
  decaying versions).

detach
  Write to this file to detach from a cache set. If there is dirty data in the
  cache, it will be flushed first.

dirty_data
  Amount of dirty data for this backing device in the cache. Continuously
  updated unlike the cache set's version, but may be slightly off.

label
  Name of underlying device.

readahead
  Size of readahead that should be performed.  Defaults to 0.  If set to e.g.
  1M, it will round cache miss reads up to that size, but without overlapping
  existing cache entries.

running
  1 if bcache is running, i.e. the /dev/bcache device exists (whether it's in
  passthrough mode or caching).

sequential_cutoff
  A sequential IO will bypass the cache once it passes this threshold; the
  most recent 128 IOs are tracked so sequential IO can be detected even when
  it isn't all done at once.

sequential_merge
  If non-zero, bcache keeps a list of the last 128 requests submitted to compare
  against all new requests to determine which new requests are sequential
  continuations of previous requests for the purpose of determining sequential
  cutoff. This is necessary if the sequential cutoff value is greater than the
  maximum acceptable sequential size for any single request.

state
  The backing device can be in one of four different states:

  no cache: Has never been attached to a cache set.

  clean: Part of a cache set, and there is no cached dirty data.

  dirty: Part of a cache set, and there is cached dirty data.

  inconsistent: The backing device was forcibly run by the user when there was
  dirty data cached but the cache set was unavailable; whatever data was on the
  backing device has likely been corrupted.

stop
  Write to this file to shut down the bcache device and close the backing
  device.

writeback_delay
  When dirty data is written to the cache and the cache previously contained
  no dirty data, bcache waits this many seconds before initiating writeback.
  Defaults to 30.

writeback_percent
  If nonzero, bcache tries to keep around this percentage of the cache dirty by
  throttling background writeback and using a PD controller to smoothly adjust
  the rate.
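  For example, to target roughly 10% of the cache dirty (10 is just an
  illustrative value)::

	# echo 10 > /sys/block/bcache0/bcache/writeback_percent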

writeback_rate
  Rate in sectors per second - if writeback_percent is nonzero, background
  writeback is throttled to this rate. Continuously adjusted by bcache but may
  also be set by the user.

writeback_running
  If off, writeback of dirty data will not take place at all. Dirty data will
  still be added to the cache until it is mostly full; only meant for
  benchmarking. Defaults to on.

Sysfs - backing device stats
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are directories with these numbers for a running total, as well as
versions that decay over the past day, hour and 5 minutes; they're also
aggregated in the cache set directory.

bypassed
  Amount of IO (both reads and writes) that has bypassed the cache.

cache_hits, cache_misses, cache_hit_ratio
  Hits and misses are counted per individual IO as bcache sees them; a
  partial hit is counted as a miss.

cache_bypass_hits, cache_bypass_misses
  Hits and misses for IO that is intended to skip the cache are still counted,
  but broken out here.

cache_miss_collisions
  Counts instances where data was going to be inserted into the cache from a
  cache miss, but raced with a write and the data was already present (usually
  0 since the synchronization for cache misses was rewritten).

cache_readaheads
  Count of times readahead occurred.

Sysfs - cache set
~~~~~~~~~~~~~~~~~

Available at /sys/fs/bcache/<cset-uuid>

average_key_size
  Average data per key in the btree.

bdev<0..n>
  Symlink to each of the attached backing devices.

block_size
  Block size of the cache devices.

btree_cache_size
  Amount of memory currently used by the btree cache.

bucket_size
  Size of buckets.

cache<0..n>
  Symlink to each of the cache devices comprising this cache set.

cache_available_percent
  Percentage of the cache device which doesn't contain dirty data, and could
  potentially be used for writeback.  This doesn't mean this space isn't used
  for clean cached data; the unused statistic (in priority_stats) is typically
  much lower.

clear_stats
  Clears the statistics associated with this cache.

dirty_data
  Amount of dirty data in the cache (updated when garbage collection runs).

flash_vol_create
  Echoing a size to this file (in human readable units, k/M/G) creates a thinly
  provisioned volume backed by the cache set.
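  For example (the size is illustrative; the new volume appears as another
  /dev/bcache<N> device)::

	# echo 100M > /sys/fs/bcache/<cset-uuid>/flash_vol_create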

io_error_halflife, io_error_limit
  These determine how many errors we accept before disabling the cache.
  Each error is decayed by the half life (in # ios).  If the decaying count
  reaches io_error_limit, dirty data is written out and the cache is disabled.

journal_delay_ms
  Journal writes will delay for up to this many milliseconds, unless a cache
  flush happens sooner. Defaults to 100.

root_usage_percent
  Percentage of the root btree node in use.  If this gets too high the node
  will split, increasing the tree depth.

stop
  Write to this file to shut down the cache set - waits until all attached
  backing devices have been shut down.

tree_depth
  Depth of the btree (a single node btree has depth 0).

unregister
  Detaches all backing devices and closes the cache devices; if dirty data is
  present it will disable writeback caching and wait for it to be flushed.

Sysfs - cache set internal
~~~~~~~~~~~~~~~~~~~~~~~~~~

This directory also exposes timings for a number of internal operations, with
separate files for average duration, average frequency, last occurrence and max
duration: garbage collection, btree read, btree node sorts and btree splits.

active_journal_entries
  Number of journal entries that are newer than the index.

btree_nodes
  Total nodes in the btree.

btree_used_percent
  Average fraction of the btree in use.

bset_tree_stats
  Statistics about the auxiliary search trees.

btree_cache_max_chain
  Longest chain in the btree node cache's hash table.

cache_read_races
  Counts instances where while data was being read from the cache, the bucket
  was reused and invalidated - i.e. where the pointer was stale after the read
  completed. When this occurs the data is reread from the backing device.

trigger_gc
  Writing to this file forces garbage collection to run.
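  For example (assuming these internal attributes live in an ``internal``
  subdirectory of the cache set directory)::

	# echo 1 > /sys/fs/bcache/<cset-uuid>/internal/trigger_gc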

Sysfs - Cache device
~~~~~~~~~~~~~~~~~~~~

Available at /sys/block/<cdev>/bcache

block_size
  Minimum granularity of writes - should match hardware sector size.

btree_written
  Sum of all btree writes, in (kilo/mega/giga) bytes.

bucket_size
  Size of buckets.

cache_replacement_policy
  One of lru, fifo or random.

discard
  Boolean; if on, a discard/TRIM will be issued to each bucket before it is
  reused. Defaults to off, since SATA TRIM is an unqueued command (and thus
  slow).
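  For example, to enable discards on a cache device::

	# echo 1 > /sys/block/<cdev>/bcache/discard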

freelist_percent
  Size of the freelist as a percentage of nbuckets. Can be written to in order
  to increase the number of buckets kept on the freelist, which lets you
  artificially reduce the size of the cache at runtime. Mostly for testing
  purposes (i.e. testing how different size caches affect your hit rate), but
  since buckets are discarded when they move on to the freelist, it will also
  make the SSD's garbage collection easier by effectively giving it more
  reserved space.

io_errors
  Number of errors that have occurred, decayed by io_error_halflife.

metadata_written
  Sum of all non-data writes (btree writes and all other metadata).

nbuckets
  Total buckets in this cache.

priority_stats
  Statistics about how recently data in the cache has been accessed.
  This can reveal your working set size.  Unused is the percentage of
  the cache that doesn't contain any data.  Metadata is bcache's
  metadata overhead.  Average is the average priority of cache buckets.
  Next is a list of quantiles with the priority threshold of each.

written
  Sum of all data that has been written to the cache; comparison with
  btree_written gives the amount of write inflation in bcache.