==============================
Device-mapper snapshot support
==============================

Device-mapper allows you, without massive data copying:

- To create snapshots of any block device, i.e. mountable, saved states of
  the block device which are also writable without interfering with the
  original content;
- To create device "forks", i.e. multiple different versions of the
  same data stream;
- To merge a snapshot of a block device back into the snapshot's origin
  device.

In the first two cases, dm copies only the chunks of data that get
changed and uses a separate copy-on-write (COW) block device for
storage.

For snapshot merge the contents of the COW storage are merged back into
the origin device.


There are three dm targets available:
snapshot, snapshot-origin, and snapshot-merge.

- snapshot-origin <origin>

which will normally have one or more snapshots based on it.
Reads will be mapped directly to the backing device. For each write, the
original data will be saved in the <COW device> of each snapshot to keep
its visible content unchanged, at least until the <COW device> fills up.
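
As an illustration, a snapshot-origin mapping can be loaded with dmsetup.
The device name, size, and backing device below are hypothetical, and the
command requires root:

```shell
# Map all 4194304 sectors (2 GiB) of a backing device through the
# snapshot-origin target (hypothetical devices; requires root).
dmsetup create base --table "0 4194304 snapshot-origin /dev/sdb1"
```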


- snapshot <origin> <COW device> <persistent?> <chunksize>
  [<# feature args> [<arg>]*]

A snapshot of the <origin> block device is created. Changed chunks of
<chunksize> sectors will be stored on the <COW device>. Writes will
only go to the <COW device>. Reads will come from the <COW device> or
from <origin> for unchanged data. <COW device> will often be
smaller than the origin and, if it fills up, the snapshot will become
useless and be disabled, returning errors. So it is important to monitor
the amount of free space and expand the <COW device> before it fills up.
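
The fill level can be monitored by parsing the snapshot status line (its
format is described in the last section below). This sketch only parses a
sample line; in real use the line would come from `dmsetup status
<snapshot-device>`, which requires root:

```shell
# Compute how full the COW device is from a snapshot status line of the
# form "<start> <length> snapshot <allocated>/<total> <metadata>".
# The sample line is hypothetical; a real one comes from `dmsetup status`.
status="0 8388608 snapshot 397896/2097152 1560"
pct=$(echo "$status" | awk '{ split($4, a, "/"); printf "%d", a[1] * 100 / a[2] }')
echo "COW usage: ${pct}%"   # prints "COW usage: 18%"
```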

<persistent?> is P (Persistent) or N (Not persistent - will not survive
after reboot). O (Overflow) can be added as a persistent store option
to allow userspace to advertise its support for seeing "Overflow" in the
snapshot status. So supported store types are "P", "PO" and "N".

The difference between persistent and transient is that with transient
snapshots less metadata must be saved on disk - it can be kept in
memory by the kernel.

When loading or unloading the snapshot target, the corresponding
snapshot-origin or snapshot-merge target must be suspended. A failure to
suspend the origin target could result in data corruption.
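
The required ordering can be sketched with dmsetup; the device names, size,
and chunk size below are hypothetical, and the commands require root:

```shell
# Suspend the origin, load the snapshot, then resume the origin.
# Loading the snapshot without suspending the origin risks corruption.
dmsetup suspend base
dmsetup create snap --table "0 4194304 snapshot /dev/sdb1 /dev/sdc1 P 16"
dmsetup resume base
```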

Optional features:

   discard_zeroes_cow - a discard issued to the snapshot device that
   maps to entire chunks will zero the corresponding exception(s) in
   the snapshot's exception store.

   discard_passdown_origin - a discard to the snapshot device is passed
   down to the snapshot-origin's underlying device. This doesn't cause
   copy-out to the snapshot exception store because the snapshot-origin
   target is bypassed.

   The discard_passdown_origin feature depends on the discard_zeroes_cow
   feature being enabled.
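
For example, a table line enabling both features passes a feature count of
2 followed by the two feature arguments. Devices, size, and chunk size here
are hypothetical:

```shell
# Hypothetical snapshot table with both optional discard features;
# the "2" is the <# feature args> count (requires root).
dmsetup create snap --table \
  "0 4194304 snapshot /dev/sdb1 /dev/sdc1 P 16 2 discard_zeroes_cow discard_passdown_origin"
```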


- snapshot-merge <origin> <COW device> <persistent> <chunksize>
  [<# feature args> [<arg>]*]

takes the same table arguments as the snapshot target except it only
works with persistent snapshots. This target assumes the role of the
"snapshot-origin" target and must not be loaded if the "snapshot-origin"
is still present for <origin>.

Creates a merging snapshot that takes control of the changed chunks
stored in the <COW device> of an existing snapshot, through a handover
procedure, and merges these chunks back into the <origin>. Once merging
has started (in the background) the <origin> may be opened and the merge
will continue while I/O is flowing to it. Changes to the <origin> are
deferred until the merging snapshot's corresponding chunk(s) have been
merged. Once merging has started the snapshot device, associated with
the "snapshot" target, will return -EIO when accessed.
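
A rough sketch of the handover with raw dmsetup (LVM2 automates this, as
shown in the sections below; the devices, size, and chunk size are
hypothetical and this is not the exact LVM2 sequence):

```shell
# Replace the origin's snapshot-origin table with snapshot-merge, reusing
# the snapshot's <COW device> and chunk size. The old snapshot device must
# no longer be in use. Requires root.
dmsetup suspend base
dmsetup load base --table "0 4194304 snapshot-merge /dev/sdb1 /dev/sdc1 P 16"
dmsetup resume base
```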


How snapshot is used by LVM2
============================
When you create the first LVM2 snapshot of a volume, four dm devices are used:

1) a device containing the original mapping table of the source volume;
2) a device used as the <COW device>;
3) a "snapshot" device, combining #1 and #2, which is the visible snapshot
   volume;
4) the "original" volume (which uses the device number used by the original
   source volume), whose table is replaced by a "snapshot-origin" mapping
   from device #1.

A fixed naming scheme is used, so with the following commands::

    lvcreate -L 1G -n base volumeGroup
    lvcreate -L 100M --snapshot -n snap volumeGroup/base

we'll have this situation (with volumes in above order)::

    # dmsetup table|grep volumeGroup

    volumeGroup-base-real: 0 2097152 linear 8:19 384
    volumeGroup-snap-cow: 0 204800 linear 8:19 2097536
    volumeGroup-snap: 0 2097152 snapshot 254:11 254:12 P 16
    volumeGroup-base: 0 2097152 snapshot-origin 254:11

    # ls -lL /dev/mapper/volumeGroup-*
    brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real
    brw------- 1 root root 254, 12 29 ago 18:15 /dev/mapper/volumeGroup-snap-cow
    brw------- 1 root root 254, 13 29 ago 18:15 /dev/mapper/volumeGroup-snap
    brw------- 1 root root 254, 10 29 ago 18:14 /dev/mapper/volumeGroup-base


How snapshot-merge is used by LVM2
==================================
A merging snapshot assumes the role of the "snapshot-origin" while
merging. As such the "snapshot-origin" is replaced with
"snapshot-merge". The "-real" device is not changed and the "-cow"
device is renamed to <origin name>-cow to aid LVM2's cleanup of the
merging snapshot after it completes. The "snapshot" that hands over its
COW device to the "snapshot-merge" is deactivated (unless using lvchange
--refresh); but if it is left active it will simply return I/O errors.

A snapshot will merge into its origin with the following command::

    lvconvert --merge volumeGroup/snap

and we'll now have this situation::

    # dmsetup table|grep volumeGroup

    volumeGroup-base-real: 0 2097152 linear 8:19 384
    volumeGroup-base-cow: 0 204800 linear 8:19 2097536
    volumeGroup-base: 0 2097152 snapshot-merge 254:11 254:12 P 16

    # ls -lL /dev/mapper/volumeGroup-*
    brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real
    brw------- 1 root root 254, 12 29 ago 18:16 /dev/mapper/volumeGroup-base-cow
    brw------- 1 root root 254, 10 29 ago 18:16 /dev/mapper/volumeGroup-base


How to determine when a merging is complete
===========================================
The snapshot-merge and snapshot status lines end with::

    <sectors_allocated>/<total_sectors> <metadata_sectors>

Both <sectors_allocated> and <total_sectors> include both data and metadata.
During merging, the number of sectors allocated gets smaller and
smaller. Merging has finished when the number of sectors holding data
is zero, in other words <sectors_allocated> == <metadata_sectors>.
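
This completion test is easy to script. The helper below only parses a
status line; the device name in the commented real-world usage is
hypothetical and that usage requires root:

```shell
# merge_done: succeed when a snapshot-merge status line shows
# <sectors_allocated> equal to <metadata_sectors> (no data left to merge).
merge_done() {
  echo "$1" | awk '{ split($4, a, "/"); exit !(a[1] == $5) }'
}

# Real usage (requires root):
#   while ! merge_done "$(dmsetup status volumeGroup-base)"; do sleep 1; done
merge_done "0 8388608 snapshot-merge 16/2097152 16" && echo "Merging has finished."
```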

Here is a practical example (using a hybrid of lvm and dmsetup commands)::

    # lvs
      LV          VG          Attr   LSize Origin      Snap%  Move Log Copy%  Convert
      base        volumeGroup owi-a- 4.00g
      snap        volumeGroup swi-a- 1.00g base         18.97

    # dmsetup status volumeGroup-snap
    0 8388608 snapshot 397896/2097152 1560
                                      ^^^^ metadata sectors

    # lvconvert --merge -b volumeGroup/snap
      Merging of volume snap started.

    # lvs volumeGroup/snap
      LV          VG          Attr   LSize Origin      Snap%  Move Log Copy%  Convert
      base        volumeGroup Owi-a- 4.00g              17.23

    # dmsetup status volumeGroup-base
    0 8388608 snapshot-merge 281688/2097152 1104

    # dmsetup status volumeGroup-base
    0 8388608 snapshot-merge 180480/2097152 712

    # dmsetup status volumeGroup-base
    0 8388608 snapshot-merge 16/2097152 16

Merging has finished.

::

    # lvs
      LV          VG          Attr   LSize Origin      Snap%  Move Log Copy%  Convert
      base        volumeGroup owi-a- 4.00g