Orange Pi5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

=======
dm-raid
=======

The device-mapper RAID (dm-raid) target provides a bridge from DM to MD.
It allows the MD RAID drivers to be accessed using a device-mapper
interface.


Mapping Table Interface
-----------------------
The target is named "raid" and it accepts the following parameters::

  <raid_type> <#raid_params> <raid_params> \
    <#raid_devs> <metadata_dev0> <dev0> [.. <metadata_devN> <devN>]
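
Such a table line is normally passed to dmsetup when the mapped device is
created. For instance (an illustrative sketch: the name 'my_raid' is made up
and the table contents match the first entry in the Example Tables section
below)::

  dmsetup create my_raid --table \
    "0 1960893648 raid raid4 1 2048 5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81"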

<raid_type>:

  ============= ===============================================================
  raid0         RAID0 striping (no resilience)
  raid1         RAID1 mirroring
  raid4         RAID4 with dedicated last parity disk
  raid5_n       RAID5 with dedicated last parity disk supporting takeover
                Same as raid4

                - Transitory layout
  raid5_la      RAID5 left asymmetric

                - rotating parity 0 with data continuation
  raid5_ra      RAID5 right asymmetric

                - rotating parity N with data continuation
  raid5_ls      RAID5 left symmetric

                - rotating parity 0 with data restart
  raid5_rs      RAID5 right symmetric

                - rotating parity N with data restart
  raid6_zr      RAID6 zero restart

                - rotating parity zero (left-to-right) with data restart
  raid6_nr      RAID6 N restart

                - rotating parity N (right-to-left) with data restart
  raid6_nc      RAID6 N continue

                - rotating parity N (right-to-left) with data continuation
  raid6_n_6     RAID6 with dedicated parity disks

                - parity and Q-syndrome on the last 2 disks;
                  layout for takeover from/to raid4/raid5_n
  raid6_la_6    Same as "raid5_la" plus dedicated last Q-syndrome disk

                - layout for takeover from raid5_la from/to raid6
  raid6_ra_6    Same as "raid5_ra" plus dedicated last Q-syndrome disk

                - layout for takeover from raid5_ra from/to raid6
  raid6_ls_6    Same as "raid5_ls" plus dedicated last Q-syndrome disk

                - layout for takeover from raid5_ls from/to raid6
  raid6_rs_6    Same as "raid5_rs" plus dedicated last Q-syndrome disk

                - layout for takeover from raid5_rs from/to raid6
  raid10        Various RAID10 inspired algorithms chosen by additional params
                (see raid10_format and raid10_copies below)

                - RAID10: Striped Mirrors (aka 'Striping on top of mirrors')
                - RAID1E: Integrated Adjacent Stripe Mirroring
                - RAID1E: Integrated Offset Stripe Mirroring
                - and other similar RAID10 variants
  ============= ===============================================================

  Reference: Chapter 4 of
  https://www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf

<#raid_params>: The number of parameters that follow.

<raid_params> consists of

    Mandatory parameters:
        <chunk_size>:
                Chunk size in sectors.  This parameter is often known as
                "stripe size".  It is the only mandatory parameter and
                is placed first.

    followed by optional parameters (in any order):
        [sync|nosync]
                Force or prevent RAID initialization.

        [rebuild <idx>]
                Rebuild drive number 'idx' (first drive is 0).

        [daemon_sleep <ms>]
                Interval between runs of the bitmap daemon that clears
                bits.  A longer interval means less bitmap I/O but
                resyncing after a failure is likely to take longer.

        [min_recovery_rate <kB/sec/disk>]
                Throttle RAID initialization
        [max_recovery_rate <kB/sec/disk>]
                Throttle RAID initialization
        [write_mostly <idx>]
                Mark drive index 'idx' write-mostly.
        [max_write_behind <sectors>]
                See '--write-behind=' (man mdadm)
        [stripe_cache <sectors>]
                Stripe cache size (RAID 4/5/6 only)
        [region_size <sectors>]
                The region_size multiplied by the number of regions is the
                logical size of the array.  The bitmap records the device
                synchronisation state for each region.

        [raid10_copies   <# copies>], [raid10_format   <near|far|offset>]
                These two options are used to alter the default layout of
                a RAID10 configuration.  The number of copies can be
                specified, but the default is 2.  There are also three
                variations to how the copies are laid down - the default
                is "near".  Near copies are what most people think of with
                respect to mirroring.  If these options are left unspecified,
                or 'raid10_copies 2' and/or 'raid10_format near' are given,
                then the layouts for 2, 3 and 4 devices are:

                ========         ==========        ==============
                2 drives         3 drives          4 drives
                ========         ==========        ==============
                A1  A1           A1  A1  A2        A1  A1  A2  A2
                A2  A2           A2  A3  A3        A3  A3  A4  A4
                A3  A3           A4  A4  A5        A5  A5  A6  A6
                A4  A4           A5  A6  A6        A7  A7  A8  A8
                ..  ..           ..  ..  ..        ..  ..  ..  ..
                ========         ==========        ==============

                The 2-device layout is equivalent to 2-way RAID1.  The 4-device
                layout is what a traditional RAID10 would look like.  The
                3-device layout is what might be called a 'RAID1E - Integrated
                Adjacent Stripe Mirroring'.

                If 'raid10_copies 2' and 'raid10_format far', then the layouts
                for 2, 3 and 4 devices are:

                ========             ============             ===================
                2 drives             3 drives                  4 drives
                ========             ============             ===================
                A1  A2               A1   A2   A3         A1   A2   A3   A4
                A3  A4               A4   A5   A6         A5   A6   A7   A8
                A5  A6               A7   A8   A9         A9   A10  A11  A12
                ..  ..               ..   ..   ..         ..   ..   ..   ..
                A2  A1               A3   A1   A2         A2   A1   A4   A3
                A4  A3               A6   A4   A5         A6   A5   A8   A7
                A6  A5               A9   A7   A8         A10  A9   A12  A11
                ..  ..               ..   ..   ..         ..   ..   ..   ..
                ========             ============             ===================

                If 'raid10_copies 2' and 'raid10_format offset', then the
                layouts for 2, 3 and 4 devices are:

                ========       ==========         ================
                2 drives       3 drives           4 drives
                ========       ==========         ================
                A1  A2         A1  A2  A3         A1  A2  A3  A4
                A2  A1         A3  A1  A2         A2  A1  A4  A3
                A3  A4         A4  A5  A6         A5  A6  A7  A8
                A4  A3         A6  A4  A5         A6  A5  A8  A7
                A5  A6         A7  A8  A9         A9  A10 A11 A12
                A6  A5         A9  A7  A8         A10 A9  A12 A11
                ..  ..         ..  ..  ..         ..  ..  ..  ..
                ========       ==========         ================

                Here we see layouts closely akin to 'RAID1E - Integrated
                Offset Stripe Mirroring'.

        [delta_disks <N>]
                The delta_disks option value (-251 < N < +251) triggers
                device removal (negative value) or device addition (positive
                value) on any reshape-capable raid level (4/5/6 and 10).
                RAID levels 4/5/6 allow for addition of devices (metadata
                and data device tuples); raid10_near and raid10_offset only
                allow for device addition, and raid10_far does not support
                any reshaping at all.
                A minimum number of devices has to be kept to maintain
                resilience: 3 devices for raid4/5 and 4 devices for raid6.

        [data_offset <sectors>]
                This option value defines the offset into each data device
                where the data starts.  This is used to provide out-of-place
                reshaping space to avoid writing over data while
                changing the layout of stripes, so that an interruption/crash
                may happen at any time without the risk of losing data.
                E.g. when adding devices to an existing raid set during
                forward reshaping, the out-of-place space will be allocated
                at the beginning of each raid device.  The kernel raid4/5/6/10
                MD personalities supporting such device addition will read the
                data from the existing first stripes (those with the smaller
                number of stripes) starting at data_offset, fill up a new
                stripe with the larger number of stripes, calculate the
                redundancy blocks (CRC/Q-syndrome) and write that new stripe
                to offset 0.  The same will be applied to all N-1 other new
                stripes.  This out-of-place scheme is used to change the
                RAID type (i.e. the allocation algorithm) as well, e.g.
                changing from raid5_ls to raid5_n.

        [journal_dev <dev>]
                This option adds a journal device to raid4/5/6 raid sets and
                uses it to close the 'write hole' caused by the non-atomic
                updates to the component devices, which can cause data loss
                during recovery.  The journal device is used in write-through
                mode, thus causing writes to be throttled compared to
                non-journaled raid4/5/6 sets.
                Takeover/reshape is not possible with a raid4/5/6 journal
                device; it has to be deconfigured before requesting these.

        [journal_mode <mode>]
                This option sets the caching mode on journaled raid4/5/6 raid
                sets (see 'journal_dev <dev>' above) to 'writethrough' or
                'writeback'.  If 'writeback' is selected the journal device
                has to be resilient and must not suffer from the 'write hole'
                problem itself (e.g. use raid1 or raid10) to avoid a single
                point of failure.

<#raid_devs>: The number of devices composing the array.
        Each device consists of two entries.  The first is the device
        containing the metadata (if any); the second is the one containing the
        data.  A maximum of 64 metadata/data device entries is supported
        up to target version 1.8.0; version 1.9.0 supports up to 253, a limit
        enforced by the underlying MD kernel runtime.

        If a drive has failed or is missing at creation time, a '-' can be
        given for both the metadata and data drives for a given position.


Example Tables
--------------

::

  # RAID4 - 4 data drives, 1 parity (no metadata devices)
  # No metadata devices specified to hold superblock/bitmap info
  # Chunk size of 1MiB
  # (Lines separated for easy reading)

  0 1960893648 raid \
          raid4 1 2048 \
          5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81

  # RAID4 - 4 data drives, 1 parity (with metadata devices)
  # Chunk size of 1MiB, force RAID initialization,
  #       min recovery rate at 20 kiB/sec/disk

  0 1960893648 raid \
          raid4 4 2048 sync min_recovery_rate 20 \
          5 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66 8:81 8:82
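
As a further illustrative sketch (the device numbers, sizes and target length
below are made up rather than taken from real hardware), a RAID10 set using
two "far" copies and a journaled RAID5 set could be described like this::

  # RAID10 - 4 drives, 2 copies, "far" layout (no metadata devices)
  # Chunk size of 1MiB
  0 1960893648 raid \
          raid10 5 2048 raid10_copies 2 raid10_format far \
          4 - 8:17 - 8:33 - 8:49 - 8:65

  # RAID5 (left symmetric) - 3 data drives, 1 parity, with metadata devices
  # and a write-back journal on 9:1 (see journal_dev/journal_mode above)
  0 1960893648 raid \
          raid5_ls 5 2048 journal_dev 9:1 journal_mode writeback \
          4 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66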


Status Output
-------------
'dmsetup table' displays the table used to construct the mapping.
The optional parameters are always printed in the order listed
above with "sync" or "nosync" always output ahead of the other
arguments, regardless of the order used when originally loading the table.
Arguments that can be repeated are ordered by value.
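
For example (a sketch with an arbitrary device name and the device numbers
from the examples above; output wrapped here for readability), a table loaded
with 'min_recovery_rate 20 sync' is reported back with "sync" first::

  dmsetup table my_raid
  0 1960893648 raid raid4 4 2048 sync min_recovery_rate 20 \
    5 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66 8:81 8:82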


'dmsetup status' yields information on the state and health of the array.
The output is as follows (normally a single line, but expanded here for
clarity)::

  1: <s> <l> raid \
  2:      <raid_type> <#devices> <health_chars> \
  3:      <sync_ratio> <sync_action> <mismatch_cnt>

Line 1 is the standard output produced by device-mapper.

Lines 2 & 3 are produced by the raid target and are best explained by example::

        0 1960893648 raid raid4 5 AAAAA 2/490221568 init 0

Here we can see the RAID type is raid4, there are 5 devices - all of
which are 'A'live, and the array is 2/490221568 complete with its initial
recovery.  Here is a fuller description of the individual fields:

        =============== =========================================================
        <raid_type>     Same as the <raid_type> used to create the array.
        <health_chars>  One char for each device, indicating:

                        - 'A' = alive and in-sync
                        - 'a' = alive but not in-sync
                        - 'D' = dead/failed.
        <sync_ratio>    The ratio indicating how much of the array has undergone
                        the process described by 'sync_action'.  If the
                        'sync_action' is "check" or "repair", then the process
                        of "resync" or "recover" can be considered complete.
        <sync_action>   One of the following possible states:

                        idle
                                - No synchronization action is being performed.
                        frozen
                                - The current action has been halted.
                        resync
                                - Array is undergoing its initial synchronization
                                  or is resynchronizing after an unclean shutdown
                                  (possibly aided by a bitmap).
                        recover
                                - A device in the array is being rebuilt or
                                  replaced.
                        check
                                - A user-initiated full check of the array is
                                  being performed.  All blocks are read and
                                  checked for consistency.  The number of
                                  discrepancies found is recorded in
                                  <mismatch_cnt>.  No changes are made to the
                                  array by this action.
                        repair
                                - The same as "check", but discrepancies are
                                  corrected.
                        reshape
                                - The array is undergoing a reshape.
        <mismatch_cnt>  The number of discrepancies found between mirror copies
                        in RAID1/10 or wrong parity values found in RAID4/5/6.
                        This value is valid only after a "check" of the array
                        is performed.  A healthy array has a 'mismatch_cnt' of 0.
        <data_offset>   The current data offset to the start of the user data on
                        each component device of a raid set (see the respective
                        raid parameter to support out-of-place reshaping).
        <journal_char>  - 'A' - active write-through journal device.
                        - 'a' - active write-back journal device.
                        - 'D' - dead journal device.
                        - '-' - no journal device.
        =============== =========================================================
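
As an illustrative sketch (not captured output), an idle, fully synchronized
raid5 set with five healthy devices, a zero data offset and an active
write-through journal device might report::

        0 1960893648 raid raid5_ls 5 AAAAA 490221568/490221568 idle 0 0 A

The trailing <data_offset> and <journal_char> fields only appear on target
versions new enough to support the corresponding features (see the version
history below).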


Message Interface
-----------------
The dm-raid target will accept certain actions through the 'message' interface.
('man dmsetup' for more information on the message interface.)  These actions
include:

        ========= ================================================
        "idle"    Halt the current sync action.
        "frozen"  Freeze the current sync action.
        "resync"  Initiate/continue a resync.
        "recover" Initiate/continue a recover process.
        "check"   Initiate a check (i.e. a "scrub") of the array.
        "repair"  Initiate a repair of the array.
        ========= ================================================
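
For example, a scrub of a device named 'my_raid' (a hypothetical name) could
be started and then monitored via the status interface; the sector argument
required by dmsetup is conventionally 0::

        dmsetup message my_raid 0 check
        dmsetup status my_raid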


Discard Support
---------------
The implementation of discard support among hardware vendors varies.
When a block is discarded, some storage devices will return zeroes when
the block is read.  These devices set the 'discard_zeroes_data'
attribute.  Other devices will return random data.  Confusingly, some
devices that advertise 'discard_zeroes_data' will not reliably return
zeroes when discarded blocks are read!  Since RAID 4/5/6 uses blocks
from a number of devices to calculate parity blocks and (for performance
reasons) relies on 'discard_zeroes_data' being reliable, it is important
that the devices be consistent.  Blocks may be discarded in the middle
of a RAID 4/5/6 stripe and if subsequent read results are not
consistent, the parity blocks may be calculated differently at any time,
making the parity blocks useless for redundancy.  It is important to
understand how your hardware behaves with discards if you are going to
enable discards with RAID 4/5/6.

Since the behavior of storage devices is unreliable in this respect,
even when reporting 'discard_zeroes_data', by default RAID 4/5/6
discard support is disabled -- this ensures data integrity at the
expense of losing some performance.

Storage devices that properly support 'discard_zeroes_data' are
increasingly whitelisted in the kernel and can thus be trusted.

For trusted devices, the following dm-raid module parameter can be set
to safely enable discard support for RAID 4/5/6:

    'devices_handle_discard_safely'
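
For example (assuming the target is built as the dm_raid module, so its
parameters appear under /sys/module/dm_raid/parameters), the flag could be
set at module load time or at runtime::

    # at module load time
    modprobe dm-raid devices_handle_discard_safely=1

    # at runtime, for an already loaded module
    echo 1 > /sys/module/dm_raid/parameters/devices_handle_discard_safely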


Version History
---------------

::

 1.0.0  Initial version.  Support for RAID 4/5/6
 1.1.0  Added support for RAID 1
 1.2.0  Handle creation of arrays that contain failed devices.
 1.3.0  Added support for RAID 10
 1.3.1  Allow device replacement/rebuild for RAID 10
 1.3.2  Fix/improve redundancy checking for RAID10
 1.4.0  Non-functional change.  Removes arg from mapping function.
 1.4.1  RAID10 fix redundancy validation checks (commit 55ebbb5).
 1.4.2  Add RAID10 "far" and "offset" algorithm support.
 1.5.0  Add message interface to allow manipulation of the sync_action.
        New status (STATUSTYPE_INFO) fields: sync_action and mismatch_cnt.
 1.5.1  Add ability to restore transiently failed devices on resume.
 1.5.2  'mismatch_cnt' is zero unless [last_]sync_action is "check".
 1.6.0  Add discard support (and devices_handle_discard_safely module param).
 1.7.0  Add support for MD RAID0 mappings.
 1.8.0  Explicitly check for compatible flags in the superblock metadata
        and refuse to start the raid set if any are set by a newer
        target version, thus avoiding data corruption on a raid set
        with a reshape in progress.
 1.9.0  Add support for RAID level takeover/reshape/region size
        and set size reduction.
 1.9.1  Fix activation of existing RAID 4/10 mapped devices
 1.9.2  Don't emit '- -' on the status table line in case the constructor
        fails reading a superblock.  Correctly emit 'maj:min1 maj:min2' and
        'D' on the status line.  If '- -' is passed into the constructor, emit
        '- -' on the table line and '-' as the status line health character.
 1.10.0 Add support for raid4/5/6 journal device
 1.10.1 Fix data corruption on reshape request
 1.11.0 Fix table line argument order
        (wrong raid10_copies/raid10_format sequence)
 1.11.1 Add raid4/5/6 journal write-back support via journal_mode option
 1.12.1 Fix for MD deadlock between mddev_suspend() and md_write_start() available
 1.13.0 Fix dev_health status at end of "recover" (was 'a', now 'A')
 1.13.1 Fix deadlock caused by early md_stop_writes().  Also fix size and
        state races.
 1.13.2 Fix raid redundancy validation and avoid keeping raid set frozen
 1.14.0 Fix reshape race on small devices.  Fix stripe adding reshape
        deadlock/potential data corruption.  Update superblock when
        specific devices are requested via rebuild.  Fix RAID leg
        rebuild errors.
 1.15.0 Fix size extensions not being synchronized in case of new MD bitmap
        pages allocated; also fix those not occurring after previous reductions
 1.15.1 Fix argument count and arguments for rebuild/write_mostly/journal_(dev|mode)
        on the status line.