Orange Pi 5 kernel

Deprecated Linux kernel 5.10.110 for OrangePi 5/5B/5+ boards

// SPDX-License-Identifier: GPL-2.0

#include <linux/kernel.h>
#include <linux/irqflags.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/bug.h>
#include "printk_ringbuffer.h"

/**
 * DOC: printk_ringbuffer overview
 *
 * Data Structure
 * --------------
 * The printk_ringbuffer is made up of 2 internal ringbuffers:
 *
 *   desc_ring
 *     A ring of descriptors and their meta data (such as sequence number,
 *     timestamp, loglevel, etc.) as well as internal state information about
 *     the record and logical positions specifying where in the other
 *     ringbuffer the text strings are located.
 *
 *   text_data_ring
 *     A ring of data blocks. A data block consists of an unsigned long
 *     integer (ID) that maps to a desc_ring index followed by the text
 *     string of the record.
 *
 * The internal state information of a descriptor is the key element to allow
 * readers and writers to locklessly synchronize access to the data.
 *
 * Implementation
 * --------------
 *
 * Descriptor Ring
 * ~~~~~~~~~~~~~~~
 * The descriptor ring is an array of descriptors. A descriptor contains
 * essential meta data to track the data of a printk record using
 * blk_lpos structs pointing to associated text data blocks (see
 * "Data Ring" below). Each descriptor is assigned an ID that maps
 * directly to index values of the descriptor array and has a state. The ID
 * and the state are bitwise combined into a single descriptor field named
 * @state_var, allowing ID and state to be synchronously and atomically
 * updated.
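 *
 * As a sketch of the packing (the real macros are DESC_SV(), DESC_ID()
 * and DESC_STATE() in printk_ringbuffer.h), the two most significant
 * bits of @state_var hold the state and the remaining bits hold the ID::
 *
 *	state = state_val >> (BITS_PER_LONG - 2);
 *	id    = state_val & (~0UL >> 2);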
 *
 * Descriptors have four states:
 *
 *   reserved
 *     A writer is modifying the record.
 *
 *   committed
 *     The record and all its data are written. A writer can reopen the
 *     descriptor (transitioning it back to reserved), but in the committed
 *     state the data is consistent.
 *
 *   finalized
 *     The record and all its data are complete and available for reading. A
 *     writer cannot reopen the descriptor.
 *
 *   reusable
 *     The record exists, but its text and/or meta data may no longer be
 *     available.
 *
 * Querying the @state_var of a record requires providing the ID of the
 * descriptor to query. This can yield a possible fifth (pseudo) state:
 *
 *   miss
 *     The descriptor being queried has an unexpected ID.
 *
 * The descriptor ring has a @tail_id that contains the ID of the oldest
 * descriptor and @head_id that contains the ID of the newest descriptor.
 *
 * When a new descriptor should be created (and the ring is full), the tail
 * descriptor is invalidated by first transitioning to the reusable state and
 * then invalidating all tail data blocks up to and including the data blocks
 * associated with the tail descriptor (for the text ring). Then
 * @tail_id is advanced, followed by advancing @head_id. And finally the
 * @state_var of the new descriptor is initialized to the new ID and reserved
 * state.
 *
 * The @tail_id can only be advanced if the new @tail_id would be in the
 * committed or reusable queried state. This guarantees that a valid
 * sequence number for the tail is always available.
 *
 * Descriptor Finalization
 * ~~~~~~~~~~~~~~~~~~~~~~~
 * When a writer calls the commit function prb_commit(), record data is
 * fully stored and is consistent within the ringbuffer. However, a writer can
 * reopen that record, claiming exclusive access (as with prb_reserve()), and
 * modify that record. When finished, the writer must again commit the record.
 *
 * In order for a record to be made available to readers (and also become
 * recyclable for writers), it must be finalized. A finalized record cannot be
 * reopened and can never become "unfinalized". Record finalization can occur
 * in three different scenarios:
 *
 *   1) A writer can simultaneously commit and finalize its record by calling
 *      prb_final_commit() instead of prb_commit().
 *
 *   2) When a new record is reserved and the previous record has been
 *      committed via prb_commit(), that previous record is automatically
 *      finalized.
 *
 *   3) When a record is committed via prb_commit() and a newer record
 *      already exists, the record being committed is automatically finalized.
 *
 * Data Ring
 * ~~~~~~~~~
 * The text data ring is a byte array composed of data blocks. Data blocks are
 * referenced by blk_lpos structs that point to the logical position of the
 * beginning of a data block and the beginning of the next adjacent data
 * block. Logical positions are mapped directly to index values of the byte
 * array ringbuffer.
 *
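 * For example, with a hypothetical 32-byte data ring (size_bits == 5),
 * logical position 41 maps to byte index 9 of wrap 1 (using the
 * DATA_INDEX() and DATA_WRAPS() macros defined below)::
 *
 *	DATA_INDEX(ring, 41) == 41 & 31 == 9
 *	DATA_WRAPS(ring, 41) == 41 >> 5 == 1
 *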
 * Each data block consists of an ID followed by the writer data. The ID is
 * the identifier of a descriptor that is associated with the data block. A
 * given data block is considered valid if all of the following conditions
 * are met:
 *
 *   1) The descriptor associated with the data block is in the committed
 *      or finalized queried state.
 *
 *   2) The blk_lpos struct within the descriptor associated with the data
 *      block references back to the same data block.
 *
 *   3) The data block is within the head/tail logical position range.
 *
 * If the writer data of a data block would extend beyond the end of the
 * byte array, only the ID of the data block is stored at the logical
 * position and the full data block (ID and writer data) is stored at the
 * beginning of the byte array. The referencing blk_lpos will point to the
 * ID before the wrap and the next data block will be at the logical
 * position adjacent to the full data block after the wrap.
 *
 * Data rings have a @tail_lpos that points to the beginning of the oldest
 * data block and a @head_lpos that points to the logical position of the
 * next (not yet existing) data block.
 *
 * When a new data block should be created (and the ring is full), tail data
 * blocks will first be invalidated by putting their associated descriptors
 * into the reusable state and then pushing the @tail_lpos forward beyond
 * them. Then the @head_lpos is pushed forward and is associated with a new
 * descriptor. If a data block is not valid, the @tail_lpos cannot be
 * advanced beyond it.
 *
 * Info Array
 * ~~~~~~~~~~
 * The general meta data of printk records is stored in printk_info structs,
 * which sit in an array with the same number of elements as the descriptor
 * ring. Each info corresponds to the descriptor of the same index in the
 * descriptor ring. Info validity is confirmed by evaluating the
 * corresponding descriptor before and after loading the info.
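 *
 * This is the check/copy/re-check pattern implemented by desc_read()
 * below. Roughly sketched::
 *
 *	state_val = atomic_long_read(state_var);	// check state
 *	smp_rmb();
 *	memcpy(...);					// copy the content
 *	smp_rmb();
 *	state_val = atomic_long_read(state_var);	// re-check state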
 *
 * Usage
 * -----
 * Here are some simple examples demonstrating writers and readers. For the
 * examples a global ringbuffer (test_rb) is available (which is not the
 * actual ringbuffer used by printk)::
 *
 *	DEFINE_PRINTKRB(test_rb, 15, 5);
 *
 * This ringbuffer allows up to 32768 records (2 ^ 15) and has a size of
 * 1 MiB (2 ^ (15 + 5)) for text data.
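 *
 * The arithmetic behind those numbers (the third parameter sets the
 * average text data size per record, here 2 ^ 5 == 32 bytes)::
 *
 *	records   = 2 ^ 15       == 32768
 *	text data = 2 ^ (15 + 5) == 2 ^ 20 bytes == 1 MiB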
 *
 * Sample writer code::
 *
 *	const char *textstr = "message text";
 *	struct prb_reserved_entry e;
 *	struct printk_record r;
 *
 *	// specify how much to allocate
 *	prb_rec_init_wr(&r, strlen(textstr) + 1);
 *
 *	if (prb_reserve(&e, &test_rb, &r)) {
 *		snprintf(r.text_buf, r.text_buf_size, "%s", textstr);
 *
 *		r.info->text_len = strlen(textstr);
 *		r.info->ts_nsec = local_clock();
 *		r.info->caller_id = printk_caller_id();
 *
 *		// commit and finalize the record
 *		prb_final_commit(&e);
 *	}
 *
 * Note that additional writer functions are available to extend a record
 * after it has been committed but not yet finalized. This can be done as
 * long as no new records have been reserved and the caller is the same.
 *
 * Sample writer code (record extending)::
 *
 *		// alternate rest of previous example
 *
 *		r.info->text_len = strlen(textstr);
 *		r.info->ts_nsec = local_clock();
 *		r.info->caller_id = printk_caller_id();
 *
 *		// commit the record (but do not finalize yet)
 *		prb_commit(&e);
 *	}
 *
 *	...
 *
 *	// specify additional 5 bytes text space to extend
 *	prb_rec_init_wr(&r, 5);
 *
 *	// try to extend, but only if it does not exceed 32 bytes
 *	if (prb_reserve_in_last(&e, &test_rb, &r, printk_caller_id(), 32)) {
 *		snprintf(&r.text_buf[r.info->text_len],
 *			 r.text_buf_size - r.info->text_len, "hello");
 *
 *		r.info->text_len += 5;
 *
 *		// commit and finalize the record
 *		prb_final_commit(&e);
 *	}
 *
 * Sample reader code::
 *
 *	struct printk_info info;
 *	struct printk_record r;
 *	char text_buf[32];
 *	u64 seq;
 *
 *	prb_rec_init_rd(&r, &info, &text_buf[0], sizeof(text_buf));
 *
 *	prb_for_each_record(0, &test_rb, &seq, &r) {
 *		if (info.seq != seq)
 *			pr_warn("lost %llu records\n", info.seq - seq);
 *
 *		if (info.text_len > r.text_buf_size) {
 *			pr_warn("record %llu text truncated\n", info.seq);
 *			text_buf[r.text_buf_size - 1] = 0;
 *		}
 *
 *		pr_info("%llu: %llu: %s\n", info.seq, info.ts_nsec,
 *			&text_buf[0]);
 *	}
 *
 * Note that additional less convenient reader functions are available to
 * allow complex record access.
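 *
 * For example, a single record can be read by sequence number (a sketch
 * using the prb_read_valid() interface declared in printk_ringbuffer.h)::
 *
 *	prb_rec_init_rd(&r, &info, &text_buf[0], sizeof(text_buf));
 *
 *	if (prb_read_valid(&test_rb, seq, &r))
 *		pr_info("%llu: %s\n", info.seq, &text_buf[0]);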
 *
 * ABA Issues
 * ~~~~~~~~~~
 * To help avoid ABA issues, descriptors are referenced by IDs (array index
 * values combined with tagged bits counting array wraps) and data blocks are
 * referenced by logical positions (array index values combined with tagged
 * bits counting array wraps). However, on 32-bit systems the number of
 * tagged bits is relatively small such that an ABA incident is (at least
 * theoretically) possible. For example, if 4 million maximally sized (1KiB)
 * printk messages were to occur in NMI context on a 32-bit system, the
 * interrupted context would not be able to recognize that the 32-bit integer
 * completely wrapped and thus represents a different data block than the one
 * the interrupted context expects.
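 *
 * The arithmetic behind that example::
 *
 *	2 ^ 22 records * 2 ^ 10 bytes == 2 ^ 32 bytes
 *
 * which is exactly one full wrap of a 32-bit logical position.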
 *
 * To help combat this possibility, additional state checking is performed
 * (such as using cmpxchg() even though set() would suffice). These extra
 * checks are commented as such and will hopefully catch any ABA issue that
 * a 32-bit system might experience.
 *
 * Memory Barriers
 * ~~~~~~~~~~~~~~~
 * Multiple memory barriers are used. To simplify proving correctness and
 * generating litmus tests, lines of code related to memory barriers
 * (loads, stores, and the associated memory barriers) are labeled::
 *
 *	LMM(function:letter)
 *
 * Comments reference the labels using only the "function:letter" part.
 *
 * The memory barrier pairs and their ordering are:
 *
 *   desc_reserve:D / desc_reserve:B
 *     push descriptor tail (id), then push descriptor head (id)
 *
 *   desc_reserve:D / data_push_tail:B
 *     push data tail (lpos), then set new descriptor reserved (state)
 *
 *   desc_reserve:D / desc_push_tail:C
 *     push descriptor tail (id), then set new descriptor reserved (state)
 *
 *   desc_reserve:D / prb_first_seq:C
 *     push descriptor tail (id), then set new descriptor reserved (state)
 *
 *   desc_reserve:F / desc_read:D
 *     set new descriptor id and reserved (state), then allow writer changes
 *
 *   data_alloc:A (or data_realloc:A) / desc_read:D
 *     set old descriptor reusable (state), then modify new data block area
 *
 *   data_alloc:A (or data_realloc:A) / data_push_tail:B
 *     push data tail (lpos), then modify new data block area
 *
 *   _prb_commit:B / desc_read:B
 *     store writer changes, then set new descriptor committed (state)
 *
 *   desc_reopen_last:A / _prb_commit:B
 *     set descriptor reserved (state), then read descriptor data
 *
 *   _prb_commit:B / desc_reserve:D
 *     set new descriptor committed (state), then check descriptor head (id)
 *
 *   data_push_tail:D / data_push_tail:A
 *     set descriptor reusable (state), then push data tail (lpos)
 *
 *   desc_push_tail:B / desc_reserve:D
 *     set descriptor reusable (state), then push descriptor tail (id)
 */

#define DATA_SIZE(data_ring)		_DATA_SIZE((data_ring)->size_bits)
#define DATA_SIZE_MASK(data_ring)	(DATA_SIZE(data_ring) - 1)

#define DESCS_COUNT(desc_ring)		_DESCS_COUNT((desc_ring)->count_bits)
#define DESCS_COUNT_MASK(desc_ring)	(DESCS_COUNT(desc_ring) - 1)

/* Determine the data array index from a logical position. */
#define DATA_INDEX(data_ring, lpos)	((lpos) & DATA_SIZE_MASK(data_ring))

/* Determine the desc array index from an ID or sequence number. */
#define DESC_INDEX(desc_ring, n)	((n) & DESCS_COUNT_MASK(desc_ring))

/* Determine how many times the data array has wrapped. */
#define DATA_WRAPS(data_ring, lpos)	((lpos) >> (data_ring)->size_bits)

/* Determine if a logical position refers to a data-less block. */
#define LPOS_DATALESS(lpos)		((lpos) & 1UL)
#define BLK_DATALESS(blk)		(LPOS_DATALESS((blk)->begin) && \
					 LPOS_DATALESS((blk)->next))
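
/*
 * Real data blocks are aligned to the ID size (see to_blk_size()), so their
 * logical positions are always even. The ringbuffer flags data-less blocks
 * with the otherwise impossible odd lpos values (NO_LPOS and FAILED_LPOS in
 * printk_ringbuffer.h).
 */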

/* Get the logical position at index 0 of the current wrap. */
#define DATA_THIS_WRAP_START_LPOS(data_ring, lpos) \
((lpos) & ~DATA_SIZE_MASK(data_ring))

/* Get the ID for the same index of the previous wrap as the given ID. */
#define DESC_ID_PREV_WRAP(desc_ring, id) \
DESC_ID((id) - DESCS_COUNT(desc_ring))

/*
 * A data block: mapped directly to the beginning of the data block area
 * specified as a logical position within the data ring.
 *
 * @id:   the ID of the associated descriptor
 * @data: the writer data
 *
 * Note that the size of a data block is only known by its associated
 * descriptor.
 */
struct prb_data_block {
	unsigned long	id;
	char		data[];
};

/*
 * Return the descriptor associated with @n. @n can be either a
 * descriptor ID or a sequence number.
 */
static struct prb_desc *to_desc(struct prb_desc_ring *desc_ring, u64 n)
{
	return &desc_ring->descs[DESC_INDEX(desc_ring, n)];
}

/*
 * Return the printk_info associated with @n. @n can be either a
 * descriptor ID or a sequence number.
 */
static struct printk_info *to_info(struct prb_desc_ring *desc_ring, u64 n)
{
	return &desc_ring->infos[DESC_INDEX(desc_ring, n)];
}

static struct prb_data_block *to_block(struct prb_data_ring *data_ring,
				       unsigned long begin_lpos)
{
	return (void *)&data_ring->data[DATA_INDEX(data_ring, begin_lpos)];
}

/*
 * Increase the data size to account for data block meta data plus any
 * padding so that the adjacent data block is aligned on the ID size.
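 *
 * For example, on a 64-bit system a 13-byte message needs 13 + 8 = 21
 * bytes together with its block ID, which is then padded to 24 bytes.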
 */
static unsigned int to_blk_size(unsigned int size)
{
	struct prb_data_block *db = NULL;

	size += sizeof(*db);
	size = ALIGN(size, sizeof(db->id));
	return size;
}

/*
 * Sanity checker for reserve size. The ringbuffer code assumes that a data
 * block does not exceed the maximum possible size that could fit within the
 * ringbuffer. This function provides that basic size check so that the
 * assumption is safe.
 */
static bool data_check_size(struct prb_data_ring *data_ring, unsigned int size)
{
	struct prb_data_block *db = NULL;

	if (size == 0)
		return true;

	/*
	 * Ensure the alignment padded size could possibly fit in the data
	 * array. The largest possible data block must still leave room for
	 * at least the ID of the next block.
	 */
	size = to_blk_size(size);
	if (size > DATA_SIZE(data_ring) - sizeof(db->id))
		return false;

	return true;
}

/* Query the state of a descriptor. */
static enum desc_state get_desc_state(unsigned long id,
				      unsigned long state_val)
{
	if (id != DESC_ID(state_val))
		return desc_miss;

	return DESC_STATE(state_val);
}

/*
 * Get a copy of a specified descriptor and return its queried state. If the
 * descriptor is in an inconsistent state (miss or reserved), the caller can
 * only expect the descriptor's @state_var field to be valid.
 *
 * The sequence number and caller_id can be optionally retrieved. Like all
 * non-state_var data, they are only valid if the descriptor is in a
 * consistent state.
 */
static enum desc_state desc_read(struct prb_desc_ring *desc_ring,
				 unsigned long id, struct prb_desc *desc_out,
				 u64 *seq_out, u32 *caller_id_out)
{
	struct printk_info *info = to_info(desc_ring, id);
	struct prb_desc *desc = to_desc(desc_ring, id);
	atomic_long_t *state_var = &desc->state_var;
	enum desc_state d_state;
	unsigned long state_val;

	/* Check the descriptor state. */
	state_val = atomic_long_read(state_var); /* LMM(desc_read:A) */
	d_state = get_desc_state(id, state_val);
	if (d_state == desc_miss || d_state == desc_reserved) {
		/*
		 * The descriptor is in an inconsistent state. Set at least
		 * @state_var so that the caller can see the details of
		 * the inconsistent state.
		 */
		goto out;
	}

	/*
	 * Guarantee the state is loaded before copying the descriptor
	 * content. This avoids copying obsolete descriptor content that might
	 * not apply to the descriptor state. This pairs with _prb_commit:B.
	 *
	 * Memory barrier involvement:
	 *
	 * If desc_read:A reads from _prb_commit:B, then desc_read:C reads
	 * from _prb_commit:A.
	 *
	 * Relies on:
	 *
	 * WMB from _prb_commit:A to _prb_commit:B
	 *    matching
	 * RMB from desc_read:A to desc_read:C
	 */
	smp_rmb(); /* LMM(desc_read:B) */

	/*
	 * Copy the descriptor data. The data is not valid until the
	 * state has been re-checked. A memcpy() for all of @desc
	 * cannot be used because of the atomic_t @state_var field.
	 */
	if (desc_out) {
		memcpy(&desc_out->text_blk_lpos, &desc->text_blk_lpos,
		       sizeof(desc_out->text_blk_lpos)); /* LMM(desc_read:C) */
	}
	if (seq_out)
		*seq_out = info->seq; /* also part of desc_read:C */
	if (caller_id_out)
		*caller_id_out = info->caller_id; /* also part of desc_read:C */

	/*
	 * 1. Guarantee the descriptor content is loaded before re-checking
	 *    the state. This avoids reading an obsolete descriptor state
	 *    that may not apply to the copied content. This pairs with
	 *    desc_reserve:F.
	 *
	 *    Memory barrier involvement:
	 *
	 *    If desc_read:C reads from desc_reserve:G, then desc_read:E
	 *    reads from desc_reserve:F.
	 *
	 *    Relies on:
	 *
	 *    WMB from desc_reserve:F to desc_reserve:G
	 *       matching
	 *    RMB from desc_read:C to desc_read:E
	 *
	 * 2. Guarantee the record data is loaded before re-checking the
	 *    state. This avoids reading an obsolete descriptor state that may
	 *    not apply to the copied data. This pairs with data_alloc:A and
	 *    data_realloc:A.
	 *
	 *    Memory barrier involvement:
	 *
	 *    If copy_data:A reads from data_alloc:B, then desc_read:E
	 *    reads from desc_make_reusable:A.
	 *
	 *    Relies on:
	 *
	 *    MB from desc_make_reusable:A to data_alloc:B
	 *       matching
	 *    RMB from desc_read:C to desc_read:E
	 *
	 *    Note: desc_make_reusable:A and data_alloc:B can be different
	 *          CPUs. However, the data_alloc:B CPU (which performs the
	 *          full memory barrier) must have previously seen
	 *          desc_make_reusable:A.
	 */
	smp_rmb(); /* LMM(desc_read:D) */

	/*
	 * The data has been copied. Return the current descriptor state,
	 * which may have changed since the load above.
	 */
	state_val = atomic_long_read(state_var); /* LMM(desc_read:E) */
	d_state = get_desc_state(id, state_val);
out:
	if (desc_out)
		atomic_long_set(&desc_out->state_var, state_val);
	return d_state;
}

/*
 * Take a specified descriptor out of the finalized state by attempting
 * the transition from finalized to reusable. Either this context or some
 * other context will have been successful.
 */
static void desc_make_reusable(struct prb_desc_ring *desc_ring,
			       unsigned long id)
{
	unsigned long val_finalized = DESC_SV(id, desc_finalized);
	unsigned long val_reusable = DESC_SV(id, desc_reusable);
	struct prb_desc *desc = to_desc(desc_ring, id);
	atomic_long_t *state_var = &desc->state_var;

	atomic_long_cmpxchg_relaxed(state_var, val_finalized,
				    val_reusable); /* LMM(desc_make_reusable:A) */
}

/*
 * Given the text data ring, put the associated descriptor of each
 * data block from @lpos_begin until @lpos_end into the reusable state.
 *
 * If there is any problem making the associated descriptor reusable, either
 * the descriptor has not yet been finalized or another writer context has
 * already pushed the tail lpos past the problematic data block. Regardless,
 * on error the caller can re-load the tail lpos to determine the situation.
 */
static bool data_make_reusable(struct printk_ringbuffer *rb,
			       struct prb_data_ring *data_ring,
			       unsigned long lpos_begin,
			       unsigned long lpos_end,
			       unsigned long *lpos_out)
{
	struct prb_desc_ring *desc_ring = &rb->desc_ring;
	struct prb_data_block *blk;
	enum desc_state d_state;
	struct prb_desc desc;
	struct prb_data_blk_lpos *blk_lpos = &desc.text_blk_lpos;
	unsigned long id;

	/* Loop until @lpos_begin has advanced to or beyond @lpos_end. */
	while ((lpos_end - lpos_begin) - 1 < DATA_SIZE(data_ring)) {
		blk = to_block(data_ring, lpos_begin);

		/*
		 * Load the block ID from the data block. This is a data race
		 * against a writer that may have newly reserved this data
		 * area. If the loaded value matches a valid descriptor ID,
		 * the blk_lpos of that descriptor will be checked to make
		 * sure it points back to this data block. If the check fails,
		 * the data area has been recycled by another writer.
		 */
		id = blk->id; /* LMM(data_make_reusable:A) */

		d_state = desc_read(desc_ring, id, &desc,
				    NULL, NULL); /* LMM(data_make_reusable:B) */

		switch (d_state) {
		case desc_miss:
		case desc_reserved:
		case desc_committed:
			return false;
		case desc_finalized:
			/*
			 * This data block is invalid if the descriptor
			 * does not point back to it.
			 */
			if (blk_lpos->begin != lpos_begin)
				return false;
			desc_make_reusable(desc_ring, id);
			break;
		case desc_reusable:
			/*
			 * This data block is invalid if the descriptor
			 * does not point back to it.
			 */
			if (blk_lpos->begin != lpos_begin)
				return false;
			break;
		}

		/* Advance @lpos_begin to the next data block. */
		lpos_begin = blk_lpos->next;
	}

	*lpos_out = lpos_begin;
	return true;
}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  625) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  626) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  627)  * Advance the data ring tail to at least @lpos. This function puts
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  628)  * descriptors into the reusable state if the tail is pushed beyond
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  629)  * their associated data block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  630)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  631) static bool data_push_tail(struct printk_ringbuffer *rb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  632) 			   struct prb_data_ring *data_ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  633) 			   unsigned long lpos)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  634) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  635) 	unsigned long tail_lpos_new;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  636) 	unsigned long tail_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  637) 	unsigned long next_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  638) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  639) 	/* If @lpos is from a data-less block, there is nothing to do. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  640) 	if (LPOS_DATALESS(lpos))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  641) 		return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  642) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  643) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  644) 	 * Any descriptor states that have transitioned to reusable due to the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  645) 	 * data tail being pushed to this loaded value will be visible to this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  646) 	 * CPU. This pairs with data_push_tail:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  647) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  648) 	 * Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  649) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  650) 	 * If data_push_tail:A reads from data_push_tail:D, then this CPU can
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  651) 	 * see desc_make_reusable:A.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  652) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  653) 	 * Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  654) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  655) 	 * MB from desc_make_reusable:A to data_push_tail:D
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  656) 	 *    matches
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  657) 	 * READFROM from data_push_tail:D to data_push_tail:A
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  658) 	 *    thus
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  659) 	 * READFROM from desc_make_reusable:A to this CPU
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  660) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  661) 	tail_lpos = atomic_long_read(&data_ring->tail_lpos); /* LMM(data_push_tail:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  662) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  663) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  664) 	 * Loop until the tail lpos is at or beyond @lpos. This condition
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  665) 	 * may already be satisfied, resulting in no full memory barrier
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  666) 	 * from data_push_tail:D being performed. However, since this CPU
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  667) 	 * sees the new tail lpos, any descriptor states that transitioned to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  668) 	 * the reusable state must already be visible.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  669) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  670) 	while ((lpos - tail_lpos) - 1 < DATA_SIZE(data_ring)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  671) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  672) 		 * Make all descriptors reusable that are associated with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  673) 		 * data blocks before @lpos.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  674) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  675) 		if (!data_make_reusable(rb, data_ring, tail_lpos, lpos,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  676) 					&next_lpos)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  677) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  678) 			 * 1. Guarantee the block ID loaded in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  679) 			 *    data_make_reusable() is performed before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  680) 			 *    reloading the tail lpos. The failed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  681) 			 *    data_make_reusable() may be due to a newly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  682) 			 *    recycled data area causing the tail lpos to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  683) 			 *    have been previously pushed. This pairs with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  684) 			 *    data_alloc:A and data_realloc:A.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  685) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  686) 			 *    Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  687) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  688) 			 *    If data_make_reusable:A reads from data_alloc:B,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  689) 			 *    then data_push_tail:C reads from
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  690) 			 *    data_push_tail:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  691) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  692) 			 *    Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  693) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  694) 			 *    MB from data_push_tail:D to data_alloc:B
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  695) 			 *       matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  696) 			 *    RMB from data_make_reusable:A to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  697) 			 *    data_push_tail:C
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  698) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  699) 			 *    Note: data_push_tail:D and data_alloc:B can be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  700) 			 *          different CPUs. However, the data_alloc:B
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  701) 			 *          CPU (which performs the full memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  702) 			 *          barrier) must have previously seen
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  703) 			 *          data_push_tail:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  704) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  705) 			 * 2. Guarantee the descriptor state load in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  706) 			 *    data_make_reusable() is performed before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  707) 			 *    reloading the tail lpos. The failed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  708) 			 *    data_make_reusable() may be due to a newly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  709) 			 *    recycled descriptor causing the tail lpos to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  710) 			 *    have been previously pushed. This pairs with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  711) 			 *    desc_reserve:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  712) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  713) 			 *    Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  714) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  715) 			 *    If data_make_reusable:B reads from
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  716) 			 *    desc_reserve:F, then data_push_tail:C reads
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  717) 			 *    from data_push_tail:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  718) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  719) 			 *    Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  720) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  721) 			 *    MB from data_push_tail:D to desc_reserve:F
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  722) 			 *       matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  723) 			 *    RMB from data_make_reusable:B to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  724) 			 *    data_push_tail:C
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  725) 			 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  726) 			 *    Note: data_push_tail:D and desc_reserve:F can
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  727) 			 *          be different CPUs. However, the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  728) 			 *          desc_reserve:F CPU (which performs the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  729) 			 *          full memory barrier) must have previously
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  730) 			 *          seen data_push_tail:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  731) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  732) 			smp_rmb(); /* LMM(data_push_tail:B) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  733) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  734) 			tail_lpos_new = atomic_long_read(&data_ring->tail_lpos
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  735) 							); /* LMM(data_push_tail:C) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  736) 			if (tail_lpos_new == tail_lpos)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  737) 				return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  738) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  739) 			/* Another CPU pushed the tail. Try again. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  740) 			tail_lpos = tail_lpos_new;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  741) 			continue;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  742) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  743) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  744) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  745) 		 * Guarantee any descriptor states that have transitioned to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  746) 		 * reusable are stored before pushing the tail lpos. A full
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  747) 		 * memory barrier is needed since other CPUs may have made
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  748) 		 * the descriptor states reusable. This pairs with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  749) 		 * data_push_tail:A.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  750) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  751) 		if (atomic_long_try_cmpxchg(&data_ring->tail_lpos, &tail_lpos,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  752) 					    next_lpos)) { /* LMM(data_push_tail:D) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  753) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  754) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  755) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  756) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  757) 	return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  758) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  759) 
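/*
 * Illustration only (not part of the ringbuffer implementation): the
 * loop condition in data_push_tail() relies on unsigned wrap-around
 * arithmetic to test "is @lpos still ahead of the tail?" with a single
 * comparison. A hypothetical helper spelling out the cases:
 */
static inline bool __maybe_unused lpos_requires_push(struct prb_data_ring *data_ring,
						     unsigned long lpos,
						     unsigned long tail_lpos)
{
	/*
	 * If @lpos == @tail_lpos, the subtraction underflows to
	 * ULONG_MAX, which is never below DATA_SIZE(): no push needed.
	 * The same holds if the tail has already moved past @lpos.
	 * Only when @lpos is 1..DATA_SIZE() bytes ahead of the tail
	 * does the result drop below DATA_SIZE().
	 */
	return (lpos - tail_lpos) - 1 < DATA_SIZE(data_ring);
}
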
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  760) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  761)  * Advance the desc ring tail. This function advances the tail by one
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  762)  * descriptor, thus invalidating the oldest descriptor. Before advancing
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  763)  * the tail, the tail descriptor is made reusable and all data blocks up to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  764)  * and including the descriptor's data block are invalidated (i.e. the data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  765)  * ring tail is pushed past the data block of the descriptor being made
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  766)  * reusable).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  767)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  768) static bool desc_push_tail(struct printk_ringbuffer *rb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  769) 			   unsigned long tail_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  770) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  771) 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  772) 	enum desc_state d_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  773) 	struct prb_desc desc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  774) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  775) 	d_state = desc_read(desc_ring, tail_id, &desc, NULL, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  776) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  777) 	switch (d_state) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  778) 	case desc_miss:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  779) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  780) 		 * If the ID is exactly 1 wrap behind the expected, it is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  781) 		 * in the process of being reserved by another writer and
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  782) 		 * must be considered reserved.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  783) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  784) 		if (DESC_ID(atomic_long_read(&desc.state_var)) ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  785) 		    DESC_ID_PREV_WRAP(desc_ring, tail_id)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  786) 			return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  787) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  788) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  789) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  790) 		 * The ID has changed. Another writer must have pushed the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  791) 		 * tail and recycled the descriptor already. Success is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  792) 		 * returned because the caller is only interested in the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  793) 		 * specified tail being pushed, which it was.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  794) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  795) 		return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  796) 	case desc_reserved:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  797) 	case desc_committed:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  798) 		return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  799) 	case desc_finalized:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  800) 		desc_make_reusable(desc_ring, tail_id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  801) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  802) 	case desc_reusable:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  803) 		break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  804) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  805) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  806) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  807) 	 * Data blocks must be invalidated before their associated
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  808) 	 * descriptor can be made available for recycling. Invalidating
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  809) 	 * them later is not possible because there is no way to trust
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  810) 	 * data blocks once their associated descriptor is gone.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  811) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  812) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  813) 	if (!data_push_tail(rb, &rb->text_data_ring, desc.text_blk_lpos.next))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  814) 		return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  815) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  816) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  817) 	 * Check the next descriptor after @tail_id before pushing the tail
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  818) 	 * to it because the tail must always be in a finalized or reusable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  819) 	 * state. The implementation of prb_first_seq() relies on this.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  820) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  821) 	 * A successful read implies that the next descriptor is less than or
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  822) 	 * equal to @head_id so there is no risk of pushing the tail past the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  823) 	 * head.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  824) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  825) 	d_state = desc_read(desc_ring, DESC_ID(tail_id + 1), &desc,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  826) 			    NULL, NULL); /* LMM(desc_push_tail:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  827) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  828) 	if (d_state == desc_finalized || d_state == desc_reusable) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  829) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  830) 		 * Guarantee any descriptor states that have transitioned to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  831) 		 * reusable are stored before pushing the tail ID. This allows
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  832) 		 * verifying the recycled descriptor state. A full memory
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  833) 		 * barrier is needed since other CPUs may have made the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  834) 		 * descriptor states reusable. This pairs with desc_reserve:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  835) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  836) 		atomic_long_cmpxchg(&desc_ring->tail_id, tail_id,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  837) 				    DESC_ID(tail_id + 1)); /* LMM(desc_push_tail:B) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  838) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  839) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  840) 		 * Guarantee the last state load from desc_read() is before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  841) 		 * reloading @tail_id in order to see a new tail ID in the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  842) 		 * case that the descriptor has been recycled. This pairs
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  843) 		 * with desc_reserve:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  844) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  845) 		 * Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  846) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  847) 		 * If desc_push_tail:A reads from desc_reserve:F, then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  848) 		 * desc_push_tail:D reads from desc_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  849) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  850) 		 * Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  851) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  852) 		 * MB from desc_push_tail:B to desc_reserve:F
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  853) 		 *    matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  854) 		 * RMB from desc_push_tail:A to desc_push_tail:D
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  855) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  856) 		 * Note: desc_push_tail:B and desc_reserve:F can be different
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  857) 		 *       CPUs. However, the desc_reserve:F CPU (which performs
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  858) 		 *       the full memory barrier) must have previously seen
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  859) 		 *       desc_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  860) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  861) 		smp_rmb(); /* LMM(desc_push_tail:C) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  862) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  863) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  864) 		 * Re-check the tail ID. The descriptor following @tail_id is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  865) 		 * not in an allowed tail state. But if the tail has since
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  866) 		 * been moved by another CPU, then it does not matter.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  867) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  868) 		if (atomic_long_read(&desc_ring->tail_id) == tail_id) /* LMM(desc_push_tail:D) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  869) 			return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  870) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  871) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  872) 	return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  873) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  874) 
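/*
 * Illustration only, with hypothetical values: descriptor IDs grow
 * monotonically and map onto array slots via DESC_INDEX(). In a
 * 4-descriptor ring, ID 5 occupies slot DESC_INDEX() == 1 and
 * DESC_ID_PREV_WRAP() == 1 names the ID that occupied that same slot
 * one wrap earlier. Reading ID 1 from the slot's @state_var while
 * expecting ID 5 is the "exactly 1 wrap behind" case that
 * desc_push_tail() treats as reserved.
 */
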
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  875) /* Reserve a new descriptor, invalidating the oldest if necessary. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  876) static bool desc_reserve(struct printk_ringbuffer *rb, unsigned long *id_out)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  877) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  878) 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  879) 	unsigned long prev_state_val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  880) 	unsigned long id_prev_wrap;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  881) 	struct prb_desc *desc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  882) 	unsigned long head_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  883) 	unsigned long id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  884) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  885) 	head_id = atomic_long_read(&desc_ring->head_id); /* LMM(desc_reserve:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  886) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  887) 	do {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  888) 		id = DESC_ID(head_id + 1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  889) 		id_prev_wrap = DESC_ID_PREV_WRAP(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  890) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  891) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  892) 		 * Guarantee the head ID is read before reading the tail ID.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  893) 		 * Since the tail ID is updated before the head ID, this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  894) 		 * guarantees that @id_prev_wrap is never ahead of the tail
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  895) 		 * ID. This pairs with desc_reserve:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  896) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  897) 		 * Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  898) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  899) 		 * If desc_reserve:A reads from desc_reserve:D, then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  900) 		 * desc_reserve:C reads from desc_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  901) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  902) 		 * Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  903) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  904) 		 * MB from desc_push_tail:B to desc_reserve:D
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  905) 		 *    matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  906) 		 * RMB from desc_reserve:A to desc_reserve:C
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  907) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  908) 		 * Note: desc_push_tail:B and desc_reserve:D can be different
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  909) 		 *       CPUs. However, the desc_reserve:D CPU (which performs
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  910) 		 *       the full memory barrier) must have previously seen
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  911) 		 *       desc_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  912) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  913) 		smp_rmb(); /* LMM(desc_reserve:B) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  914) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  915) 		if (id_prev_wrap == atomic_long_read(&desc_ring->tail_id
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  916) 						    )) { /* LMM(desc_reserve:C) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  917) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  918) 			 * Make space for the new descriptor by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  919) 			 * advancing the tail.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  920) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  921) 			if (!desc_push_tail(rb, id_prev_wrap))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  922) 				return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  923) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  924) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  925) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  926) 		 * 1. Guarantee the tail ID is read before validating the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  927) 		 *    recycled descriptor state. A read memory barrier is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  928) 		 *    sufficient for this. This pairs with desc_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  929) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  930) 		 *    Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  931) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  932) 		 *    If desc_reserve:C reads from desc_push_tail:B, then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  933) 		 *    desc_reserve:E reads from desc_make_reusable:A.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  934) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  935) 		 *    Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  936) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  937) 		 *    MB from desc_make_reusable:A to desc_push_tail:B
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  938) 		 *       matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  939) 		 *    RMB from desc_reserve:C to desc_reserve:E
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  940) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  941) 		 *    Note: desc_make_reusable:A and desc_push_tail:B can be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  942) 		 *          different CPUs. However, the desc_push_tail:B CPU
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  943) 		 *          (which performs the full memory barrier) must have
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  944) 		 *          previously seen desc_make_reusable:A.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  945) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  946) 		 * 2. Guarantee the tail ID is stored before storing the head
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  947) 		 *    ID. This pairs with desc_reserve:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  948) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  949) 		 * 3. Guarantee any data ring tail changes are stored before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  950) 		 *    recycling the descriptor. Data ring tail changes can
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  951) 		 *    happen via desc_push_tail()->data_push_tail(). A full
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  952) 		 *    memory barrier is needed since another CPU may have
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  953) 		 *    pushed the data ring tails. This pairs with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  954) 		 *    data_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  955) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  956) 		 * 4. Guarantee a new tail ID is stored before recycling the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  957) 		 *    descriptor. A full memory barrier is needed since
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  958) 		 *    another CPU may have pushed the tail ID. This pairs
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  959) 		 *    with desc_push_tail:C and this also pairs with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  960) 		 *    with desc_push_tail:C and also with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  961) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  962) 		 * 5. Guarantee the head ID is stored before trying to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  963) 		 *    finalize the previous descriptor. This pairs with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  964) 		 *    _prb_commit:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  965) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  966) 	} while (!atomic_long_try_cmpxchg(&desc_ring->head_id, &head_id,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  967) 					  id)); /* LMM(desc_reserve:D) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  968) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  969) 	desc = to_desc(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  970) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  971) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  972) 	 * If the descriptor has been recycled, verify the old state val.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  973) 	 * See "ABA Issues" about why this verification is performed.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  974) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  975) 	prev_state_val = atomic_long_read(&desc->state_var); /* LMM(desc_reserve:E) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  976) 	if (prev_state_val &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  977) 	    get_desc_state(id_prev_wrap, prev_state_val) != desc_reusable) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  978) 		WARN_ON_ONCE(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  979) 		return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  980) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  981) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  982) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  983) 	 * Assign the descriptor a new ID and set its state to reserved.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  984) 	 * See "ABA Issues" about why cmpxchg() instead of set() is used.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  985) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  986) 	 * Guarantee the new descriptor ID and state are stored before making
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  987) 	 * any other changes. A write memory barrier is sufficient for this.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  988) 	 * This pairs with desc_read:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  989) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  990) 	if (!atomic_long_try_cmpxchg(&desc->state_var, &prev_state_val,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  991) 			DESC_SV(id, desc_reserved))) { /* LMM(desc_reserve:F) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  992) 		WARN_ON_ONCE(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  993) 		return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  994) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  995) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  996) 	/* Now data in @desc can be modified: LMM(desc_reserve:G) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  997) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  998) 	*id_out = id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300  999) 	return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1000) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1001) 
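/*
 * Illustration only (not part of the implementation): @state_var packs
 * the 2-bit state into the top bits of the ID so that both are read
 * and written by a single atomic access. A hypothetical round-trip for
 * a plain @id (no state bits set), using helpers defined earlier in
 * this file:
 */
static inline void __maybe_unused state_var_roundtrip_example(unsigned long id)
{
	unsigned long sv = DESC_SV(id, desc_reserved);

	WARN_ON_ONCE(DESC_ID(sv) != id);
	WARN_ON_ONCE(get_desc_state(id, sv) != desc_reserved);

	/* A mismatched ID reports desc_miss: the descriptor was recycled. */
	WARN_ON_ONCE(get_desc_state(DESC_ID(id + 1), sv) != desc_miss);
}
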
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1002) /* Determine the end of a data block. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1003) static unsigned long get_next_lpos(struct prb_data_ring *data_ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1004) 				   unsigned long lpos, unsigned int size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1005) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1006) 	unsigned long begin_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1007) 	unsigned long next_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1008) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1009) 	begin_lpos = lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1010) 	next_lpos = lpos + size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1011) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1012) 	/* First check if the data block does not wrap. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1013) 	if (DATA_WRAPS(data_ring, begin_lpos) == DATA_WRAPS(data_ring, next_lpos))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1014) 		return next_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1015) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1016) 	/* Wrapping data blocks store their data at the beginning. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1017) 	return (DATA_THIS_WRAP_START_LPOS(data_ring, next_lpos) + size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1018) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1019) 
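/*
 * Illustration only, with hypothetical sizes: in a 64-byte data ring
 * (size_bits == 6), a 16-byte block starting at lpos 56 would cross
 * the wrap boundary at 64, so it is placed at the beginning of the
 * next wrap and bytes 56..63 become wasted padding.
 */
static inline void __maybe_unused get_next_lpos_example(void)
{
	struct prb_data_ring ring = { .size_bits = 6 };

	/* Wrapping: DATA_THIS_WRAP_START_LPOS() yields 64, plus 16. */
	WARN_ON_ONCE(get_next_lpos(&ring, 56, 16) != 80);

	/* Non-wrapping blocks simply end at lpos + size. */
	WARN_ON_ONCE(get_next_lpos(&ring, 8, 16) != 24);
}
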
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1020) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1021)  * Allocate a new data block, invalidating the oldest data block(s)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1022)  * if necessary. This function also associates the data block with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1023)  * a specified descriptor.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1024)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1025) static char *data_alloc(struct printk_ringbuffer *rb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1026) 			struct prb_data_ring *data_ring, unsigned int size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1027) 			struct prb_data_blk_lpos *blk_lpos, unsigned long id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1028) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1029) 	struct prb_data_block *blk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1030) 	unsigned long begin_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1031) 	unsigned long next_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1032) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1033) 	if (size == 0) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1034) 		/* Specify a data-less block. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1035) 		blk_lpos->begin = NO_LPOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1036) 		blk_lpos->next = NO_LPOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1037) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1038) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1039) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1040) 	size = to_blk_size(size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1041) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1042) 	begin_lpos = atomic_long_read(&data_ring->head_lpos);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1043) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1044) 	do {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1045) 		next_lpos = get_next_lpos(data_ring, begin_lpos, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1046) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1047) 		if (!data_push_tail(rb, data_ring, next_lpos - DATA_SIZE(data_ring))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1048) 			/* Failed to allocate, specify a data-less block. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1049) 			blk_lpos->begin = FAILED_LPOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1050) 			blk_lpos->next = FAILED_LPOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1051) 			return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1052) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1053) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1054) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1055) 		 * 1. Guarantee any descriptor states that have transitioned
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1056) 		 *    to reusable are stored before modifying the newly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1057) 		 *    allocated data area. A full memory barrier is needed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1058) 		 *    since other CPUs may have made the descriptor states
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1059) 		 *    reusable. See data_push_tail:A about why the reusable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1060) 		 *    states are visible. This pairs with desc_read:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1061) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1062) 		 * 2. Guarantee any updated tail lpos is stored before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1063) 		 *    modifying the newly allocated data area. Another CPU may
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1064) 		 *    be in data_make_reusable() and is reading a block ID
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1065) 		 *    from this area. data_make_reusable() can handle reading
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1066) 		 *    a garbage block ID value, but then it must be able to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1067) 		 *    load a new tail lpos. A full memory barrier is needed
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1068) 		 *    since other CPUs may have updated the tail lpos. This
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1069) 		 *    pairs with data_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1070) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1071) 	} while (!atomic_long_try_cmpxchg(&data_ring->head_lpos, &begin_lpos,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1072) 					  next_lpos)); /* LMM(data_alloc:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1073) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1074) 	blk = to_block(data_ring, begin_lpos);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1075) 	blk->id = id; /* LMM(data_alloc:B) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1076) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1077) 	if (DATA_WRAPS(data_ring, begin_lpos) != DATA_WRAPS(data_ring, next_lpos)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1078) 		/* Wrapping data blocks store their data at the beginning. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1079) 		blk = to_block(data_ring, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1080) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1081) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1082) 		 * Store the ID on the wrapped block for consistency.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1083) 		 * The printk_ringbuffer does not actually use it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1084) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1085) 		blk->id = id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1086) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1087) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1088) 	blk_lpos->begin = begin_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1089) 	blk_lpos->next = next_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1090) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1091) 	return &blk->data[0];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1092) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1093) 
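/*
 * Illustration only: the pointer returned by data_alloc() addresses
 * the bytes directly after the block ID, i.e. the writer-visible part
 * of struct prb_data_block (an unsigned long @id followed by the
 * flexible @data array). Since to_blk_size() added sizeof(id) and
 * aligned the total up to sizeof(id), a hypothetical 5-byte request on
 * a 64-bit system consumes 16 bytes of ring space: 8 for @id plus 5
 * rounded up to 8.
 */
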
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1094) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1095)  * Try to resize an existing data block associated with the descriptor
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1096)  * specified by @id. If the resized data block should become wrapped, it
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1097)  * copies the old data to the new data block. If @size yields a data block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1098)  * with the same or less size, the data block is left as is.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1099)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1100)  * Fail if this is not the last allocated data block or if there is not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1101)  * enough space or it is not possible to make enough space.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1102)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1103)  * Return a pointer to the beginning of the entire data buffer or NULL on
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1104)  * failure.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1105)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1106) static char *data_realloc(struct printk_ringbuffer *rb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1107) 			  struct prb_data_ring *data_ring, unsigned int size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1108) 			  struct prb_data_blk_lpos *blk_lpos, unsigned long id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1109) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1110) 	struct prb_data_block *blk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1111) 	unsigned long head_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1112) 	unsigned long next_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1113) 	bool wrapped;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1114) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1115) 	/* Reallocation only works if @blk_lpos is the newest data block. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1116) 	head_lpos = atomic_long_read(&data_ring->head_lpos);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1117) 	if (head_lpos != blk_lpos->next)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1118) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1119) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1120) 	/* Keep track if @blk_lpos was a wrapping data block. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1121) 	wrapped = (DATA_WRAPS(data_ring, blk_lpos->begin) != DATA_WRAPS(data_ring, blk_lpos->next));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1122) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1123) 	size = to_blk_size(size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1124) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1125) 	next_lpos = get_next_lpos(data_ring, blk_lpos->begin, size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1126) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1127) 	/* If the data block does not increase, there is nothing to do. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1128) 	if (head_lpos - next_lpos < DATA_SIZE(data_ring)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1129) 		if (wrapped)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1130) 			blk = to_block(data_ring, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1131) 		else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1132) 			blk = to_block(data_ring, blk_lpos->begin);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1133) 		return &blk->data[0];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1134) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1135) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1136) 	if (!data_push_tail(rb, data_ring, next_lpos - DATA_SIZE(data_ring)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1137) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1138) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1139) 	/* The memory barrier involvement is the same as data_alloc:A. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1140) 	if (!atomic_long_try_cmpxchg(&data_ring->head_lpos, &head_lpos,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1141) 				     next_lpos)) { /* LMM(data_realloc:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1142) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1143) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1144) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1145) 	blk = to_block(data_ring, blk_lpos->begin);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1146) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1147) 	if (DATA_WRAPS(data_ring, blk_lpos->begin) != DATA_WRAPS(data_ring, next_lpos)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1148) 		struct prb_data_block *old_blk = blk;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1149) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1150) 		/* Wrapping data blocks store their data at the beginning. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1151) 		blk = to_block(data_ring, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1152) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1153) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1154) 		 * Store the ID on the wrapped block for consistency.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1155) 		 * The printk_ringbuffer does not actually use it.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1156) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1157) 		blk->id = id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1158) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1159) 		if (!wrapped) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1160) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1161) 			 * Since the allocated space is now in the newly
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1162) 			 * created wrapping data block, copy the content
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1163) 			 * from the old data block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1164) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1165) 			memcpy(&blk->data[0], &old_blk->data[0],
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1166) 			       (blk_lpos->next - blk_lpos->begin) - sizeof(blk->id));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1167) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1168) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1169) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1170) 	blk_lpos->next = next_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1171) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1172) 	return &blk->data[0];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1173) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1174) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1175) /* Return the number of bytes used by a data block. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1176) static unsigned int space_used(struct prb_data_ring *data_ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1177) 			       struct prb_data_blk_lpos *blk_lpos)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1178) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1179) 	/* Data-less blocks take no space. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1180) 	if (BLK_DATALESS(blk_lpos))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1181) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1182) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1183) 	if (DATA_WRAPS(data_ring, blk_lpos->begin) == DATA_WRAPS(data_ring, blk_lpos->next)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1184) 		/* Data block does not wrap. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1185) 		return (DATA_INDEX(data_ring, blk_lpos->next) -
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1186) 			DATA_INDEX(data_ring, blk_lpos->begin));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1187) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1188) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1189) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1190) 	 * For wrapping data blocks, the trailing (wasted) space is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1191) 	 * also counted.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1192) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1193) 	return (DATA_INDEX(data_ring, blk_lpos->next) +
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1194) 		DATA_SIZE(data_ring) - DATA_INDEX(data_ring, blk_lpos->begin));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1195) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1196) 
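/*
 * Illustration only, continuing the hypothetical 64-byte ring from the
 * get_next_lpos() example: a wrapped block with @begin == 56 and
 * @next == 80 uses DATA_INDEX(next) + DATA_SIZE() - DATA_INDEX(begin)
 * == 16 + 64 - 56 == 24 bytes, i.e. the 16-byte block at the start of
 * the new wrap plus the 8 padding bytes wasted at the end of the ring.
 */
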
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1197) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1198)  * Given @blk_lpos, return a pointer to the writer data from the data block
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1199)  * and calculate the size of the data part. A NULL pointer is returned if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1200)  * @blk_lpos specifies values that could never be legal.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1201)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1202)  * This function (used by readers) performs strict validation on the lpos
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1203)  * values to possibly detect bugs in the writer code. A WARN_ON_ONCE() is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1204)  * triggered if an internal error is detected.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1205)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1206) static const char *get_data(struct prb_data_ring *data_ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1207) 			    struct prb_data_blk_lpos *blk_lpos,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1208) 			    unsigned int *data_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1209) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1210) 	struct prb_data_block *db;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1211) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1212) 	/* Data-less data block description. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1213) 	if (BLK_DATALESS(blk_lpos)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1214) 		if (blk_lpos->begin == NO_LPOS && blk_lpos->next == NO_LPOS) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1215) 			*data_size = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1216) 			return "";
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1217) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1218) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1219) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1220) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1221) 	/* Regular data block: @begin less than @next and in same wrap. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1222) 	if (DATA_WRAPS(data_ring, blk_lpos->begin) == DATA_WRAPS(data_ring, blk_lpos->next) &&
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1223) 	    blk_lpos->begin < blk_lpos->next) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1224) 		db = to_block(data_ring, blk_lpos->begin);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1225) 		*data_size = blk_lpos->next - blk_lpos->begin;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1226) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1227) 	/* Wrapping data block: @begin is one wrap behind @next. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1228) 	} else if (DATA_WRAPS(data_ring, blk_lpos->begin + DATA_SIZE(data_ring)) ==
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1229) 		   DATA_WRAPS(data_ring, blk_lpos->next)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1230) 		db = to_block(data_ring, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1231) 		*data_size = DATA_INDEX(data_ring, blk_lpos->next);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1232) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1233) 	/* Illegal block description. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1234) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1235) 		WARN_ON_ONCE(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1236) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1237) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1238) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1239) 	/* A valid data block will always be aligned to the ID size. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1240) 	if (WARN_ON_ONCE(blk_lpos->begin != ALIGN(blk_lpos->begin, sizeof(db->id))) ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1241) 	    WARN_ON_ONCE(blk_lpos->next != ALIGN(blk_lpos->next, sizeof(db->id)))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1242) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1243) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1244) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1245) 	/* A valid data block will always have at least an ID. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1246) 	if (WARN_ON_ONCE(*data_size < sizeof(db->id)))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1247) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1248) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1249) 	/* Subtract block ID space from size to reflect data size. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1250) 	*data_size -= sizeof(db->id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1251) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1252) 	return &db->data[0];
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1253) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1254) 
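/*
 * Illustration only, in the same hypothetical 64-byte ring: a
 * non-wrapping block with @begin == 8 and @next == 24 yields
 * 24 - 8 - sizeof(id) == 8 bytes of writer data located at @begin,
 * while a wrapping block with @begin == 56 and @next == 80 yields
 * DATA_INDEX(80) - sizeof(id) == 8 bytes located at the start of the
 * ring. Any other @begin/@next relationship is rejected as illegal.
 */
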
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1255) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1256)  * Attempt to transition the newest descriptor from committed back to reserved
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1257)  * so that the record can be modified by a writer again. This is only possible
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1258)  * if the descriptor is not yet finalized and the provided @caller_id matches.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1259)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1260) static struct prb_desc *desc_reopen_last(struct prb_desc_ring *desc_ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1261) 					 u32 caller_id, unsigned long *id_out)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1262) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1263) 	unsigned long prev_state_val;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1264) 	enum desc_state d_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1265) 	struct prb_desc desc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1266) 	struct prb_desc *d;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1267) 	unsigned long id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1268) 	u32 cid;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1269) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1270) 	id = atomic_long_read(&desc_ring->head_id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1271) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1272) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1273) 	 * To reduce unnecessary reopening, first check if the descriptor
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1274) 	 * state and caller ID are correct.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1275) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1276) 	d_state = desc_read(desc_ring, id, &desc, NULL, &cid);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1277) 	if (d_state != desc_committed || cid != caller_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1278) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1279) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1280) 	d = to_desc(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1281) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1282) 	prev_state_val = DESC_SV(id, desc_committed);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1283) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1284) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1285) 	 * Guarantee the reserved state is stored before reading any
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1286) 	 * record data. A full memory barrier is needed because @state_var
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1287) 	 * modification is followed by reading. This pairs with _prb_commit:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1288) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1289) 	 * Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1290) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1291) 	 * If desc_reopen_last:A reads from _prb_commit:B, then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1292) 	 * prb_reserve_in_last:A reads from _prb_commit:A.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1293) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1294) 	 * Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1295) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1296) 	 * WMB from _prb_commit:A to _prb_commit:B
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1297) 	 *    matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1298) 	 * MB from desc_reopen_last:A to prb_reserve_in_last:A
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1299) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1300) 	if (!atomic_long_try_cmpxchg(&d->state_var, &prev_state_val,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1301) 			DESC_SV(id, desc_reserved))) { /* LMM(desc_reopen_last:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1302) 		return NULL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1303) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1304) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1305) 	*id_out = id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1306) 	return d;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1307) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1308) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1309) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1310)  * prb_reserve_in_last() - Re-reserve and extend the space in the ringbuffer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1311)  *                         used by the newest record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1312)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1313)  * @e:         The entry structure to setup.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1314)  * @rb:        The ringbuffer to re-reserve and extend data in.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1315)  * @r:         The record structure to allocate buffers for.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1316)  * @caller_id: The caller ID of the caller (reserving writer).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1317)  * @max_size:  Fail if the extended size would be greater than this.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1318)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1319)  * This is the public function available to writers to re-reserve and extend
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1320)  * data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1321)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1322)  * The writer specifies the text size to extend (not the new total size) by
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1323)  * setting the @text_buf_size field of @r. To ensure proper initialization
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1324)  * of @r, prb_rec_init_wr() should be used.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1325)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1326)  * This function will fail if @caller_id does not match the caller ID of the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1327)  * newest record. In that case the caller must reserve new data using
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1328)  * prb_reserve().
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1329)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1330)  * Context: Any context. Disables local interrupts on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1331)  * Return: true if text data could be extended, otherwise false.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1332)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1333)  * On success:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1334)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1335)  *   - @r->text_buf points to the beginning of the entire text buffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1336)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1337)  *   - @r->text_buf_size is set to the new total size of the buffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1338)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1339)  *   - @r->info is not touched so that @r->info->text_len can be used
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1340)  *     to append the text.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1341)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1342)  *   - prb_record_text_space() can be used on @e to query the actual
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1343)  *     space used by the extended record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1344)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1345)  * Important: All @r->info fields will already be set with the current values
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1346)  *            for the record. I.e. @r->info->text_len will be less than
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1347)  *            @text_buf_size. Writers can use @r->info->text_len to know
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1348)  *            where concatenation begins and writers should update
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1349)  *            @r->info->text_len after concatenating.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1350)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1351) bool prb_reserve_in_last(struct prb_reserved_entry *e, struct printk_ringbuffer *rb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1352) 			 struct printk_record *r, u32 caller_id, unsigned int max_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1353) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1354) 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1355) 	struct printk_info *info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1356) 	unsigned int data_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1357) 	struct prb_desc *d;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1358) 	unsigned long id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1359) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1360) 	local_irq_save(e->irqflags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1361) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1362) 	/* Transition the newest descriptor back to the reserved state. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1363) 	d = desc_reopen_last(desc_ring, caller_id, &id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1364) 	if (!d) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1365) 		local_irq_restore(e->irqflags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1366) 		goto fail_reopen;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1367) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1368) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1369) 	/* Now the writer has exclusive access: LMM(prb_reserve_in_last:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1370) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1371) 	info = to_info(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1372) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1373) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1374) 	 * Set the @e fields here so that prb_commit() can be used if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1375) 	 * anything fails from now on.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1376) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1377) 	e->rb = rb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1378) 	e->id = id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1379) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1380) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1381) 	 * desc_reopen_last() checked the caller_id, but there was no
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1382) 	 * exclusive access at that point. The descriptor may have
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1383) 	 * changed since then.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1384) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1385) 	if (caller_id != info->caller_id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1386) 		goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1387) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1388) 	if (BLK_DATALESS(&d->text_blk_lpos)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1389) 		if (WARN_ON_ONCE(info->text_len != 0)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1390) 			pr_warn_once("wrong text_len value (%hu, expecting 0)\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1391) 				     info->text_len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1392) 			info->text_len = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1393) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1394) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1395) 		if (!data_check_size(&rb->text_data_ring, r->text_buf_size))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1396) 			goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1397) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1398) 		if (r->text_buf_size > max_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1399) 			goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1400) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1401) 		r->text_buf = data_alloc(rb, &rb->text_data_ring, r->text_buf_size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1402) 					 &d->text_blk_lpos, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1403) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1404) 		if (!get_data(&rb->text_data_ring, &d->text_blk_lpos, &data_size))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1405) 			goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1406) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1407) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1408) 		 * Increase the buffer size to include the original size. If
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1409) 		 * the meta data (@text_len) is not sane, use the full data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1410) 		 * block size.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1411) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1412) 		if (WARN_ON_ONCE(info->text_len > data_size)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1413) 			pr_warn_once("wrong text_len value (%hu, expecting <=%u)\n",
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1414) 				     info->text_len, data_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1415) 			info->text_len = data_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1416) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1417) 		r->text_buf_size += info->text_len;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1418) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1419) 		if (!data_check_size(&rb->text_data_ring, r->text_buf_size))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1420) 			goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1421) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1422) 		if (r->text_buf_size > max_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1423) 			goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1424) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1425) 		r->text_buf = data_realloc(rb, &rb->text_data_ring, r->text_buf_size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1426) 					   &d->text_blk_lpos, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1427) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1428) 	if (r->text_buf_size && !r->text_buf)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1429) 		goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1430) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1431) 	r->info = info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1432) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1433) 	e->text_space = space_used(&rb->text_data_ring, &d->text_blk_lpos);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1434) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1435) 	return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1436) fail:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1437) 	prb_commit(e);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1438) 	/* prb_commit() re-enabled interrupts. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1439) fail_reopen:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1440) 	/* Make it clear to the caller that the re-reserve failed. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1441) 	memset(r, 0, sizeof(*r));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1442) 	return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1443) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1444) 
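/*
 * A minimal continuation-writer sketch for the function above. This is
 * illustrative only: @test_rb is assumed to be a ringbuffer defined
 * elsewhere (e.g. via DEFINE_PRINTKRB()) and @caller_id a value chosen
 * by the writer.
 *
 *	struct prb_reserved_entry e;
 *	struct printk_record r;
 *
 *	// Try to append 5 bytes to this caller's last record, allowing
 *	// the record to grow to at most 32 bytes.
 *	prb_rec_init_wr(&r, 5);
 *	if (prb_reserve_in_last(&e, &test_rb, &r, caller_id, 32)) {
 *		// On success @r.text_buf covers the existing text too:
 *		// append at offset @r.info->text_len and update it.
 *		memcpy(&r.text_buf[r.info->text_len], "hello", 5);
 *		r.info->text_len += 5;
 *		prb_commit(&e);
 *	} else {
 *		// Extension failed; reserve a new record instead.
 *	}
 */
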
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1445) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1446)  * Attempt to finalize a specified descriptor. If this fails, the descriptor
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1447)  * is either already final or it will finalize itself when the writer commits.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1448)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1449) static void desc_make_final(struct prb_desc_ring *desc_ring, unsigned long id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1450) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1451) 	unsigned long prev_state_val = DESC_SV(id, desc_committed);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1452) 	struct prb_desc *d = to_desc(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1453) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1454) 	atomic_long_cmpxchg_relaxed(&d->state_var, prev_state_val,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1455) 			DESC_SV(id, desc_finalized)); /* LMM(desc_make_final:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1456) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1457) 	/* Best effort to remember the last finalized @id. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1458) 	atomic_long_set(&desc_ring->last_finalized_id, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1459) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1460) 
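/*
 * To illustrate the transition above (a sketch in terms of the DESC_SV()
 * encoding used throughout this file, which combines the state bits and
 * the ID into one unsigned long):
 *
 *	old = DESC_SV(id, desc_committed);	// only committed...
 *	new = DESC_SV(id, desc_finalized);	// ...becomes finalized
 *	atomic_long_cmpxchg_relaxed(&d->state_var, old, new);
 *
 * If the descriptor is in any other state (still reserved, already
 * finalized or already recycled), the cmpxchg fails and the descriptor
 * is left untouched, which matches the attempt semantics described
 * above.
 */
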
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1461) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1462)  * prb_reserve() - Reserve space in the ringbuffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1463)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1464)  * @e:  The entry structure to setup.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1465)  * @rb: The ringbuffer to reserve data in.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1466)  * @r:  The record structure to allocate buffers for.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1467)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1468)  * This is the public function available to writers to reserve data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1469)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1470)  * The writer specifies the text size to reserve by setting the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1471)  * @text_buf_size field of @r. To ensure proper initialization of @r,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1472)  * prb_rec_init_wr() should be used.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1473)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1474)  * Context: Any context. Disables local interrupts on success.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1475)  * Return: true if at least text data could be allocated, otherwise false.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1476)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1477)  * On success, the fields @info and @text_buf of @r will be set by this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1478)  * function and should be filled in by the writer before committing. Also
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1479)  * on success, prb_record_text_space() can be used on @e to query the actual
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1480)  * space used for the text data block.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1481)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1482)  * Important: @info->text_len needs to be set correctly by the writer in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1483)  *            order for data to be readable and/or extended. Its value
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1484)  *            is initialized to 0.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1485)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1486) bool prb_reserve(struct prb_reserved_entry *e, struct printk_ringbuffer *rb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1487) 		 struct printk_record *r)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1488) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1489) 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1490) 	struct printk_info *info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1491) 	struct prb_desc *d;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1492) 	unsigned long id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1493) 	u64 seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1494) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1495) 	if (!data_check_size(&rb->text_data_ring, r->text_buf_size))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1496) 		goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1497) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1498) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1499) 	 * Descriptors in the reserved state act as blockers to all further
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1500) 	 * reservations once the desc_ring has fully wrapped. Disable
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1501) 	 * interrupts during the reserve/commit window in order to minimize
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1502) 	 * the likelihood of this happening.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1503) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1504) 	local_irq_save(e->irqflags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1505) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1506) 	if (!desc_reserve(rb, &id)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1507) 		/* Descriptor reservation failures are tracked. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1508) 		atomic_long_inc(&rb->fail);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1509) 		local_irq_restore(e->irqflags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1510) 		goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1511) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1512) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1513) 	d = to_desc(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1514) 	info = to_info(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1515) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1516) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1517) 	 * All @info fields (except @seq) are cleared and must be filled in
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1518) 	 * by the writer. Save @seq before clearing because it is used to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1519) 	 * determine the new sequence number.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1520) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1521) 	seq = info->seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1522) 	memset(info, 0, sizeof(*info));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1523) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1524) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1525) 	 * Set the @e fields here so that prb_commit() can be used if
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1526) 	 * text data allocation fails.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1527) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1528) 	e->rb = rb;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1529) 	e->id = id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1530) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1531) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1532) 	 * Initialize the sequence number if it has "never been set".
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1533) 	 * Otherwise just increment it by a full wrap.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1534) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1535) 	 * @seq is considered "never been set" if it has a value of 0,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1536) 	 * _except_ for @infos[0], which was specially setup by the ringbuffer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1537) 	 * initializer and therefore is always considered as set.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1538) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1539) 	 * See the "Bootstrap" comment block in printk_ringbuffer.h for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1540) 	 * details about how the initializer bootstraps the descriptors.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1541) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1542) 	if (seq == 0 && DESC_INDEX(desc_ring, id) != 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1543) 		info->seq = DESC_INDEX(desc_ring, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1544) 	else
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1545) 		info->seq = seq + DESCS_COUNT(desc_ring);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1546) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1547) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1548) 	 * New data is about to be reserved. Once that happens, previous
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1549) 	 * descriptors are no longer able to be extended. Finalize the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1550) 	 * previous descriptor now so that it can be made available to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1551) 	 * readers. (For seq==0 there is no previous descriptor.)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1552) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1553) 	if (info->seq > 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1554) 		desc_make_final(desc_ring, DESC_ID(id - 1));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1555) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1556) 	r->text_buf = data_alloc(rb, &rb->text_data_ring, r->text_buf_size,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1557) 				 &d->text_blk_lpos, id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1558) 	/* If text data allocation fails, a data-less record is committed. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1559) 	if (r->text_buf_size && !r->text_buf) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1560) 		prb_commit(e);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1561) 		/* prb_commit() re-enabled interrupts. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1562) 		goto fail;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1563) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1564) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1565) 	r->info = info;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1566) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1567) 	/* Record full text space used by record. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1568) 	e->text_space = space_used(&rb->text_data_ring, &d->text_blk_lpos);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1569) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1570) 	return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1571) fail:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1572) 	/* Make it clear to the caller that the reserve failed. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1573) 	memset(r, 0, sizeof(*r));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1574) 	return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1575) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1576) 
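/*
 * A minimal writer sketch for the reserve/commit API (illustrative;
 * @test_rb is assumed to be a ringbuffer defined elsewhere, e.g. via
 * DEFINE_PRINTKRB()):
 *
 *	struct prb_reserved_entry e;
 *	struct printk_record r;
 *
 *	prb_rec_init_wr(&r, 6);
 *
 *	if (prb_reserve(&e, &test_rb, &r)) {
 *		memcpy(r.text_buf, "hello!", 6);
 *		// @text_len must be set by the writer (see above).
 *		r.info->text_len = 6;
 *		prb_commit(&e);	// or prb_final_commit(), see below
 *	}
 */
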
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1577) /* Commit the data (possibly finalizing it) and restore interrupts. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1578) static void _prb_commit(struct prb_reserved_entry *e, unsigned long state_val)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1579) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1580) 	struct prb_desc_ring *desc_ring = &e->rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1581) 	struct prb_desc *d = to_desc(desc_ring, e->id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1582) 	unsigned long prev_state_val = DESC_SV(e->id, desc_reserved);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1583) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1584) 	/* Now the writer has finished all writing: LMM(_prb_commit:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1585) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1586) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1587) 	 * Set the descriptor as committed. See "ABA Issues" about why
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1588) 	 * cmpxchg() instead of set() is used.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1589) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1590) 	 * 1. Guarantee all record data is stored before the descriptor state
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1591) 	 *    is stored as committed. A write memory barrier is sufficient
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1592) 	 *    for this. This pairs with desc_read:B and desc_reopen_last:A.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1593) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1594) 	 * 2. Guarantee the descriptor state is stored as committed before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1595) 	 *    re-checking the head ID in order to possibly finalize this
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1596) 	 *    descriptor. This pairs with desc_reserve:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1597) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1598) 	 *    Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1599) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1600) 	 *    If prb_commit:A reads from desc_reserve:D, then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1601) 	 *    desc_make_final:A reads from _prb_commit:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1602) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1603) 	 *    Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1604) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1605) 	 *    MB _prb_commit:B to prb_commit:A
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1606) 	 *       matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1607) 	 *    MB desc_reserve:D to desc_make_final:A
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1608) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1609) 	if (!atomic_long_try_cmpxchg(&d->state_var, &prev_state_val,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1610) 			DESC_SV(e->id, state_val))) { /* LMM(_prb_commit:B) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1611) 		WARN_ON_ONCE(1);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1612) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1613) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1614) 	/* Restore interrupts, the reserve/commit window is finished. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1615) 	local_irq_restore(e->irqflags);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1616) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1617) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1618) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1619)  * prb_commit() - Commit (previously reserved) data to the ringbuffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1620)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1621)  * @e: The entry containing the reserved data information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1622)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1623)  * This is the public function available to writers to commit data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1624)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1625)  * Note that the data is not made available to readers until it is finalized.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1626)  * Finalizing happens automatically when space for the next record is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1627)  * reserved.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1628)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1629)  * See prb_final_commit() for a version of this function that finalizes
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1630)  * immediately.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1631)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1632)  * Context: Any context. Enables local interrupts.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1633)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1634) void prb_commit(struct prb_reserved_entry *e)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1635) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1636) 	struct prb_desc_ring *desc_ring = &e->rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1637) 	unsigned long head_id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1638) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1639) 	_prb_commit(e, desc_committed);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1640) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1641) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1642) 	 * If this descriptor is no longer the head (i.e. a new record has
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1643) 	 * been allocated), extending the data for this record is no longer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1644) 	 * allowed and therefore it must be finalized.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1645) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1646) 	head_id = atomic_long_read(&desc_ring->head_id); /* LMM(prb_commit:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1647) 	if (head_id != e->id)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1648) 		desc_make_final(desc_ring, e->id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1649) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1650) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1651) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1652)  * prb_final_commit() - Commit and finalize (previously reserved) data to
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1653)  *                      the ringbuffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1654)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1655)  * @e: The entry containing the reserved data information.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1656)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1657)  * This is the public function available to writers to commit+finalize data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1658)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1659)  * By finalizing, the data is made immediately available to readers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1660)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1661)  * This function should only be used if there are no intentions of extending
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1662)  * this data using prb_reserve_in_last().
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1663)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1664)  * Context: Any context. Enables local interrupts.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1665)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1666) void prb_final_commit(struct prb_reserved_entry *e)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1667) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1668) 	struct prb_desc_ring *desc_ring = &e->rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1669) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1670) 	_prb_commit(e, desc_finalized);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1671) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1672) 	/* Best effort to remember the last finalized @id. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1673) 	atomic_long_set(&desc_ring->last_finalized_id, e->id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1674) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1675) 
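/*
 * Contrasting the two commit variants (sketch; @e reserved as in the
 * prb_reserve() example above):
 *
 *	prb_commit(&e);		// record becomes readable only once it
 *				// is finalized later; until then the
 *				// same caller may still extend it via
 *				// prb_reserve_in_last()
 *
 *	prb_final_commit(&e);	// record is immediately readable and
 *				// can no longer be extended
 */
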
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1676) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1677)  * Count the number of lines in provided text. All text has at least 1 line
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1678)  * (even if @text_size is 0). Each '\n' processed is counted as an additional
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1679)  * line.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1680)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1681) static unsigned int count_lines(const char *text, unsigned int text_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1682) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1683) 	unsigned int next_size = text_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1684) 	unsigned int line_count = 1;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1685) 	const char *next = text;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1686) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1687) 	while (next_size) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1688) 		next = memchr(next, '\n', next_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1689) 		if (!next)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1690) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1691) 		line_count++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1692) 		next++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1693) 		next_size = text_size - (next - text);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1694) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1695) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1696) 	return line_count;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1697) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1698) 
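/*
 * Examples of the counting rule above (illustrative):
 *
 *	count_lines("", 0)		returns 1   // always >= 1 line
 *	count_lines("a\nb", 3)		returns 2
 *	count_lines("a\nb\n", 4)	returns 3   // trailing '\n' counts
 */
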
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1699) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1700)  * Given @blk_lpos, copy an expected @len of data into the provided buffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1701)  * If @line_count is provided, count the number of lines in the data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1702)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1703)  * This function (used by readers) performs strict validation on the data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1704)  * size to possibly detect bugs in the writer code. A WARN_ON_ONCE() is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1705)  * triggered if an internal error is detected.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1706)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1707) static bool copy_data(struct prb_data_ring *data_ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1708) 		      struct prb_data_blk_lpos *blk_lpos, u16 len, char *buf,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1709) 		      unsigned int buf_size, unsigned int *line_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1710) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1711) 	unsigned int data_size;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1712) 	const char *data;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1713) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1714) 	/* Caller might not want any data. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1715) 	if ((!buf || !buf_size) && !line_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1716) 		return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1717) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1718) 	data = get_data(data_ring, blk_lpos, &data_size);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1719) 	if (!data)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1720) 		return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1721) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1722) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1723) 	 * Actual cannot be less than expected. It can be more than expected
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1724) 	 * because of the trailing alignment padding.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1725) 	 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1726) 	 * Note that invalid @len values can occur because the caller loads
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1727) 	 * the value during an allowed data race.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1728) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1729) 	if (data_size < (unsigned int)len)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1730) 		return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1731) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1732) 	/* Caller interested in the line count? */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1733) 	if (line_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1734) 		*line_count = count_lines(data, len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1735) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1736) 	/* Caller interested in the data content? */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1737) 	if (!buf || !buf_size)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1738) 		return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1739) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1740) 	data_size = min_t(u16, buf_size, len);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1741) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1742) 	memcpy(&buf[0], data, data_size); /* LMM(copy_data:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1743) 	return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1744) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1745) 
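/*
 * A sizing note on the function above (illustrative; @ring and
 * @blk_lpos stand for a data ring and block position obtained by the
 * caller): the copy is silently truncated to the reader's buffer, while
 * a requested line count is always computed over the full @len bytes.
 * For a record whose @text_len is 10:
 *
 *	char buf[4];
 *	unsigned int lines;
 *
 *	copy_data(ring, blk_lpos, 10, buf, sizeof(buf), &lines);
 *	// buf holds only the first 4 bytes; @lines covers all 10.
 */
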
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1746) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1747)  * This is an extended version of desc_read(). It gets a copy of a specified
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1748)  * descriptor. However, it also verifies that the record is finalized and has
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1749)  * the sequence number @seq. On success, 0 is returned.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1750)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1751)  * Error return values:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1752)  * -EINVAL: A finalized record with sequence number @seq does not exist.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1753)  * -ENOENT: A finalized record with sequence number @seq exists, but its data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1754)  *          is not available. This is a valid record, so readers should
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1755)  *          continue with the next record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1756)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1757) static int desc_read_finalized_seq(struct prb_desc_ring *desc_ring,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1758) 				   unsigned long id, u64 seq,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1759) 				   struct prb_desc *desc_out)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1760) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1761) 	struct prb_data_blk_lpos *blk_lpos = &desc_out->text_blk_lpos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1762) 	enum desc_state d_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1763) 	u64 s;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1764) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1765) 	d_state = desc_read(desc_ring, id, desc_out, &s, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1766) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1767) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1768) 	 * An unexpected @id (desc_miss) or @seq mismatch means the record
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1769) 	 * does not exist. A descriptor in the reserved or committed state
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1770) 	 * means the record does not yet exist for the reader.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1771) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1772) 	if (d_state == desc_miss ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1773) 	    d_state == desc_reserved ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1774) 	    d_state == desc_committed ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1775) 	    s != seq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1776) 		return -EINVAL;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1777) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1778) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1779) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1780) 	 * A descriptor in the reusable state may no longer have its data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1781) 	 * available; report it as existing but with lost data. Or the record
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1782) 	 * may actually be a record with lost data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1783) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1784) 	if (d_state == desc_reusable ||
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1785) 	    (blk_lpos->begin == FAILED_LPOS && blk_lpos->next == FAILED_LPOS)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1786) 		return -ENOENT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1787) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1788) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1789) 	return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1790) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1791) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1792) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1793)  * Copy the ringbuffer data from the record with @seq to the provided
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1794)  * @r buffer. On success, 0 is returned.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1795)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1796)  * See desc_read_finalized_seq() for error return values.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1797)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1798) static int prb_read(struct printk_ringbuffer *rb, u64 seq,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1799) 		    struct printk_record *r, unsigned int *line_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1800) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1801) 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1802) 	struct printk_info *info = to_info(desc_ring, seq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1803) 	struct prb_desc *rdesc = to_desc(desc_ring, seq);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1804) 	atomic_long_t *state_var = &rdesc->state_var;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1805) 	struct prb_desc desc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1806) 	unsigned long id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1807) 	int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1808) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1809) 	/* Extract the ID, used to specify the descriptor to read. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1810) 	id = DESC_ID(atomic_long_read(state_var));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1811) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1812) 	/* Get a local copy of the correct descriptor (if available). */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1813) 	err = desc_read_finalized_seq(desc_ring, id, seq, &desc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1814) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1815) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1816) 	 * If @r is NULL, the caller is only interested in the availability
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1817) 	 * of the record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1818) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1819) 	if (err || !r)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1820) 		return err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1821) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1822) 	/* If requested, copy meta data. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1823) 	if (r->info)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1824) 		memcpy(r->info, info, sizeof(*(r->info)));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1825) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1826) 	/* Copy text data. If it fails, this is a data-less record. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1827) 	if (!copy_data(&rb->text_data_ring, &desc.text_blk_lpos, info->text_len,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1828) 		       r->text_buf, r->text_buf_size, line_count)) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1829) 		return -ENOENT;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1830) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1831) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1832) 	/* Ensure the record is still finalized and has the same @seq. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1833) 	return desc_read_finalized_seq(desc_ring, id, seq, &desc);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1834) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1835) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1836) /* Get the sequence number of the tail descriptor. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1837) static u64 prb_first_seq(struct printk_ringbuffer *rb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1838) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1839) 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1840) 	enum desc_state d_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1841) 	struct prb_desc desc;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1842) 	unsigned long id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1843) 	u64 seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1844) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1845) 	for (;;) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1846) 		id = atomic_long_read(&rb->desc_ring.tail_id); /* LMM(prb_first_seq:A) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1847) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1848) 		d_state = desc_read(desc_ring, id, &desc, &seq, NULL); /* LMM(prb_first_seq:B) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1849) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1850) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1851) 		 * This loop will not be infinite because the tail is
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1852) 		 * _always_ in the finalized or reusable state.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1853) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1854) 		if (d_state == desc_finalized || d_state == desc_reusable)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1855) 			break;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1856) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1857) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1858) 		 * Guarantee the last state load from desc_read() is before
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1859) 		 * reloading @tail_id in order to see a new tail in the case
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1860) 		 * that the descriptor has been recycled. This pairs with
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1861) 		 * desc_reserve:D.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1862) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1863) 		 * Memory barrier involvement:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1864) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1865) 		 * If prb_first_seq:B reads from desc_reserve:F, then
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1866) 		 * prb_first_seq:A reads from desc_push_tail:B.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1867) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1868) 		 * Relies on:
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1869) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1870) 		 * MB from desc_push_tail:B to desc_reserve:F
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1871) 		 *    matching
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1872) 		 * RMB prb_first_seq:B to prb_first_seq:A
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1873) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1874) 		smp_rmb(); /* LMM(prb_first_seq:C) */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1875) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1876) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1877) 	return seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1878) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1879) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1880) /*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1881)  * Non-blocking read of a record. Updates @seq to the last finalized record
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1882)  * (which may have no data available).
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1883)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1884)  * See the description of prb_read_valid() and prb_read_valid_info()
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1885)  * for details.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1886)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1887) static bool _prb_read_valid(struct printk_ringbuffer *rb, u64 *seq,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1888) 			    struct printk_record *r, unsigned int *line_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1889) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1890) 	u64 tail_seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1891) 	int err;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1892) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1893) 	while ((err = prb_read(rb, *seq, r, line_count))) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1894) 		tail_seq = prb_first_seq(rb);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1895) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1896) 		if (*seq < tail_seq) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1897) 			/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1898) 			 * Behind the tail. Catch up and try again. This
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1899) 			 * can happen for -ENOENT and -EINVAL cases.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1900) 			 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1901) 			*seq = tail_seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1902) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1903) 		} else if (err == -ENOENT) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1904) 			/* Record exists, but no data available. Skip. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1905) 			(*seq)++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1906) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1907) 		} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1908) 			/* Non-existent/non-finalized record. Must stop. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1909) 			return false;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1910) 		}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1911) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1912) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1913) 	return true;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1914) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1915) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1916) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1917)  * prb_read_valid() - Non-blocking read of a requested record or (if gone)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1918)  *                    the next available record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1919)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1920)  * @rb:  The ringbuffer to read from.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1921)  * @seq: The sequence number of the record to read.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1922)  * @r:   A record data buffer to store the read record to.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1923)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1924)  * This is the public function available to readers to read a record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1925)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1926)  * The reader provides the @info and @text_buf buffers of @r to be
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1927)  * filled in. Any of the buffer pointers can be set to NULL if the reader
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1928)  * is not interested in that data. To ensure proper initialization of @r,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1929)  * prb_rec_init_rd() should be used.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1930)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1931)  * Context: Any context.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1932)  * Return: true if a record was read, otherwise false.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1933)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1934)  * On success, the reader must check r->info.seq to see which record was
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1935)  * actually read. This allows the reader to detect dropped records.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1936)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1937)  * Failure means @seq refers to a not yet written record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1938)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1939) bool prb_read_valid(struct printk_ringbuffer *rb, u64 seq,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1940) 		    struct printk_record *r)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1941) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1942) 	return _prb_read_valid(rb, &seq, r, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1943) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1944) 
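/*
 * A minimal reader sketch (illustrative; @test_rb as above). The
 * prb_for_each_record() helper in printk_ringbuffer.h wraps the same
 * loop:
 *
 *	struct printk_info info;
 *	struct printk_record r;
 *	char text_buf[32];
 *	u64 seq = 0;
 *
 *	prb_rec_init_rd(&r, &info, &text_buf[0], sizeof(text_buf));
 *
 *	while (prb_read_valid(&test_rb, seq, &r)) {
 *		if (info.seq != seq)
 *			; // (info.seq - seq) records were dropped
 *
 *		// process record @r ...
 *
 *		seq = info.seq + 1;
 *	}
 */
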
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1945) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1946)  * prb_read_valid_info() - Non-blocking read of meta data for a requested
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1947)  *                         record or (if gone) the next available record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1948)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1949)  * @rb:         The ringbuffer to read from.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1950)  * @seq:        The sequence number of the record to read.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1951)  * @info:       A buffer to store the read record meta data to.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1952)  * @line_count: A buffer to store the number of lines in the record text.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1953)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1954)  * This is the public function available to readers to read only the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1955)  * meta data of a record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1956)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1957)  * The reader provides the @info, @line_count buffers to be filled in.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1958)  * Either of the buffer pointers can be set to NULL if the reader is not
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1959)  * interested in that data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1960)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1961)  * Context: Any context.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1962)  * Return: true if a record's meta data was read, otherwise false.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1963)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1964)  * On success, the reader must check info->seq to see which record meta data
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1965)  * was actually read. This allows the reader to detect dropped records.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1966)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1967)  * Failure means @seq refers to a not yet written record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1968)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1969) bool prb_read_valid_info(struct printk_ringbuffer *rb, u64 seq,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1970) 			 struct printk_info *info, unsigned int *line_count)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1971) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1972) 	struct printk_record r;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1973) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1974) 	prb_rec_init_rd(&r, info, NULL, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1975) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1976) 	return _prb_read_valid(rb, &seq, &r, line_count);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1977) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1978) 
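/*
 * A meta-data-only iteration sketch (illustrative), e.g. for sizing an
 * output buffer without copying any text. prb_for_each_info() from
 * printk_ringbuffer.h wraps prb_read_valid_info() in the same way that
 * prb_for_each_record() wraps prb_read_valid():
 *
 *	struct printk_info info;
 *	unsigned int line_count;
 *	u64 seq;
 *
 *	prb_for_each_info(0, &test_rb, seq, &info, &line_count) {
 *		// @info.text_len and @line_count describe record @info.seq
 *	}
 */
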
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1979) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1980)  * prb_first_valid_seq() - Get the sequence number of the oldest available
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1981)  *                         record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1982)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1983)  * @rb: The ringbuffer to get the sequence number from.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1984)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1985)  * This is the public function available to readers to see what the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1986)  * first/oldest valid sequence number is.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1987)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1988)  * This provides readers a starting point to begin iterating the ringbuffer.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1989)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1990)  * Context: Any context.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1991)  * Return: The sequence number of the first/oldest record or, if the
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1992)  *         ringbuffer is empty, 0 is returned.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1993)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1994) u64 prb_first_valid_seq(struct printk_ringbuffer *rb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1995) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1996) 	u64 seq = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1997) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1998) 	if (!_prb_read_valid(rb, &seq, NULL, NULL))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 1999) 		return 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2000) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2001) 	return seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2002) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2003) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2004) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2005)  * prb_next_seq() - Get the sequence number after the last available record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2006)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2007)  * @rb:  The ringbuffer to get the sequence number from.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2008)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2009)  * This is the public function available to readers to see what the next
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2010)  * newest sequence number available to readers will be.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2011)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2012)  * This provides readers a sequence number to jump to if all currently
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2013)  * available records should be skipped.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2014)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2015)  * Context: Any context.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2016)  * Return: The sequence number of the next newest (not yet available) record
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2017)  *         for readers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2018)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2019) u64 prb_next_seq(struct printk_ringbuffer *rb)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2020) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2021) 	struct prb_desc_ring *desc_ring = &rb->desc_ring;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2022) 	enum desc_state d_state;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2023) 	unsigned long id;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2024) 	u64 seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2025) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2026) 	/* Check if the cached @id still points to a valid @seq. */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2027) 	id = atomic_long_read(&desc_ring->last_finalized_id);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2028) 	d_state = desc_read(desc_ring, id, NULL, &seq, NULL);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2029) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2030) 	if (d_state == desc_finalized || d_state == desc_reusable) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2031) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2032) 		 * Begin searching after the last finalized record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2033) 		 *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2034) 		 * On 0, the search must begin at 0 because, due to hack#2
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2035) 		 * of the bootstrapping phase, it is not known whether a
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2036) 		 * record at index 0 exists.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2037) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2038) 		if (seq != 0)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2039) 			seq++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2040) 	} else {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2041) 		/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2042) 		 * The information about the last finalized sequence number
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2043) 		 * has gone. This should only happen when there is a flood of
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2044) 		 * new messages and the ringbuffer is rapidly recycled.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2045) 		 * Give up and start from the beginning.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2046) 		 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2047) 		seq = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2048) 	}
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2049) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2050) 	/*
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2051) 	 * The information about the last finalized @seq might be inaccurate.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2052) 	 * Search forward to find the current one.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2053) 	 */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2054) 	while (_prb_read_valid(rb, &seq, NULL, NULL))
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2055) 		seq++;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2056) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2057) 	return seq;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2058) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2059) 
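/*
 * Example use of prb_next_seq() (sketch; @r initialized as in the
 * reader sketch above): skip everything currently in the ringbuffer and
 * later process only records written from now on:
 *
 *	u64 seq = prb_next_seq(&test_rb);
 *
 *	// ... later ...
 *	while (prb_read_valid(&test_rb, seq, &r))
 *		seq = r.info->seq + 1;
 */
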
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2060) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2061)  * prb_init() - Initialize a ringbuffer to use provided external buffers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2062)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2063)  * @rb:       The ringbuffer to initialize.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2064)  * @text_buf: The data buffer for text data.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2065)  * @textbits: The size of @text_buf as a power-of-2 value.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2066)  * @descs:    The descriptor buffer for ringbuffer records.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2067)  * @descbits: The count of @descs items as a power-of-2 value.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2068)  * @infos:    The printk_info buffer for ringbuffer records.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2069)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2070)  * This is the public function available to writers to setup a ringbuffer
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2071)  * during runtime using provided buffers.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2072)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2073)  * This must match the initialization of DEFINE_PRINTKRB().
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2074)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2075)  * Context: Any context.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2076)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2077) void prb_init(struct printk_ringbuffer *rb,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2078) 	      char *text_buf, unsigned int textbits,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2079) 	      struct prb_desc *descs, unsigned int descbits,
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2080) 	      struct printk_info *infos)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2081) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2082) 	memset(descs, 0, _DESCS_COUNT(descbits) * sizeof(descs[0]));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2083) 	memset(infos, 0, _DESCS_COUNT(descbits) * sizeof(infos[0]));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2084) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2085) 	rb->desc_ring.count_bits = descbits;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2086) 	rb->desc_ring.descs = descs;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2087) 	rb->desc_ring.infos = infos;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2088) 	atomic_long_set(&rb->desc_ring.head_id, DESC0_ID(descbits));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2089) 	atomic_long_set(&rb->desc_ring.tail_id, DESC0_ID(descbits));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2090) 	atomic_long_set(&rb->desc_ring.last_finalized_id, DESC0_ID(descbits));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2091) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2092) 	rb->text_data_ring.size_bits = textbits;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2093) 	rb->text_data_ring.data = text_buf;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2094) 	atomic_long_set(&rb->text_data_ring.head_lpos, BLK0_LPOS(textbits));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2095) 	atomic_long_set(&rb->text_data_ring.tail_lpos, BLK0_LPOS(textbits));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2096) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2097) 	atomic_long_set(&rb->fail, 0);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2098) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2099) 	atomic_long_set(&(descs[_DESCS_COUNT(descbits) - 1].state_var), DESC0_SV(descbits));
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2100) 	descs[_DESCS_COUNT(descbits) - 1].text_blk_lpos.begin = FAILED_LPOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2101) 	descs[_DESCS_COUNT(descbits) - 1].text_blk_lpos.next = FAILED_LPOS;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2102) 
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2103) 	infos[0].seq = -(u64)_DESCS_COUNT(descbits);
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2104) 	infos[_DESCS_COUNT(descbits) - 1].seq = 0;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2105) }
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2106) 
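/*
 * A runtime setup sketch (illustrative). Sizes are passed as power-of-2
 * exponents, so this creates a 4 KiB text ring with 16 descriptors (and
 * one printk_info per descriptor):
 *
 *	static char text_buf[1U << 12];
 *	static struct prb_desc descs[1U << 4];
 *	static struct printk_info infos[1U << 4];
 *	static struct printk_ringbuffer rb;
 *
 *	prb_init(&rb, &text_buf[0], 12, descs, 4, infos);
 */
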
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2107) /**
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2108)  * prb_record_text_space() - Query the full actual used ringbuffer space for
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2109)  *                           the text data of a reserved entry.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2110)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2111)  * @e: The successfully reserved entry to query.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2112)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2113)  * This is the public function available to writers to see how much actual
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2114)  * space is used in the ringbuffer to store the text data of the specified
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2115)  * entry.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2116)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2117)  * This function is only valid if @e has been successfully reserved using
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2118)  * prb_reserve().
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2119)  *
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2120)  * Context: Any context.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2121)  * Return: The size in bytes used by the text data of the associated record.
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2122)  */
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2123) unsigned int prb_record_text_space(struct prb_reserved_entry *e)
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2124) {
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2125) 	return e->text_space;
^8f3ce5b39 (kx 2023-10-28 12:00:06 +0300 2126) }
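
/*
 * Note (illustrative): because a data block also stores the ID header
 * and is padded for alignment, the reported space can exceed the
 * requested size:
 *
 *	prb_rec_init_wr(&r, 1);
 *	if (prb_reserve(&e, &test_rb, &r)) {
 *		// prb_record_text_space(&e) is at least
 *		// sizeof(unsigned long) + 1, rounded up for alignment.
 *		prb_commit(&e);
 *	}
 */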