TODO
====

There is a potential for deadlock when allocating a struct sk_buff for
data that needs to be written out to aoe storage. If the data is
being written from a dirty page in order to free that page, and if
there are no other pages available, then deadlock may occur when a
free page is needed for the sk_buff allocation. This situation has
not been observed, but it would be nice to eliminate any potential for
deadlock under memory pressure.

Because ATA over Ethernet is not fragmented by the kernel's IP code,
the destructor member of the struct sk_buff is available to the aoe
driver. By using a mempool for allocating all but the first few
sk_buffs, and by registering a destructor, we should be able to
efficiently allocate sk_buffs without introducing any potential for
deadlock.
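
The sketch below illustrates one way the mempool half of that idea
could look. It is not code from the aoe driver: the names
aoe_skb_pool, aoe_alloc_frame, and aoe_skb_destructor, as well as the
reserve and frame sizes, are invented for illustration, and only the
standard mempool and sk_buff interfaces (mempool_create, mempool_alloc,
alloc_skb, and the skb->destructor hook) are assumed. How a completed
frame finds its way back to the pool is left open here, just as it is
in the paragraph above.

/*
 * Illustrative sketch only, not code from the aoe driver.  The names
 * aoe_skb_pool, aoe_alloc_frame, and aoe_skb_destructor are invented
 * for this example; only mempool_create(), mempool_alloc(),
 * alloc_skb(), and the skb->destructor hook are standard interfaces.
 */
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mempool.h>
#include <linux/skbuff.h>

#define AOE_SKB_RESERVE	16	/* sk_buffs held in reserve */
#define AOE_FRAME_SIZE	1536	/* arbitrary example frame size */

static mempool_t *aoe_skb_pool;

/* mempool callbacks: the pool's elements are ordinary sk_buffs */
static void *aoe_skb_pool_alloc(gfp_t gfp, void *pool_data)
{
	return alloc_skb(AOE_FRAME_SIZE, gfp);
}

static void aoe_skb_pool_free(void *element, void *pool_data)
{
	kfree_skb(element);
}

static int __init aoe_skb_pool_init(void)
{
	aoe_skb_pool = mempool_create(AOE_SKB_RESERVE, aoe_skb_pool_alloc,
				      aoe_skb_pool_free, NULL);
	return aoe_skb_pool ? 0 : -ENOMEM;
}

/*
 * Called by the network layer when it drops its last reference to the
 * frame.  The sk_buff head itself is freed immediately afterward, so
 * this hook can only signal completion (wake a waiter, decrement an
 * in-flight count); replenishing the pool has to happen elsewhere.
 */
static void aoe_skb_destructor(struct sk_buff *skb)
{
	/* e.g. atomic_dec(&aoe_frames_in_flight); */
}

static struct sk_buff *aoe_alloc_frame(void)
{
	struct sk_buff *skb;

	/* Falls back to the preallocated reserve if alloc_skb() fails. */
	skb = mempool_alloc(aoe_skb_pool, GFP_NOIO);
	if (skb)
		skb->destructor = aoe_skb_destructor;
	return skb;
}

GFP_NOIO is used in the sketch because the allocation happens while
dirty data is being written out; letting the allocator recurse into
block I/O at that point is exactly the deadlock described above.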