- support asynchronous operation -- add a per-fs 'reserved_space' count,
  let each outstanding write reserve the _maximum_ amount of physical
  space it could take. Let GC flush the outstanding writes because the
  reservations will necessarily be pessimistic. With this we could even
  do shared writable mmap, if we can have a fs hook for do_wp_page() to
  make the reservation. (A rough sketch of the accounting follows this list.)
- disable compression in commit_write()?
- fine-tune the allocation / GC thresholds
- chattr support - turning on/off and tuning compression per-inode
- checkpointing (do we need this? scan is quite fast)
- make the scan code populate real inodes so read_inode just after
  mount doesn't have to read the flash twice for large files.
  Make this a per-inode option, changeable with chattr, so you can
  decide which inodes should be in-core immediately after mount.
- test, test, test
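A minimal, self-contained sketch of the pessimistic reservation accounting
described in the first item above. The names (struct fs_space, reserve_write(),
complete_write()) are illustrative assumptions, not existing JFFS2 code; only
'reserved_space' comes from the item itself.

    /* Sketch only: pessimistic per-fs reservation accounting. */
    #include <stdint.h>
    #include <stdbool.h>

    struct fs_space {
        uint32_t free_space;        /* physical space not yet written */
        uint32_t reserved_space;    /* worst-case total of outstanding writes */
    };

    /* Reserve the worst case for one outstanding write.  If even the
     * pessimistic amount does not fit, the caller must let GC flush the
     * outstanding writes (or fall back to writing synchronously). */
    static bool reserve_write(struct fs_space *fs, uint32_t worst_case)
    {
        if (fs->free_space - fs->reserved_space < worst_case)
            return false;
        fs->reserved_space += worst_case;
        return true;
    }

    /* When the write actually reaches the flash its real size is known and
     * is at most the reserved worst case; release the reservation and
     * account only what was really used. */
    static void complete_write(struct fs_space *fs, uint32_t worst_case,
                               uint32_t actual)
    {
        fs->reserved_space -= worst_case;
        fs->free_space     -= actual;
    }

The same reserve/complete pair would be the natural place to hook a
do_wp_page() callback, so that shared writable mmap could make its
reservation before a page is allowed to become dirty.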

- NAND flash support:
 - almost done :)
 - use bad block check instead of the hardwired byte check (see the sketch below)

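A possible shape for the bad block check above, assuming the MTD layer's
mtd_block_isbad() helper (present in current kernels). jffs2_block_is_usable()
is a made-up illustrative name, not an existing function.

    #include <linux/mtd/mtd.h>

    /* Ask the MTD layer (bad block table / OOB markers) whether an erase
     * block is usable, instead of checking a hardwired byte ourselves. */
    static int jffs2_block_is_usable(struct mtd_info *mtd, loff_t block_ofs)
    {
        int ret = mtd_block_isbad(mtd, block_ofs);

        if (ret < 0)
            return ret;        /* I/O error while querying the BBT/OOB */
        return ret ? 0 : 1;    /* 1 = usable, 0 = marked bad */
    }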
- Optimisations:
 - Split writes so they go to two separate blocks rather than just c->nextblock.
   By writing _new_ nodes to one block, and garbage-collected REF_PRISTINE
   nodes to a different one, we can separate clean nodes from those which
   are likely to become dirty, and end up with blocks which are each far
   closer to 100% or 0% clean, hence speeding up later GC progress
   dramatically. (Sketched below, after this list.)
 - Stop keeping the name in-core in struct jffs2_full_dirent. If we keep the
   hash in the full dirent, we only need to go to the flash in lookup() when
   we think we've got a match, and in readdir(). (Also sketched below.)
 - Doubly-linked next_in_ino list to allow us to free obsoleted raw_node_refs
   immediately?
 - Remove size from jffs2_raw_node_frag.
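A rough sketch of the split-write policy from the first optimisation above.
The names (struct write_policy, enum node_origin, pick_block()) are assumptions
for illustration, not existing JFFS2 structures; only c->nextblock and
REF_PRISTINE come from the item itself.

    /* Keep two open erase blocks so freshly written nodes (likely to be
     * obsoleted soon) never share a block with long-lived data that GC
     * has just moved as REF_PRISTINE. */
    struct erase_block;                     /* opaque here */

    struct write_policy {
        struct erase_block *new_block;      /* for new nodes */
        struct erase_block *gc_block;       /* for GC'd REF_PRISTINE nodes */
    };

    enum node_origin { NODE_NEW, NODE_GC_PRISTINE };

    static struct erase_block *pick_block(struct write_policy *wp,
                                          enum node_origin origin)
    {
        /* Blocks end up close to 100% clean (all pristine) or 0% clean
         * (all soon-to-be-dirty), which makes later GC much cheaper. */
        return origin == NODE_GC_PRISTINE ? wp->gc_block : wp->new_block;
    }

And a sketch of the hash-only dirent idea, again with made-up names
(struct slim_dirent, name_hash(), read_name_from_flash()); the point is that
the flash is only touched for entries whose hash already matches, and in
readdir().

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    struct slim_dirent {
        struct slim_dirent *next;
        uint32_t ino;
        uint32_t name_hash;                 /* hash of the on-flash name */
        uint32_t flash_ofs;                 /* where the full dirent node lives */
    };

    /* Assumed helpers: the hash used when the dirent was built, and a read
     * of the stored (NUL-terminated) name from the node at flash_ofs. */
    uint32_t name_hash(const char *name, size_t len);
    int read_name_from_flash(uint32_t flash_ofs, char *buf, size_t buflen);

    /* lookup(): compare hashes in core, and only read the name back from
     * the flash for entries whose hash says they might match. */
    static uint32_t slim_lookup(struct slim_dirent *list,
                                const char *name, size_t len)
    {
        uint32_t h = name_hash(name, len);
        char buf[256];
        struct slim_dirent *fd;

        for (fd = list; fd; fd = fd->next) {
            if (fd->name_hash != h)
                continue;                   /* definitely not it, no I/O */
            if (read_name_from_flash(fd->flash_ofs, buf, sizeof(buf)))
                continue;                   /* sketch: skip on read error */
            if (strlen(buf) == len && !memcmp(buf, name, len))
                return fd->ino;             /* confirmed match */
        }
        return 0;                           /* not found */
    }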

dedekind:
1. __jffs2_flush_wbuf() has a strange 'pad' parameter. Eliminate.
2. get_sb()->build_fs()->scan() path... Why does get_sb() remove scan()'s crap
   in case of failure? scan() does not clean up everything. Fix.