=============================
Device Driver Design Patterns
=============================

This document describes a few common design patterns found in device drivers.
It is likely that subsystem maintainers will ask driver developers to
conform to these design patterns.

1. State Container
2. container_of()


1. State Container
~~~~~~~~~~~~~~~~~~

While the kernel contains a few device drivers that assume they will only
be probed once on a certain system (singletons), it is customary to assume
that the device the driver binds to will appear in several instances. This
means that the probe() function and all callbacks need to be reentrant.

The most common way to achieve this is to use the state container design
pattern. It usually has this form::

  struct foo {
      spinlock_t lock; /* Example member */
      (...)
  };

  static int foo_probe(...)
  {
      struct foo *foo;

      foo = devm_kzalloc(dev, sizeof(*foo), GFP_KERNEL);
      if (!foo)
          return -ENOMEM;
      spin_lock_init(&foo->lock);
      (...)
  }

This will create an instance of struct foo in memory every time probe() is
called. This is our state container for this instance of the device driver.
It is then necessary to always pass this instance of the state around to
all functions that need access to the state and its members.

For example, if the driver is registering an interrupt handler, you would
pass around a pointer to struct foo like this::

  static irqreturn_t foo_handler(int irq, void *arg)
  {
      struct foo *foo = arg;
      (...)
  }

  static int foo_probe(...)
  {
      struct foo *foo;
      int ret;

      (...)
      ret = request_irq(irq, foo_handler, 0, "foo", foo);
  }

This way you always get a pointer back to the correct instance of foo in
your interrupt handler.

2. container_of()
~~~~~~~~~~~~~~~~~

Continuing from the above example, we add offloaded work::

  struct foo {
      spinlock_t lock;
      struct workqueue_struct *wq;
      struct work_struct offload;
      (...)
  };

  static void foo_work(struct work_struct *work)
  {
      struct foo *foo = container_of(work, struct foo, offload);

      (...)
  }

  static irqreturn_t foo_handler(int irq, void *arg)
  {
      struct foo *foo = arg;

      queue_work(foo->wq, &foo->offload);
      (...)
  }

  static int foo_probe(...)
  {
      struct foo *foo;

      foo = devm_kzalloc(dev, sizeof(*foo), GFP_KERNEL);
      if (!foo)
          return -ENOMEM;
      foo->wq = create_singlethread_workqueue("foo-wq");
      if (!foo->wq)
          return -ENOMEM;
      INIT_WORK(&foo->offload, foo_work);
      (...)
  }

The design pattern is the same for an hrtimer or anything else whose
callback receives only a pointer to an embedded struct member rather than
a pointer to the state container itself.
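
For instance, an hrtimer callback is handed a pointer to the embedded
struct hrtimer, and container_of() recovers the state container the same
way (a sketch following the example above; the clock and mode arguments
are illustrative)::

  struct foo {
      struct hrtimer timer;
      (...)
  };

  static enum hrtimer_restart foo_timer(struct hrtimer *t)
  {
      struct foo *foo = container_of(t, struct foo, timer);

      (...)
      return HRTIMER_NORESTART;
  }

  static int foo_probe(...)
  {
      (...)
      hrtimer_init(&foo->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
      foo->timer.function = foo_timer;
      (...)
  }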

container_of() is a macro defined in <linux/container_of.h> (formerly in
<linux/kernel.h>).

What container_of() does is obtain a pointer to the containing struct from
a pointer to one of its members, by a simple subtraction using the
offsetof() macro from standard C. This allows something similar to
object-oriented behaviour. Notice that the contained member must not be a
pointer, but an actual member, for this to work.

We can see that this way we avoid global pointers to our struct foo
instances, while still keeping the number of parameters passed to the work
function down to a single pointer.