=================================================================
CPU Scheduler implementation hints for architecture specific code
=================================================================

Nick Piggin, 2005

Context switch
==============
1. Runqueue locking
By default, the switch_to arch function is called with the runqueue
locked. This is usually not a problem unless switch_to may need to
take the runqueue lock itself, usually because of a wake up operation
in the context switch. See arch/ia64/include/asm/switch_to.h for an
example.

To request the scheduler call switch_to with the runqueue unlocked,
you must `#define __ARCH_WANT_UNLOCKED_CTXSW` in a header file
(typically the one where switch_to is defined).

Unlocked context switches introduce only a very minor performance
penalty to the core scheduler implementation in the CONFIG_SMP case.

CPU idle
========
Your cpu_idle routines need to obey the following rules:

1. Preempt should now be disabled over idle routines. It should only
   be enabled to call schedule() and then disabled again.

2. need_resched/TIF_NEED_RESCHED is only ever set, and will never
   be cleared until the running task has called schedule(). Idle
   threads need only ever query need_resched, and may never set or
   clear it.

3. When cpu_idle finds need_resched() true, it should call
   schedule(). It should not call schedule() otherwise.

4. The only time interrupts need to be disabled when checking
   need_resched is if we are about to sleep the processor until
   the next interrupt (this doesn't provide any protection of
   need_resched, it prevents losing an interrupt):

	4a. A common problem with this type of sleep appears to be::

	        local_irq_disable();
	        if (!need_resched()) {
	                local_irq_enable();
	                *** resched interrupt arrives here ***
	                __asm__("sleep until next interrupt");
	        }

5. TIF_POLLING_NRFLAG can be set by idle routines that do not
   need an interrupt to wake them up when need_resched goes high.
   In other words, they must be periodically polling need_resched,
   although it may be reasonable to do some background work or run
   at a low CPU priority.

   5a. If TIF_POLLING_NRFLAG is set, and we do decide to enter
       an interrupt sleep, it needs to be cleared and then a memory
       barrier issued (followed by a test of need_resched with
       interrupts disabled, as explained in 4).

arch/x86/kernel/process.c has examples of both polling and
sleeping idle functions.


Possible arch/ problems
=======================

Possible arch problems I found (and either tried to fix or didn't):

ia64 - is safe_halt call racy vs interrupts? (does it sleep?) (See #4a)

sh64 - Is sleeping racy vs interrupts? (See #4a)

sparc - IRQs on at this point(?), change local_irq_save to _disable.
      - TODO: needs secondary CPUs to disable preempt (See #1)