======================
Function Tracer Design
======================

:Author: Mike Frysinger

.. caution::
	This document is out of date. Some of the descriptions below no longer
	match the current implementation.

Introduction
------------

Here we will cover the architecture pieces that the common function tracing
code relies on for proper functioning. Things are broken down in order of
increasing complexity so that you can start simple and at least get basic
functionality.

Note that this focuses on architecture implementation details only. If you
want more explanation of a feature in terms of common code, review the common
ftrace.rst file.

Ideally, everyone who wishes to retain performance while supporting tracing in
their kernel should make it all the way to dynamic ftrace support.


Prerequisites
-------------

Ftrace relies on these features being implemented:

- STACKTRACE_SUPPORT - implement save_stack_trace()
- TRACE_IRQFLAGS_SUPPORT - implement include/asm/irqflags.h
  (a minimal sketch follows this list)
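
For the irqflags side, the common code expects a small set of inline helpers
from your asm/irqflags.h. Below is a minimal sketch, assuming a hypothetical
architecture whose interrupt-enable state lives in a status register reached
through made-up read_sr()/write_sr() helpers and an SR_IE bit -- substitute
your architecture's real mechanism::

	/* arch/<your-arch>/include/asm/irqflags.h (sketch) */
	#ifndef _ASM_IRQFLAGS_H
	#define _ASM_IRQFLAGS_H

	static inline unsigned long arch_local_save_flags(void)
	{
		return read_sr();		/* hypothetical register read */
	}

	static inline void arch_local_irq_restore(unsigned long flags)
	{
		write_sr(flags);		/* hypothetical register write */
	}

	static inline unsigned long arch_local_irq_save(void)
	{
		unsigned long flags = arch_local_save_flags();

		write_sr(flags & ~SR_IE);	/* hypothetical enable bit */
		return flags;
	}

	static inline void arch_local_irq_enable(void)
	{
		write_sr(arch_local_save_flags() | SR_IE);
	}

	static inline void arch_local_irq_disable(void)
	{
		write_sr(arch_local_save_flags() & ~SR_IE);
	}

	static inline int arch_irqs_disabled_flags(unsigned long flags)
	{
		return !(flags & SR_IE);
	}

	static inline int arch_irqs_disabled(void)
	{
		return arch_irqs_disabled_flags(arch_local_save_flags());
	}

	#endif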


HAVE_FUNCTION_TRACER
--------------------

You will need to implement the mcount and the ftrace_stub functions.

The exact mcount symbol name will depend on your toolchain. Some call it
"mcount", "_mcount", or even "__mcount". You can probably figure it out by
running something like::

	$ echo 'main(){}' | gcc -x c -S -o - - -pg | grep mcount
		call	mcount

We'll assume below that the symbol is "mcount" just to keep the examples
nice and simple.

Keep in mind that the ABI that is in effect inside of the mcount function is
*highly* architecture/toolchain specific. We cannot help you in this regard,
sorry. Dig up some old documentation and/or find someone more familiar than
you to bang ideas off of. Typically, register usage (argument/scratch/etc...)
is a major issue at this point, especially in relation to the location of the
mcount call (before/after function prologue). You might also want to look at
how glibc has implemented the mcount function for your architecture. It might
be (semi-)relevant.

The mcount function should check the function pointer ftrace_trace_function
to see if it is set to ftrace_stub. If it is, there is nothing for you to do,
so return immediately. If it isn't, then call that function in the same way
the mcount function normally calls __mcount_internal -- the first argument is
the "frompc" while the second argument is the "selfpc" (adjusted to remove the
size of the mcount call that is embedded in the function).

For example, if the function foo() calls bar(), when the bar() function calls
mcount(), the arguments mcount() will pass to the tracer are:

- "frompc" - the address bar() will use to return to foo()
- "selfpc" - the address of bar() (adjusted by the size of the mcount call)

Also keep in mind that this mcount function will be called *a lot*, so
optimizing for the default case of no tracer will help the smooth running of
your system when tracing is disabled. So the start of the mcount function
typically does the bare minimum of checking before returning. That also
means the code flow should usually be kept linear (i.e. no branching in the
nop case). This is of course an optimization and not a hard requirement.

Here is some pseudo code that should help (these functions should actually be
implemented in assembly)::

	void ftrace_stub(void)
	{
		return;
	}

	void mcount(void)
	{
		/* save any bare state needed in order to do initial checking */

		extern void (*ftrace_trace_function)(unsigned long, unsigned long);
		if (ftrace_trace_function != ftrace_stub)
			goto do_trace;

		/* restore any bare state */

		return;

	do_trace:

		/* save all state needed by the ABI (see paragraph above) */

		unsigned long frompc = ...;
		unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;
		ftrace_trace_function(frompc, selfpc);

		/* restore all state needed by the ABI */
	}

Don't forget to export mcount for modules!
::

	extern void mcount(void);
	EXPORT_SYMBOL(mcount);


HAVE_FUNCTION_GRAPH_TRACER
--------------------------

Deep breath ... time to do some real work. Here you will need to update the
mcount function to check ftrace graph function pointers, as well as implement
some functions to save (hijack) and restore the return address.

The mcount function should check the function pointers ftrace_graph_return
(compare to ftrace_stub) and ftrace_graph_entry (compare to
ftrace_graph_entry_stub). If either of those is not set to the relevant stub
function, call the arch-specific function ftrace_graph_caller which in turn
calls the arch-specific function prepare_ftrace_return. Neither of these
function names is strictly required, but you should use them anyway to stay
consistent across the architecture ports -- easier to compare & contrast
things.

The arguments to prepare_ftrace_return are slightly different from those
passed to ftrace_trace_function. The second argument "selfpc" is the same,
but the first argument should be a pointer to the "frompc". Typically this is
located on the stack. This allows the function to hijack the return address
temporarily to have it point to the arch-specific function return_to_handler.
That function will simply call the common ftrace_return_to_handler function and
that will return the original return address with which you can return to the
original call site.

Here is the updated mcount pseudo code::

	void mcount(void)
	{
		...
		if (ftrace_trace_function != ftrace_stub)
			goto do_trace;

	+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	+	extern void (*ftrace_graph_return)(...);
	+	extern void (*ftrace_graph_entry)(...);
	+	if (ftrace_graph_return != ftrace_stub ||
	+	    ftrace_graph_entry != ftrace_graph_entry_stub)
	+		ftrace_graph_caller();
	+#endif

		/* restore any bare state */
		...

Here is the pseudo code for the new ftrace_graph_caller assembly function::

	#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	void ftrace_graph_caller(void)
	{
		/* save all state needed by the ABI */

		unsigned long *frompc = &...;
		unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;
		/* passing frame pointer up is optional -- see below */
		prepare_ftrace_return(frompc, selfpc, frame_pointer);

		/* restore all state needed by the ABI */
	}
	#endif

For information on how to implement prepare_ftrace_return(), simply look at the
x86 version (the frame pointer passing is optional; see the next section for
more information). The only architecture-specific piece in it is the setup of
the fault recovery table (the asm(...) code). The rest should be the same
across architectures.
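
To make that shape concrete, here is a condensed C sketch modeled on the
older x86 implementation. The fault-recovery table is omitted, and the
common helpers (ftrace_push_return_trace() in particular) have changed
signature over time, so treat this as a sketch rather than something to
copy verbatim::

	void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
				   unsigned long frame_pointer)
	{
		unsigned long return_hooker = (unsigned long)&return_to_handler;
		unsigned long old = *parent;	/* the real "frompc" */
		struct ftrace_graph_ent trace;

		/* Hijack: the traced function will now "return" into
		 * return_to_handler instead of into its caller.  (The real
		 * x86 code reads/writes *parent through a fault-recovery
		 * table; that part is skipped here.) */
		*parent = return_hooker;

		trace.func = self_addr;
		trace.depth = current->curr_ret_stack + 1;

		/* Only trace if the tracer's entry handler accepts this call */
		if (!ftrace_graph_entry(&trace)) {
			*parent = old;
			return;
		}

		/* Record the original return address so return_to_handler
		 * can find its way back; undo the hijack if the per-task
		 * return stack is full. */
		if (ftrace_push_return_trace(old, self_addr, &trace.depth,
					     frame_pointer) == -EBUSY)
			*parent = old;
	}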

Here is the pseudo code for the new return_to_handler assembly function. Note
that the ABI that applies here is different from what applies to the mcount
code. Since you are returning from a function (after the epilogue), you might
be able to skimp on things saved/restored (usually just registers used to pass
return values).
::

	#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	void return_to_handler(void)
	{
		/* save all state needed by the ABI (see paragraph above) */

		void (*original_return_point)(void) = ftrace_return_to_handler();

		/* restore all state needed by the ABI */

		/* this is usually either a return or a jump */
		original_return_point();
	}
	#endif


HAVE_FUNCTION_GRAPH_FP_TEST
---------------------------

An arch may pass in a unique value (frame pointer) to both the entering and
exiting of a function. On exit, the value is compared and if it does not
match, then it will panic the kernel. This is largely a sanity check for bad
code generation with gcc. If gcc for your port sanely updates the frame
pointer under different optimization levels, then ignore this option.

However, adding support for it isn't terribly difficult. In your assembly code
that calls prepare_ftrace_return(), pass the frame pointer as the 3rd argument.
Then in the C version of that function, do what the x86 port does and pass it
along to ftrace_push_return_trace() instead of a stub value of 0.

Similarly, when you call ftrace_return_to_handler(), pass it the frame pointer.
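
In terms of the earlier sketches, the change is confined to the two call
sites (shown with the older helper signatures this document is written
against)::

	/* entry side, in prepare_ftrace_return(): record the frame pointer
	 * instead of a stub value of 0 */
	ftrace_push_return_trace(old, self_addr, &trace.depth, frame_pointer);

	/* exit side, in return_to_handler: pass the same value back so the
	 * common code can check that entries and exits pair up */
	original_return_point = (void *)ftrace_return_to_handler(frame_pointer);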

HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
--------------------------------

An arch may pass in a pointer to the return address on the stack. This
prevents potential stack unwinding issues where the unwinder gets out of
sync with ret_stack and the wrong addresses are reported by
ftrace_graph_ret_addr().

Adding support for it is easy: just define the macro in asm/ftrace.h and
pass the return address pointer as the 'retp' argument to
ftrace_push_return_trace().
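
Sketching it with the names used earlier (note this uses the retp-aware
variant of ftrace_push_return_trace(), which grew the extra argument in
later kernels)::

	/* asm/ftrace.h */
	#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR

	/* in prepare_ftrace_return(): 'parent' already points at the stack
	 * slot holding the return address, so it doubles as 'retp' */
	ftrace_push_return_trace(old, self_addr, &trace.depth,
				 frame_pointer, parent);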

HAVE_SYSCALL_TRACEPOINTS
------------------------

You need very few things to get syscall tracing working on an arch.

- Support HAVE_ARCH_TRACEHOOK (see arch/Kconfig).
- Have a NR_syscalls variable in <asm/unistd.h> that provides the number
  of syscalls supported by the arch.
- Support the TIF_SYSCALL_TRACEPOINT thread flag.
- Call the trace_sys_enter() and trace_sys_exit() tracepoints in the ptrace
  syscall tracing path.
- If the system call table on this arch is more complicated than a simple array
  of addresses of the system calls, implement an arch_syscall_addr() function
  to return the address of a given system call.
- If the symbol names of the system calls do not match the function names on
  this arch, define ARCH_HAS_SYSCALL_MATCH_SYM_NAME in asm/ftrace.h and
  implement arch_syscall_match_sym_name with the appropriate logic to return
  true if the function name corresponds with the symbol name (see the sketch
  after this list).
- Tag this arch as HAVE_SYSCALL_TRACEPOINTS.
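
As an example of the symbol-name item above, an arch whose syscall symbols
carry an extra leading character (as the older powerpc64 ABI did with its
dot symbols) might put something like this in asm/ftrace.h -- the exact
comparison is of course up to your arch::

	#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME

	static inline bool
	arch_syscall_match_sym_name(const char *sym, const char *name)
	{
		/* skip the leading "." on the symbol name */
		return !strcmp(sym + 1, name);
	}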


HAVE_FTRACE_MCOUNT_RECORD
-------------------------

See scripts/recordmcount.pl for more info. Just fill in the arch-specific
details for how to locate the addresses of mcount call sites via objdump.
This option doesn't make much sense without also implementing dynamic ftrace.


HAVE_DYNAMIC_FTRACE
-------------------

You will first need HAVE_FTRACE_MCOUNT_RECORD and HAVE_FUNCTION_TRACER, so
scroll your reader back up if you got overeager.

Once those are out of the way, you will need to implement:

- asm/ftrace.h:

  - MCOUNT_ADDR
  - ftrace_call_adjust()
  - struct dyn_arch_ftrace{}

- asm code:

  - mcount() (new stub)
  - ftrace_caller()
  - ftrace_call()
  - ftrace_stub()

- C code:

  - ftrace_dyn_arch_init()
  - ftrace_make_nop()
  - ftrace_make_call()
  - ftrace_update_ftrace_func()

First you will need to fill out some arch details in your asm/ftrace.h.

Define MCOUNT_ADDR as the address of your mcount symbol, similar to::

	#define MCOUNT_ADDR ((unsigned long)mcount)

Since no one else will have a decl for that function, you will need to::

	extern void mcount(void);

You will also need the helper function ftrace_call_adjust(). Most people
will be able to stub it out like so::

	static inline unsigned long ftrace_call_adjust(unsigned long addr)
	{
		return addr;
	}
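
If the address recorded at build time is not the address of the instruction
you will actually be patching, this is the place to compensate. A purely
hypothetical example, assuming the recorded address sits 4 bytes before the
real call instruction::

	static inline unsigned long ftrace_call_adjust(unsigned long addr)
	{
		return addr + 4;	/* hypothetical fixed offset */
	}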

<details to be filled>

Lastly you will need the custom dyn_arch_ftrace structure. If you need
some extra state when runtime patching arbitrary call sites, this is the
place. For now though, create an empty struct::

	struct dyn_arch_ftrace {
		/* No extra data needed */
	};

With the header out of the way, we can fill out the assembly code. While we
did already create an mcount() function earlier, dynamic ftrace only wants a
stub function. This is because mcount() will only be used during boot
and then all references to it will be patched out, never to return. Instead,
the guts of the old mcount() will be used to create a new ftrace_caller()
function. Because the two are hard to merge, it will most likely be a lot
easier to have two separate definitions split up by #ifdefs. The same goes
for ftrace_stub(), as that will now be inlined in ftrace_caller().

Before we get any more confused, let's check out some pseudo code so you can
implement your own stuff in assembly::

	void mcount(void)
	{
		return;
	}

	void ftrace_caller(void)
	{
		/* save all state needed by the ABI (see paragraph above) */

		unsigned long frompc = ...;
		unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;

	ftrace_call:
		ftrace_stub(frompc, selfpc);

		/* restore all state needed by the ABI */

	ftrace_stub:
		return;
	}

This might look a little odd at first, but keep in mind that we will be runtime
patching multiple things. First, only functions that we actually want to trace
will be patched to call ftrace_caller(). Second, since we only have one tracer
active at a time, we will patch the ftrace_caller() function itself to call the
specific tracer in question. That is the point of the ftrace_call label.

With that in mind, let's move on to the C code that will actually be doing the
runtime patching. You'll need a little knowledge of your arch's opcodes in
order to make it through the next section.

Every arch has an init callback function. If you need to do something early on
to initialize some state, this is the time to do that. Otherwise, this simple
function below should be sufficient for most people::

	int __init ftrace_dyn_arch_init(void)
	{
		return 0;
	}

There are two functions that are used to do runtime patching of arbitrary
functions. The first is used to turn the mcount call site into a nop (which
is what helps us retain runtime performance when not tracing). The second is
used to turn the mcount call site into a call to an arbitrary location (but
typically that is ftrace_caller()). See the general function definitions in
linux/ftrace.h for the functions::

	ftrace_make_nop()
	ftrace_make_call()

The rec->ip value is the address of the mcount call site that was collected
by scripts/recordmcount.pl at build time.
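
Their prototypes are fixed by linux/ftrace.h; the bodies below are only a
skeletal sketch. The arch_compose_nop(), arch_compose_call() and
arch_patch_text() helpers are hypothetical stand-ins for your architecture's
instruction encoders and its safe text-patching routine::

	int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
			    unsigned long addr)
	{
		unsigned long ip = rec->ip;	/* the mcount call site */
		unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

		arch_compose_call(old, ip, addr);	/* expected old insn */
		arch_compose_nop(new);

		/* verify the old instruction, then patch in the nop */
		return arch_patch_text(ip, old, new);
	}

	int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
	{
		unsigned long ip = rec->ip;
		unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

		arch_compose_nop(old);
		arch_compose_call(new, ip, addr);	/* addr: e.g. ftrace_caller */

		return arch_patch_text(ip, old, new);
	}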

The last function is used to do runtime patching of the active tracer. This
will be modifying the assembly code at the location of the ftrace_call symbol
inside of the ftrace_caller() function. So you should have sufficient padding
at that location to support the new function calls you'll be inserting. Some
people will be using a "call" type instruction while others will be using a
"branch" type instruction. Specifically, the function is::

	ftrace_update_ftrace_func()
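
Reusing the hypothetical patching helpers from the sketch above, it might
look like::

	extern void ftrace_call(void);	/* the label inside ftrace_caller() */

	int ftrace_update_ftrace_func(ftrace_func_t func)
	{
		unsigned long ip = (unsigned long)&ftrace_call;
		unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

		/* whatever call is currently patched in at ftrace_call: ... */
		memcpy(old, (void *)ip, MCOUNT_INSN_SIZE);
		/* ... gets retargeted at the new tracer */
		arch_compose_call(new, ip, (unsigned long)func);

		return arch_patch_text(ip, old, new);
	}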


HAVE_DYNAMIC_FTRACE + HAVE_FUNCTION_GRAPH_TRACER
------------------------------------------------

The function grapher needs a few tweaks in order to work with dynamic ftrace.
Basically, you will need to:

- update:

  - ftrace_caller()
  - ftrace_graph_call()
  - ftrace_graph_caller()

- implement:

  - ftrace_enable_ftrace_graph_caller()
  - ftrace_disable_ftrace_graph_caller()

<details to be filled>

Quick notes:

- add a nop stub after the ftrace_call location named ftrace_graph_call;
  the stub needs to be large enough to support a call to ftrace_graph_caller()
- update ftrace_graph_caller() to work with being called by the new
  ftrace_caller() since some semantics may have changed
- ftrace_enable_ftrace_graph_caller() will runtime patch the
  ftrace_graph_call location with a call to ftrace_graph_caller()
- ftrace_disable_ftrace_graph_caller() will runtime patch the
  ftrace_graph_call location with nops (a sketch of this pair follows)
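
With the same hypothetical arch_compose_nop()/arch_compose_call()/
arch_patch_text() helpers as in the previous sketches, the enable/disable
pair might look like::

	extern void ftrace_graph_call(void);	/* asm label after ftrace_call */

	int ftrace_enable_ftrace_graph_caller(void)
	{
		unsigned long ip = (unsigned long)&ftrace_graph_call;
		unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

		/* replace the nops at the ftrace_graph_call location with a
		 * call to ftrace_graph_caller() */
		arch_compose_nop(old);
		arch_compose_call(new, ip, (unsigned long)&ftrace_graph_caller);

		return arch_patch_text(ip, old, new);
	}

	int ftrace_disable_ftrace_graph_caller(void)
	{
		unsigned long ip = (unsigned long)&ftrace_graph_call;
		unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

		/* put the nops back */
		arch_compose_call(old, ip, (unsigned long)&ftrace_graph_caller);
		arch_compose_nop(new);

		return arch_patch_text(ip, old, new);
	}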