libperf-sampling(7)
===================

NAME
----
libperf-sampling - sampling interface


DESCRIPTION
-----------
The sampling interface provides an API to set up sampling of specific perf
events and to read back the recorded samples.

The following walkthrough explains sampling using the `sampling.c` example.

It is by no means a complete guide to sampling, but it shows the basic libperf
API for sampling.

The `sampling.c` example comes with the libperf package and can be compiled
and run like:

[source,bash]
--
$ gcc -o sampling sampling.c -lperf
$ sudo ./sampling
cpu 0, pid 0, tid 0, ip ffffffffad06c4e6, period 1
cpu 0, pid 4465, tid 4469, ip ffffffffad118748, period 18322959
cpu 0, pid 0, tid 0, ip ffffffffad115722, period 33544846
cpu 0, pid 4465, tid 4470, ip 7f84fe0cdad6, period 23687474
cpu 0, pid 0, tid 0, ip ffffffffad9e0349, period 34255790
cpu 0, pid 4465, tid 4469, ip ffffffffad136581, period 38664069
cpu 0, pid 0, tid 0, ip ffffffffad9e55e2, period 21922384
cpu 0, pid 4465, tid 4470, ip 7f84fe0ebebf, period 17655175
...
--

It requires root access, because it opens a system-wide hardware cycles event.

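As an aside, on many systems the root requirement can be relaxed through the
`kernel.perf_event_paranoid` sysctl; the exact semantics of its values vary by
kernel version and distribution, so treat the sketch below as illustrative:

```shell
# Inspect the current setting; -1 or 0 typically permits system-wide
# sampling without root (meaning of the values varies across kernels).
cat /proc/sys/kernel/perf_event_paranoid

# Temporarily relax it (run as root); illustrative only:
# sysctl -w kernel.perf_event_paranoid=0
```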
The `sampling.c` example profiles/samples all CPUs with the hardware cycles
event; in a nutshell it:

- creates an event
- adds it to the events list
- opens and enables the event through the events list
- sleeps for 3 seconds
- disables the event
- reads and displays the recorded samples
- destroys the events list

The first thing you need to do before using libperf is to call the init
function:

[source,c]
--
  12 static int libperf_print(enum libperf_print_level level,
  13                          const char *fmt, va_list ap)
  14 {
  15         return vfprintf(stderr, fmt, ap);
  16 }

  23 int main(int argc, char **argv)
  24 {
...
  40         libperf_init(libperf_print);
--

This sets up the library and registers a callback function for debug output
from the library.

The `libperf_print` callback will receive any message together with its debug
level, defined as:

[source,c]
--
enum libperf_print_level {
        LIBPERF_ERR,
        LIBPERF_WARN,
        LIBPERF_INFO,
        LIBPERF_DEBUG,
        LIBPERF_DEBUG2,
        LIBPERF_DEBUG3,
};
--
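
A callback can use the level to filter messages. The sketch below is a
hypothetical variant (the enum is re-declared locally so it builds
stand-alone; in a real program it comes from libperf's headers) that forwards
only errors and warnings:

```c
#include <stdarg.h>
#include <stdio.h>

/* Re-declared locally for a stand-alone build; normally provided
 * by libperf's headers. */
enum libperf_print_level {
        LIBPERF_ERR,
        LIBPERF_WARN,
        LIBPERF_INFO,
        LIBPERF_DEBUG,
        LIBPERF_DEBUG2,
        LIBPERF_DEBUG3,
};

/* Hypothetical quiet variant: forward only errors and warnings,
 * silently drop info/debug messages. */
static int libperf_print_quiet(enum libperf_print_level level,
                               const char *fmt, va_list ap)
{
        if (level > LIBPERF_WARN)
                return 0;
        return vfprintf(stderr, fmt, ap);
}

/* Small variadic helper so the callback can be exercised directly. */
static int emit(enum libperf_print_level level, const char *fmt, ...)
{
        va_list ap;
        int ret;

        va_start(ap, fmt);
        ret = libperf_print_quiet(level, fmt, ap);
        va_end(ap);
        return ret;
}
```

Such a callback would then be passed to `libperf_init()` in place of
`libperf_print`.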

Once the setup is complete, we start by defining the cycles event using
`struct perf_event_attr`:

[source,c]
--
  29         struct perf_event_attr attr = {
  30                 .type        = PERF_TYPE_HARDWARE,
  31                 .config      = PERF_COUNT_HW_CPU_CYCLES,
  32                 .disabled    = 1,
  33                 .freq        = 1,
  34                 .sample_freq = 10,
  35                 .sample_type = PERF_SAMPLE_IP|PERF_SAMPLE_TID|PERF_SAMPLE_CPU|PERF_SAMPLE_PERIOD,
  36         };
--
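
With `freq = 1` and `sample_freq = 10`, the kernel adjusts the period to aim
for roughly 10 samples per second. The perf_event API also supports
fixed-period sampling: clearing `.freq` and setting `.sample_period` requests
one sample every N events instead. A sketch (the field values here are
illustrative, not taken from `sampling.c`):

```c
struct perf_event_attr attr = {
        .type          = PERF_TYPE_HARDWARE,
        .config        = PERF_COUNT_HW_CPU_CYCLES,
        .disabled      = 1,
        .sample_period = 100000,        /* one sample every 100000 cycles */
        .sample_type   = PERF_SAMPLE_IP|PERF_SAMPLE_TID|PERF_SAMPLE_CPU|PERF_SAMPLE_PERIOD,
};
```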

The next step is to prepare a CPUs map.

In this case we will monitor all the available CPUs:

[source,c]
--
  42         cpus = perf_cpu_map__new(NULL);
  43         if (!cpus) {
  44                 fprintf(stderr, "failed to create cpus\n");
  45                 return -1;
  46         }
--

Now we create libperf's events list, which will serve as a container for the
cycles event:

[source,c]
--
  48         evlist = perf_evlist__new();
  49         if (!evlist) {
  50                 fprintf(stderr, "failed to create evlist\n");
  51                 goto out_cpus;
  52         }
--

We create libperf's event for the cycles attribute we defined earlier and add
it to the list:

[source,c]
--
  54         evsel = perf_evsel__new(&attr);
  55         if (!evsel) {
  56                 fprintf(stderr, "failed to create cycles\n");
  57                 goto out_cpus;
  58         }
  59
  60         perf_evlist__add(evlist, evsel);
--

We configure the events list with the CPUs map and open the event:

[source,c]
--
  62         perf_evlist__set_maps(evlist, cpus, NULL);
  63
  64         err = perf_evlist__open(evlist);
  65         if (err) {
  66                 fprintf(stderr, "failed to open evlist\n");
  67                 goto out_evlist;
  68         }
--

Once the events list is open, we can create the memory maps, AKA perf ring
buffers:

[source,c]
--
  70         err = perf_evlist__mmap(evlist, 4);
  71         if (err) {
  72                 fprintf(stderr, "failed to mmap evlist\n");
  73                 goto out_evlist;
  74         }
--

The event is created as disabled (note the `disabled = 1` assignment above),
so we need to enable the events list explicitly.

From this moment on, the cycles event is sampling.

We will sleep for 3 seconds while the ring buffers get data from all CPUs,
then we disable the events list.

[source,c]
--
  76         perf_evlist__enable(evlist);
  77         sleep(3);
  78         perf_evlist__disable(evlist);
--

The following code walks through the ring buffers and reads the stored
events/samples:

[source,c]
--
  80         perf_evlist__for_each_mmap(evlist, map, false) {
  81                 if (perf_mmap__read_init(map) < 0)
  82                         continue;
  83
  84                 while ((event = perf_mmap__read_event(map)) != NULL) {

                            /* process event */

 108                        perf_mmap__consume(map);
 109                }
 110                perf_mmap__read_done(map);
 111        }
--

Each sample needs to be parsed:

[source,c]
--
  85                         int cpu, pid, tid;
  86                         __u64 ip, period, *array;
  87                         union u64_swap u;
  88
  89                         array = event->sample.array;
  90
  91                         ip = *array;
  92                         array++;
  93
  94                         u.val64 = *array;
  95                         pid = u.val32[0];
  96                         tid = u.val32[1];
  97                         array++;
  98
  99                         u.val64 = *array;
 100                         cpu = u.val32[0];
 101                         array++;
 102
 103                         period = *array;
 104
 105                         fprintf(stdout, "cpu %3d, pid %6d, tid %6d, ip %20llx, period %20llu\n",
 106                                 cpu, pid, tid, ip, period);
--
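
The order of the fields in `event->sample.array` follows the ascending bit
order of the `PERF_SAMPLE_*` flags set in `sample_type`: IP, then TID
(pid/tid packed into one u64), then CPU, then PERIOD. The stand-alone sketch
below replays the same walk over a synthetic record, using plain `uint64_t`
instead of `__u64` and `memcpy` instead of the `u64_swap` union so that it
builds without kernel headers:

```c
#include <stdint.h>
#include <string.h>

struct sample {
        uint64_t ip;
        int32_t  pid, tid, cpu;
        uint64_t period;
};

/* Walk a sample body laid out as IP | TID (pid,tid) | CPU | PERIOD,
 * matching the sample_type used in sampling.c.  Each PERF_SAMPLE_*
 * entry occupies one u64 slot; TID and CPU pack two u32 halves. */
static void parse_sample(const uint64_t *array, struct sample *s)
{
        uint32_t halves[2];

        s->ip = *array;                         /* PERF_SAMPLE_IP */
        array++;

        memcpy(halves, array, sizeof(halves));  /* PERF_SAMPLE_TID */
        s->pid = (int32_t)halves[0];
        s->tid = (int32_t)halves[1];
        array++;

        memcpy(halves, array, sizeof(halves));  /* PERF_SAMPLE_CPU */
        s->cpu = (int32_t)halves[0];
        array++;

        s->period = *array;                     /* PERF_SAMPLE_PERIOD */
}
```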

And finally the cleanup.

We delete the whole events list (which also closes the event) and release the
CPUs map:

[source,c]
--
 113 out_evlist:
 114         perf_evlist__delete(evlist);
 115 out_cpus:
 116         perf_cpu_map__put(cpus);
 117         return err;
 118 }
--

REPORTING BUGS
--------------
Report bugs to <linux-perf-users@vger.kernel.org>.

LICENSE
-------
libperf is Free Software licensed under the GNU LGPL 2.1

RESOURCES
---------
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

SEE ALSO
--------
libperf(3), libperf-counting(7)