Commit 039ca4e74a1cf60bd7487324a564ecf5c981f254
Committed by: Frederic Weisbecker
1 parent: 30dbb20e68
Exists in: master and in 39 other branches
tracing: Remove kmemtrace ftrace plugin
We have been resisting new ftrace plugins and removing existing ones,
and kmemtrace has been superseded by kmem trace events and perf-kmem,
so we remove it.

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
[ remove kmemtrace from the makefile, handle slob too ]
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
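For context, the replacement path the message refers to works roughly as follows. This is a sketch, assuming debugfs is mounted at /sys/kernel/debug and a perf tool built with kmem support; exact event names can vary by kernel version:

	$ echo 1 > /sys/kernel/debug/tracing/events/kmem/enable    # enable the kmem trace events
	$ cat /sys/kernel/debug/tracing/trace                      # kmalloc/kfree etc. as text records
	$ echo 0 > /sys/kernel/debug/tracing/events/kmem/enable
	$ perf kmem record -- sleep 10                             # or let perf-kmem drive the events
	$ perf kmem stat --caller                                  # per-call-site allocation statistics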
Showing 15 changed files with 7 additions and 833 deletions
- Documentation/ABI/testing/debugfs-kmemtrace
- Documentation/trace/kmemtrace.txt
- MAINTAINERS
- include/linux/kmemtrace.h
- include/linux/slab_def.h
- include/linux/slub_def.h
- init/main.c
- kernel/trace/Kconfig
- kernel/trace/Makefile
- kernel/trace/kmemtrace.c
- kernel/trace/trace.h
- kernel/trace/trace_entries.h
- mm/slab.c
- mm/slob.c
- mm/slub.c
Documentation/ABI/testing/debugfs-kmemtrace
-What:		/sys/kernel/debug/kmemtrace/
-Date:		July 2008
-Contact:	Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
-Description:
-
-In kmemtrace-enabled kernels, the following files are created:
-
-/sys/kernel/debug/kmemtrace/
-	cpu<n>		(0400)	Per-CPU tracing data, see below. (binary)
-	total_overruns	(0400)	Total number of bytes which were dropped from
-				cpu<n> files because of full buffer condition,
-				non-binary. (text)
-	abi_version	(0400)	Kernel's kmemtrace ABI version. (text)
-
-Each per-CPU file should be read according to the relay interface. That is,
-the reader should set affinity to that specific CPU and, as currently done by
-the userspace application (though there are other methods), use poll() with
-an infinite timeout before every read(). Otherwise, erroneous data may be
-read. The binary data has the following _core_ format:
-
-	Event ID	(1 byte)	Unsigned integer, one of:
-		0 - represents an allocation (KMEMTRACE_EVENT_ALLOC)
-		1 - represents a freeing of previously allocated memory
-		    (KMEMTRACE_EVENT_FREE)
-	Type ID		(1 byte)	Unsigned integer, one of:
-		0 - this is a kmalloc() / kfree()
-		1 - this is a kmem_cache_alloc() / kmem_cache_free()
-		2 - this is a __get_free_pages() et al.
-	Event size	(2 bytes)	Unsigned integer representing the
-					size of this event. Used to extend
-					kmemtrace. Discard the bytes you
-					don't know about.
-	Sequence number	(4 bytes)	Signed integer used to reorder data
-					logged on SMP machines. Wraparound
-					must be taken into account, although
-					it is unlikely.
-	Caller address	(8 bytes)	Return address to the caller.
-	Pointer to mem	(8 bytes)	Pointer to target memory area. Can be
-					NULL, but not all such calls might be
-					recorded.
-
-In case of KMEMTRACE_EVENT_ALLOC events, the next fields follow:
-
-	Requested bytes	(8 bytes)	Total number of requested bytes,
-					unsigned, must not be zero.
-	Allocated bytes	(8 bytes)	Total number of actually allocated
-					bytes, unsigned, must not be lower
-					than requested bytes.
-	Requested flags	(4 bytes)	GFP flags supplied by the caller.
-	Target CPU	(4 bytes)	Signed integer, valid for event id 1.
-					If equal to -1, target CPU is the same
-					as origin CPU, but the reverse might
-					not be true.
-
-The data is made available in the same endianness the machine has.
-
-Other event ids and type ids may be defined and added. Other fields may be
-added by increasing event size, but see below for details.
-Every modification to the ABI, including new id definitions, are followed
-by bumping the ABI version by one.
-
-Adding new data to the packet (features) is done at the end of the mandatory
-data:
-	Feature size	(2 byte)
-	Feature ID	(1 byte)
-	Feature data	(Feature size - 3 bytes)
-
-
-Users:
-	kmemtrace-user - git://repo.or.cz/kmemtrace-user.git
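For readers parsing the raw cpu<n> files, the core layout described in the deleted ABI text maps to roughly the following C declarations. This is an illustrative sketch reconstructed from the field list above only; the struct and field names are hypothetical (not taken from the kernel or kmemtrace-user sources), and the data is in the tracing machine's native endianness.

	#include <stdint.h>

	/* Core event, common to ALLOC and FREE (names hypothetical). */
	struct kmemtrace_core_event {
		uint8_t  event_id;	/* 0 = KMEMTRACE_EVENT_ALLOC, 1 = KMEMTRACE_EVENT_FREE */
		uint8_t  type_id;	/* 0 = kmalloc/kfree, 1 = kmem_cache_*, 2 = __get_free_pages() et al. */
		uint16_t event_size;	/* total event size; skip trailing bytes you don't recognize */
		int32_t  seq;		/* signed sequence number for SMP reordering; may wrap */
		uint64_t call_site;	/* return address to the caller */
		uint64_t ptr;		/* pointer to target memory area; may be 0 */
	} __attribute__((packed));

	/* Extra fields following the core event for KMEMTRACE_EVENT_ALLOC. */
	struct kmemtrace_alloc_fields {
		uint64_t bytes_req;	/* requested bytes, never zero */
		uint64_t bytes_alloc;	/* actually allocated bytes, >= bytes_req */
		uint32_t gfp_flags;	/* GFP flags supplied by the caller */
		int32_t  target_cpu;	/* per the text above: -1 means same as origin CPU */
	} __attribute__((packed));

A conforming reader would consume event_size bytes per record, using the surplus beyond the known structs to skip feature extensions it does not understand.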
Documentation/trace/kmemtrace.txt
-			kmemtrace - Kernel Memory Tracer
-
-			  by Eduard - Gabriel Munteanu
-			     <eduard.munteanu@linux360.ro>
-
-I. Introduction
-===============
-
-kmemtrace helps kernel developers figure out two things:
-1) how different allocators (SLAB, SLUB etc.) perform
-2) how kernel code allocates memory and how much
-
-To do this, we trace every allocation and export information to the userspace
-through the relay interface. We export things such as the number of requested
-bytes, the number of bytes actually allocated (i.e. including internal
-fragmentation), whether this is a slab allocation or a plain kmalloc() and so
-on.
-
-The actual analysis is performed by a userspace tool (see section III for
-details on where to get it from). It logs the data exported by the kernel,
-processes it and (as of writing this) can provide the following information:
-- the total amount of memory allocated and fragmentation per call-site
-- the amount of memory allocated and fragmentation per allocation
-- total memory allocated and fragmentation in the collected dataset
-- number of cross-CPU allocation and frees (makes sense in NUMA environments)
-
-Moreover, it can potentially find inconsistent and erroneous behavior in
-kernel code, such as using slab free functions on kmalloc'ed memory or
-allocating less memory than requested (but not truly failed allocations).
-
-kmemtrace also makes provisions for tracing on some arch and analysing the
-data on another.
-
-II. Design and goals
-====================
-
-kmemtrace was designed to handle rather large amounts of data. Thus, it uses
-the relay interface to export whatever is logged to userspace, which then
-stores it. Analysis and reporting is done asynchronously, that is, after the
-data is collected and stored. By design, it allows one to log and analyse
-on different machines and different arches.
-
-As of writing this, the ABI is not considered stable, though it might not
-change much. However, no guarantees are made about compatibility yet. When
-deemed stable, the ABI should still allow easy extension while maintaining
-backward compatibility. This is described further in Documentation/ABI.
-
-Summary of design goals:
-	- allow logging and analysis to be done across different machines
-	- be fast and anticipate usage in high-load environments (*)
-	- be reasonably extensible
-	- make it possible for GNU/Linux distributions to have kmemtrace
-	  included in their repositories
-
-(*) - one of the reasons Pekka Enberg's original userspace data analysis
-    tool's code was rewritten from Perl to C (although this is more than a
-    simple conversion)
-
-
-III. Quick usage guide
-======================
-
-1) Get a kernel that supports kmemtrace and build it accordingly (i.e. enable
-CONFIG_KMEMTRACE).
-
-2) Get the userspace tool and build it:
-$ git clone git://repo.or.cz/kmemtrace-user.git		# current repository
-$ cd kmemtrace-user/
-$ ./autogen.sh
-$ ./configure
-$ make
-
-3) Boot the kmemtrace-enabled kernel if you haven't, preferably in the
-'single' runlevel (so that relay buffers don't fill up easily), and run
-kmemtrace:
-# '$' does not mean user, but root here.
-$ mount -t debugfs none /sys/kernel/debug
-$ mount -t proc none /proc
-$ cd path/to/kmemtrace-user/
-$ ./kmemtraced
-Wait a bit, then stop it with CTRL+C.
-$ cat /sys/kernel/debug/kmemtrace/total_overruns	# Check if we didn't
-							# overrun, should
-							# be zero.
-$ (Optionally) [Run kmemtrace_check separately on each cpu[0-9]*.out file to
-	check its correctness]
-$ ./kmemtrace-report
-
-Now you should have a nice and short summary of how the allocator performs.
-
-IV. FAQ and known issues
-========================
-
-Q: 'cat /sys/kernel/debug/kmemtrace/total_overruns' is non-zero, how do I fix
-this? Should I worry?
-A: If it's non-zero, this affects kmemtrace's accuracy, depending on how
-large the number is. You can fix it by supplying a higher
-'kmemtrace.subbufs=N' kernel parameter.
----
-
-Q: kmemtrace_check reports errors, how do I fix this? Should I worry?
-A: This is a bug and should be reported. It can occur for a variety of
-reasons:
-	- possible bugs in relay code
-	- possible misuse of relay by kmemtrace
-	- timestamps being collected unorderly
-Or you may fix it yourself and send us a patch.
----
-
-Q: kmemtrace_report shows many errors, how do I fix this? Should I worry?
-A: This is a known issue and I'm working on it. These might be true errors
-in kernel code, which may have inconsistent behavior (e.g. allocating memory
-with kmem_cache_alloc() and freeing it with kfree()). Pekka Enberg pointed
-out this behavior may work with SLAB, but may fail with other allocators.
-
-It may also be due to lack of tracing in some unusual allocator functions.
-
-We don't want bug reports regarding this issue yet.
----
-
-V. See also
-===========
-
-Documentation/kernel-parameters.txt
-Documentation/ABI/testing/debugfs-kmemtrace
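The reading discipline the deleted documents prescribe (pin the reader to the CPU whose buffer it reads, then poll() with an infinite timeout before every read()) looks roughly like this as a standalone reader. A minimal sketch with error handling trimmed; the kmemtraced daemon is the real implementation and may differ:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <poll.h>
	#include <sched.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		cpu_set_t set;
		struct pollfd pfd;
		char buf[4096];
		ssize_t n;
		int fd;

		/* Pin ourselves to CPU 0 before reading cpu0's relay buffer. */
		CPU_ZERO(&set);
		CPU_SET(0, &set);
		if (sched_setaffinity(0, sizeof(set), &set) < 0)
			return 1;

		fd = open("/sys/kernel/debug/kmemtrace/cpu0", O_RDONLY);
		if (fd < 0)
			return 1;

		pfd.fd = fd;
		pfd.events = POLLIN;
		/* poll() with an infinite timeout before every read(), per the ABI doc. */
		while (poll(&pfd, 1, -1) > 0) {
			n = read(fd, buf, sizeof(buf));
			if (n <= 0)
				break;
			fwrite(buf, 1, n, stdout);	/* dump raw binary events */
		}
		close(fd);
		return 0;
	}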
MAINTAINERS
@@ -3361,13 +3361,6 @@
 F:	mm/kmemleak.c
 F:	mm/kmemleak-test.c
 
-KMEMTRACE
-M:	Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
-S:	Maintained
-F:	Documentation/trace/kmemtrace.txt
-F:	include/linux/kmemtrace.h
-F:	kernel/trace/kmemtrace.c
-
 KPROBES
 M:	Ananth N Mavinakayanahalli <ananth@in.ibm.com>
 M:	Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
include/linux/kmemtrace.h
-/*
- * Copyright (C) 2008 Eduard - Gabriel Munteanu
- *
- * This file is released under GPL version 2.
- */
-
-#ifndef _LINUX_KMEMTRACE_H
-#define _LINUX_KMEMTRACE_H
-
-#ifdef __KERNEL__
-
-#include <trace/events/kmem.h>
-
-#ifdef CONFIG_KMEMTRACE
-extern void kmemtrace_init(void);
-#else
-static inline void kmemtrace_init(void)
-{
-}
-#endif
-
-#endif /* __KERNEL__ */
-
-#endif /* _LINUX_KMEMTRACE_H */
include/linux/slab_def.h
@@ -14,7 +14,8 @@
 #include <asm/page.h>		/* kmalloc_sizes.h needs PAGE_SIZE */
 #include <asm/cache.h>		/* kmalloc_sizes.h needs L1_CACHE_BYTES */
 #include <linux/compiler.h>
-#include <linux/kmemtrace.h>
+
+#include <trace/events/kmem.h>
 
 #ifndef ARCH_KMALLOC_MINALIGN
 /*
include/linux/slub_def.h
@@ -10,8 +10,9 @@
 #include <linux/gfp.h>
 #include <linux/workqueue.h>
 #include <linux/kobject.h>
-#include <linux/kmemtrace.h>
 #include <linux/kmemleak.h>
+
+#include <trace/events/kmem.h>
 
 enum stat_item {
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
init/main.c
@@ -66,7 +66,6 @@
 #include <linux/ftrace.h>
 #include <linux/async.h>
 #include <linux/kmemcheck.h>
-#include <linux/kmemtrace.h>
 #include <linux/sfi.h>
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
@@ -652,7 +651,6 @@
 #endif
 	page_cgroup_init();
 	enable_debug_pagealloc();
-	kmemtrace_init();
 	kmemleak_init();
 	debug_objects_mem_init();
 	idr_init_cache();
kernel/trace/Kconfig
@@ -354,26 +354,6 @@
 
 	  Say N if unsure.
 
-config KMEMTRACE
-	bool "Trace SLAB allocations"
-	select GENERIC_TRACER
-	help
-	  kmemtrace provides tracing for slab allocator functions, such as
-	  kmalloc, kfree, kmem_cache_alloc, kmem_cache_free, etc. Collected
-	  data is then fed to the userspace application in order to analyse
-	  allocation hotspots, internal fragmentation and so on, making it
-	  possible to see how well an allocator performs, as well as debug
-	  and profile kernel code.
-
-	  This requires an userspace application to use. See
-	  Documentation/trace/kmemtrace.txt for more information.
-
-	  Saying Y will make the kernel somewhat larger and slower. However,
-	  if you disable kmemtrace at run-time or boot-time, the performance
-	  impact is minimal (depending on the arch the kernel is built for).
-
-	  If unsure, say N.
-
 config WORKQUEUE_TRACER
 	bool "Trace workqueues"
 	select GENERIC_TRACER
kernel/trace/Makefile
@@ -40,7 +40,6 @@
 obj-$(CONFIG_MMIOTRACE) += trace_mmiotrace.o
 obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += trace_functions_graph.o
 obj-$(CONFIG_TRACE_BRANCH_PROFILING) += trace_branch.o
-obj-$(CONFIG_KMEMTRACE) += kmemtrace.o
 obj-$(CONFIG_WORKQUEUE_TRACER) += trace_workqueue.o
 obj-$(CONFIG_BLK_DEV_IO_TRACE) += blktrace.o
 ifeq ($(CONFIG_BLOCK),y)
kernel/trace/kmemtrace.c
-/*
- * Memory allocator tracing
- *
- * Copyright (C) 2008 Eduard - Gabriel Munteanu
- * Copyright (C) 2008 Pekka Enberg <penberg@cs.helsinki.fi>
- * Copyright (C) 2008 Frederic Weisbecker <fweisbec@gmail.com>
- */
-
-#include <linux/tracepoint.h>
-#include <linux/seq_file.h>
-#include <linux/debugfs.h>
-#include <linux/dcache.h>
-#include <linux/fs.h>
-
-#include <linux/kmemtrace.h>
-
-#include "trace_output.h"
-#include "trace.h"
-
-/* Select an alternative, minimalistic output than the original one */
-#define TRACE_KMEM_OPT_MINIMAL	0x1
-
-static struct tracer_opt kmem_opts[] = {
-	/* Default disable the minimalistic output */
-	{ TRACER_OPT(kmem_minimalistic, TRACE_KMEM_OPT_MINIMAL) },
-	{ }
-};
-
-static struct tracer_flags kmem_tracer_flags = {
-	.val = 0,
-	.opts = kmem_opts
-};
-
-static struct trace_array *kmemtrace_array;
-
-/* Trace allocations */
-static inline void kmemtrace_alloc(enum kmemtrace_type_id type_id,
-				   unsigned long call_site,
-				   const void *ptr,
-				   size_t bytes_req,
-				   size_t bytes_alloc,
-				   gfp_t gfp_flags,
-				   int node)
-{
-	struct ftrace_event_call *call = &event_kmem_alloc;
-	struct trace_array *tr = kmemtrace_array;
-	struct kmemtrace_alloc_entry *entry;
-	struct ring_buffer_event *event;
-
-	event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry));
-	if (!event)
-		return;
-
-	entry = ring_buffer_event_data(event);
-	tracing_generic_entry_update(&entry->ent, 0, 0);
-
-	entry->ent.type = TRACE_KMEM_ALLOC;
-	entry->type_id = type_id;
-	entry->call_site = call_site;
-	entry->ptr = ptr;
-	entry->bytes_req = bytes_req;
-	entry->bytes_alloc = bytes_alloc;
-	entry->gfp_flags = gfp_flags;
-	entry->node = node;
-
-	if (!filter_check_discard(call, entry, tr->buffer, event))
-		ring_buffer_unlock_commit(tr->buffer, event);
-
-	trace_wake_up();
-}
-
-static inline void kmemtrace_free(enum kmemtrace_type_id type_id,
-				  unsigned long call_site,
-				  const void *ptr)
-{
-	struct ftrace_event_call *call = &event_kmem_free;
-	struct trace_array *tr = kmemtrace_array;
-	struct kmemtrace_free_entry *entry;
-	struct ring_buffer_event *event;
-
-	event = ring_buffer_lock_reserve(tr->buffer, sizeof(*entry));
-	if (!event)
-		return;
-	entry = ring_buffer_event_data(event);
-	tracing_generic_entry_update(&entry->ent, 0, 0);
-
-	entry->ent.type = TRACE_KMEM_FREE;
-	entry->type_id = type_id;
-	entry->call_site = call_site;
-	entry->ptr = ptr;
-
-	if (!filter_check_discard(call, entry, tr->buffer, event))
-		ring_buffer_unlock_commit(tr->buffer, event);
-
-	trace_wake_up();
-}
-
-static void kmemtrace_kmalloc(void *ignore,
-			      unsigned long call_site,
-			      const void *ptr,
-			      size_t bytes_req,
-			      size_t bytes_alloc,
-			      gfp_t gfp_flags)
-{
-	kmemtrace_alloc(KMEMTRACE_TYPE_KMALLOC, call_site, ptr,
-			bytes_req, bytes_alloc, gfp_flags, -1);
-}
-
-static void kmemtrace_kmem_cache_alloc(void *ignore,
-				       unsigned long call_site,
-				       const void *ptr,
-				       size_t bytes_req,
-				       size_t bytes_alloc,
-				       gfp_t gfp_flags)
-{
-	kmemtrace_alloc(KMEMTRACE_TYPE_CACHE, call_site, ptr,
-			bytes_req, bytes_alloc, gfp_flags, -1);
-}
-
-static void kmemtrace_kmalloc_node(void *ignore,
-				   unsigned long call_site,
-				   const void *ptr,
-				   size_t bytes_req,
-				   size_t bytes_alloc,
-				   gfp_t gfp_flags,
-				   int node)
-{
-	kmemtrace_alloc(KMEMTRACE_TYPE_KMALLOC, call_site, ptr,
-			bytes_req, bytes_alloc, gfp_flags, node);
-}
-
-static void kmemtrace_kmem_cache_alloc_node(void *ignore,
-					    unsigned long call_site,
-					    const void *ptr,
-					    size_t bytes_req,
-					    size_t bytes_alloc,
-					    gfp_t gfp_flags,
-					    int node)
-{
-	kmemtrace_alloc(KMEMTRACE_TYPE_CACHE, call_site, ptr,
-			bytes_req, bytes_alloc, gfp_flags, node);
-}
-
-static void
-kmemtrace_kfree(void *ignore, unsigned long call_site, const void *ptr)
-{
-	kmemtrace_free(KMEMTRACE_TYPE_KMALLOC, call_site, ptr);
-}
-
-static void kmemtrace_kmem_cache_free(void *ignore,
-				      unsigned long call_site, const void *ptr)
-{
-	kmemtrace_free(KMEMTRACE_TYPE_CACHE, call_site, ptr);
-}
-
-static int kmemtrace_start_probes(void)
-{
-	int err;
-
-	err = register_trace_kmalloc(kmemtrace_kmalloc, NULL);
-	if (err)
-		return err;
-	err = register_trace_kmem_cache_alloc(kmemtrace_kmem_cache_alloc, NULL);
-	if (err)
-		return err;
-	err = register_trace_kmalloc_node(kmemtrace_kmalloc_node, NULL);
-	if (err)
-		return err;
-	err = register_trace_kmem_cache_alloc_node(kmemtrace_kmem_cache_alloc_node, NULL);
-	if (err)
-		return err;
-	err = register_trace_kfree(kmemtrace_kfree, NULL);
-	if (err)
-		return err;
-	err = register_trace_kmem_cache_free(kmemtrace_kmem_cache_free, NULL);
-
-	return err;
-}
-
-static void kmemtrace_stop_probes(void)
-{
-	unregister_trace_kmalloc(kmemtrace_kmalloc, NULL);
-	unregister_trace_kmem_cache_alloc(kmemtrace_kmem_cache_alloc, NULL);
-	unregister_trace_kmalloc_node(kmemtrace_kmalloc_node, NULL);
-	unregister_trace_kmem_cache_alloc_node(kmemtrace_kmem_cache_alloc_node, NULL);
-	unregister_trace_kfree(kmemtrace_kfree, NULL);
-	unregister_trace_kmem_cache_free(kmemtrace_kmem_cache_free, NULL);
-}
-
-static int kmem_trace_init(struct trace_array *tr)
-{
-	kmemtrace_array = tr;
-
-	tracing_reset_online_cpus(tr);
-
-	kmemtrace_start_probes();
-
-	return 0;
-}
-
-static void kmem_trace_reset(struct trace_array *tr)
-{
-	kmemtrace_stop_probes();
-}
-
-static void kmemtrace_headers(struct seq_file *s)
-{
-	/* Don't need headers for the original kmemtrace output */
-	if (!(kmem_tracer_flags.val & TRACE_KMEM_OPT_MINIMAL))
-		return;
-
-	seq_printf(s, "#\n");
-	seq_printf(s, "# ALLOC TYPE REQ GIVEN FLAGS "
-		      " POINTER NODE CALLER\n");
-	seq_printf(s, "# FREE | | | | "
-		      " | | | |\n");
-	seq_printf(s, "# |\n\n");
-}
-
-/*
- * The following functions give the original output from kmemtrace,
- * plus the origin CPU, since reordering occurs in-kernel now.
- */
-
-#define KMEMTRACE_USER_ALLOC	0
-#define KMEMTRACE_USER_FREE	1
-
-struct kmemtrace_user_event {
-	u8		event_id;
-	u8		type_id;
-	u16		event_size;
-	u32		cpu;
-	u64		timestamp;
-	unsigned long	call_site;
-	unsigned long	ptr;
-};
-
-struct kmemtrace_user_event_alloc {
-	size_t		bytes_req;
-	size_t		bytes_alloc;
-	unsigned	gfp_flags;
-	int		node;
-};
-
-static enum print_line_t
-kmemtrace_print_alloc(struct trace_iterator *iter, int flags,
-		      struct trace_event *event)
-{
-	struct trace_seq *s = &iter->seq;
-	struct kmemtrace_alloc_entry *entry;
-	int ret;
-
-	trace_assign_type(entry, iter->ent);
-
-	ret = trace_seq_printf(s, "type_id %d call_site %pF ptr %lu "
-	    "bytes_req %lu bytes_alloc %lu gfp_flags %lu node %d\n",
-	    entry->type_id, (void *)entry->call_site, (unsigned long)entry->ptr,
-	    (unsigned long)entry->bytes_req, (unsigned long)entry->bytes_alloc,
-	    (unsigned long)entry->gfp_flags, entry->node);
-
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-	return TRACE_TYPE_HANDLED;
-}
-
-static enum print_line_t
-kmemtrace_print_free(struct trace_iterator *iter, int flags,
-		     struct trace_event *event)
-{
-	struct trace_seq *s = &iter->seq;
-	struct kmemtrace_free_entry *entry;
-	int ret;
-
-	trace_assign_type(entry, iter->ent);
-
-	ret = trace_seq_printf(s, "type_id %d call_site %pF ptr %lu\n",
-			       entry->type_id, (void *)entry->call_site,
-			       (unsigned long)entry->ptr);
-
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-	return TRACE_TYPE_HANDLED;
-}
-
-static enum print_line_t
-kmemtrace_print_alloc_user(struct trace_iterator *iter, int flags,
-			   struct trace_event *event)
-{
-	struct trace_seq *s = &iter->seq;
-	struct kmemtrace_alloc_entry *entry;
-	struct kmemtrace_user_event *ev;
-	struct kmemtrace_user_event_alloc *ev_alloc;
-
-	trace_assign_type(entry, iter->ent);
-
-	ev = trace_seq_reserve(s, sizeof(*ev));
-	if (!ev)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	ev->event_id = KMEMTRACE_USER_ALLOC;
-	ev->type_id = entry->type_id;
-	ev->event_size = sizeof(*ev) + sizeof(*ev_alloc);
-	ev->cpu = iter->cpu;
-	ev->timestamp = iter->ts;
-	ev->call_site = entry->call_site;
-	ev->ptr = (unsigned long)entry->ptr;
-
-	ev_alloc = trace_seq_reserve(s, sizeof(*ev_alloc));
-	if (!ev_alloc)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	ev_alloc->bytes_req = entry->bytes_req;
-	ev_alloc->bytes_alloc = entry->bytes_alloc;
-	ev_alloc->gfp_flags = entry->gfp_flags;
-	ev_alloc->node = entry->node;
-
-	return TRACE_TYPE_HANDLED;
-}
-
-static enum print_line_t
-kmemtrace_print_free_user(struct trace_iterator *iter, int flags,
-			  struct trace_event *event)
-{
-	struct trace_seq *s = &iter->seq;
-	struct kmemtrace_free_entry *entry;
-	struct kmemtrace_user_event *ev;
-
-	trace_assign_type(entry, iter->ent);
-
-	ev = trace_seq_reserve(s, sizeof(*ev));
-	if (!ev)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	ev->event_id = KMEMTRACE_USER_FREE;
-	ev->type_id = entry->type_id;
-	ev->event_size = sizeof(*ev);
-	ev->cpu = iter->cpu;
-	ev->timestamp = iter->ts;
-	ev->call_site = entry->call_site;
-	ev->ptr = (unsigned long)entry->ptr;
-
-	return TRACE_TYPE_HANDLED;
-}
-
-/* The two other following provide a more minimalistic output */
-static enum print_line_t
-kmemtrace_print_alloc_compress(struct trace_iterator *iter)
-{
-	struct kmemtrace_alloc_entry *entry;
-	struct trace_seq *s = &iter->seq;
-	int ret;
-
-	trace_assign_type(entry, iter->ent);
-
-	/* Alloc entry */
-	ret = trace_seq_printf(s, " + ");
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Type */
-	switch (entry->type_id) {
-	case KMEMTRACE_TYPE_KMALLOC:
-		ret = trace_seq_printf(s, "K ");
-		break;
-	case KMEMTRACE_TYPE_CACHE:
-		ret = trace_seq_printf(s, "C ");
-		break;
-	case KMEMTRACE_TYPE_PAGES:
-		ret = trace_seq_printf(s, "P ");
-		break;
-	default:
-		ret = trace_seq_printf(s, "? ");
-	}
-
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Requested */
-	ret = trace_seq_printf(s, "%4zu ", entry->bytes_req);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Allocated */
-	ret = trace_seq_printf(s, "%4zu ", entry->bytes_alloc);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Flags
-	 * TODO: would be better to see the name of the GFP flag names
-	 */
-	ret = trace_seq_printf(s, "%08x ", entry->gfp_flags);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Pointer to allocated */
-	ret = trace_seq_printf(s, "0x%tx ", (ptrdiff_t)entry->ptr);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Node and call site*/
-	ret = trace_seq_printf(s, "%4d %pf\n", entry->node,
-			       (void *)entry->call_site);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	return TRACE_TYPE_HANDLED;
-}
-
-static enum print_line_t
-kmemtrace_print_free_compress(struct trace_iterator *iter)
-{
-	struct kmemtrace_free_entry *entry;
-	struct trace_seq *s = &iter->seq;
-	int ret;
-
-	trace_assign_type(entry, iter->ent);
-
-	/* Free entry */
-	ret = trace_seq_printf(s, " - ");
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Type */
-	switch (entry->type_id) {
-	case KMEMTRACE_TYPE_KMALLOC:
-		ret = trace_seq_printf(s, "K ");
-		break;
-	case KMEMTRACE_TYPE_CACHE:
-		ret = trace_seq_printf(s, "C ");
-		break;
-	case KMEMTRACE_TYPE_PAGES:
-		ret = trace_seq_printf(s, "P ");
-		break;
-	default:
-		ret = trace_seq_printf(s, "? ");
-	}
-
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Skip requested/allocated/flags */
-	ret = trace_seq_printf(s, " ");
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Pointer to allocated */
-	ret = trace_seq_printf(s, "0x%tx ", (ptrdiff_t)entry->ptr);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	/* Skip node and print call site*/
-	ret = trace_seq_printf(s, " %pf\n", (void *)entry->call_site);
-	if (!ret)
-		return TRACE_TYPE_PARTIAL_LINE;
-
-	return TRACE_TYPE_HANDLED;
-}
-
-static enum print_line_t kmemtrace_print_line(struct trace_iterator *iter)
-{
-	struct trace_entry *entry = iter->ent;
-
-	if (!(kmem_tracer_flags.val & TRACE_KMEM_OPT_MINIMAL))
-		return TRACE_TYPE_UNHANDLED;
-
-	switch (entry->type) {
-	case TRACE_KMEM_ALLOC:
-		return kmemtrace_print_alloc_compress(iter);
-	case TRACE_KMEM_FREE:
-		return kmemtrace_print_free_compress(iter);
-	default:
-		return TRACE_TYPE_UNHANDLED;
-	}
-}
-
-static struct trace_event_functions kmem_trace_alloc_funcs = {
-	.trace = kmemtrace_print_alloc,
-	.binary = kmemtrace_print_alloc_user,
-};
-
-static struct trace_event kmem_trace_alloc = {
-	.type = TRACE_KMEM_ALLOC,
-	.funcs = &kmem_trace_alloc_funcs,
-};
-
-static struct trace_event_functions kmem_trace_free_funcs = {
-	.trace = kmemtrace_print_free,
-	.binary = kmemtrace_print_free_user,
-};
-
-static struct trace_event kmem_trace_free = {
-	.type = TRACE_KMEM_FREE,
-	.funcs = &kmem_trace_free_funcs,
-};
-
-static struct tracer kmem_tracer __read_mostly = {
-	.name = "kmemtrace",
-	.init = kmem_trace_init,
-	.reset = kmem_trace_reset,
-	.print_line = kmemtrace_print_line,
-	.print_header = kmemtrace_headers,
-	.flags = &kmem_tracer_flags
-};
-
-void kmemtrace_init(void)
-{
-	/* earliest opportunity to start kmem tracing */
-}
-
-static int __init init_kmem_tracer(void)
-{
-	if (!register_ftrace_event(&kmem_trace_alloc)) {
-		pr_warning("Warning: could not register kmem events\n");
-		return 1;
-	}
-
-	if (!register_ftrace_event(&kmem_trace_free)) {
-		pr_warning("Warning: could not register kmem events\n");
-		return 1;
-	}
-
-	if (register_tracer(&kmem_tracer) != 0) {
-		pr_warning("Warning: could not register the kmem tracer\n");
-		return 1;
-	}
-
-	return 0;
-}
-device_initcall(init_kmem_tracer);
kernel/trace/trace.h
@@ -9,7 +9,6 @@
 #include <linux/mmiotrace.h>
 #include <linux/tracepoint.h>
 #include <linux/ftrace.h>
-#include <linux/kmemtrace.h>
 #include <linux/hw_breakpoint.h>
 #include <linux/trace_seq.h>
 #include <linux/ftrace_event.h>
 
@@ -30,19 +29,12 @@
 	TRACE_GRAPH_RET,
 	TRACE_GRAPH_ENT,
 	TRACE_USER_STACK,
-	TRACE_KMEM_ALLOC,
-	TRACE_KMEM_FREE,
 	TRACE_BLK,
 	TRACE_KSYM,
 
 	__TRACE_LAST_TYPE,
 };
 
-enum kmemtrace_type_id {
-	KMEMTRACE_TYPE_KMALLOC = 0,	/* kmalloc() or kfree(). */
-	KMEMTRACE_TYPE_CACHE,		/* kmem_cache_*(). */
-	KMEMTRACE_TYPE_PAGES,		/* __get_free_pages() and friends. */
-};
 
 #undef __field
 #define __field(type, item)		type item;
@@ -208,10 +200,6 @@
 			  TRACE_GRAPH_ENT);		\
 		IF_ASSIGN(var, ent, struct ftrace_graph_ret_entry,	\
 			  TRACE_GRAPH_RET);		\
-		IF_ASSIGN(var, ent, struct kmemtrace_alloc_entry,	\
-			  TRACE_KMEM_ALLOC);		\
-		IF_ASSIGN(var, ent, struct kmemtrace_free_entry,	\
-			  TRACE_KMEM_FREE);		\
 		IF_ASSIGN(var, ent, struct ksym_trace_entry, TRACE_KSYM);\
 		__ftrace_bad_type();					\
 	} while (0)
kernel/trace/trace_entries.h
@@ -291,41 +291,6 @@
 		  __entry->func, __entry->file, __entry->correct)
 );
 
-FTRACE_ENTRY(kmem_alloc, kmemtrace_alloc_entry,
-
-	TRACE_KMEM_ALLOC,
-
-	F_STRUCT(
-		__field( enum kmemtrace_type_id, type_id )
-		__field( unsigned long, call_site )
-		__field( const void *, ptr )
-		__field( size_t, bytes_req )
-		__field( size_t, bytes_alloc )
-		__field( gfp_t, gfp_flags )
-		__field( int, node )
-	),
-
-	F_printk("type:%u call_site:%lx ptr:%p req:%zi alloc:%zi"
-		 " flags:%x node:%d",
-		 __entry->type_id, __entry->call_site, __entry->ptr,
-		 __entry->bytes_req, __entry->bytes_alloc,
-		 __entry->gfp_flags, __entry->node)
-);
-
-FTRACE_ENTRY(kmem_free, kmemtrace_free_entry,
-
-	TRACE_KMEM_FREE,
-
-	F_STRUCT(
-		__field( enum kmemtrace_type_id, type_id )
-		__field( unsigned long, call_site )
-		__field( const void *, ptr )
-	),
-
-	F_printk("type:%u call_site:%lx ptr:%p",
-		 __entry->type_id, __entry->call_site, __entry->ptr)
-);
-
 FTRACE_ENTRY(ksym_trace, ksym_trace_entry,
 
 	TRACE_KSYM,
mm/slab.c
mm/slob.c