Commit bcf6edcd6fdb8965290f0b635a530fa3c6c212e1

Authored by Xiao Guangrong
Committed by Arnaldo Carvalho de Melo
1 parent 26bf264e87

perf kvm: Events analysis tool

Add 'perf kvm stat' support for smart analysis of kvm vmexit/mmio/ioport events

Usage:
- kvm stat
  Run a command and gather performance counter statistics; this is an alias
  of perf stat.

- trace kvm events:
  perf kvm stat record, or, if other tracepoints are of interest as well,
  append them like this:
  perf kvm stat record -e timer:* -a

  If many guests are running, a specific guest can be tracked with -p or
  --pid; -a tracks events generated by all guests.

- show the result:
  perf kvm stat report

Example output:
13005
13059

In total, 2 guests are running on the host.

Then, track the guest whose pid is 13059:
^C[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.253 MB perf.data.guest (~11065 samples) ]

See the vmexit events:

Analyze events for all VCPUs:

             VM-EXIT    Samples  Samples%     Time%         Avg time

         APIC_ACCESS        460    70.55%     0.01%     22.44us ( +-   1.75% )
                 HLT         93    14.26%    99.98% 832077.26us ( +-  10.42% )
  EXTERNAL_INTERRUPT         64     9.82%     0.00%     35.35us ( +-  14.21% )
   PENDING_INTERRUPT         24     3.68%     0.00%      9.29us ( +-  31.39% )
           CR_ACCESS          7     1.07%     0.00%      8.12us ( +-   5.76% )
      IO_INSTRUCTION          3     0.46%     0.00%     18.00us ( +-  11.79% )
       EXCEPTION_NMI          1     0.15%     0.00%      5.83us ( +-   -nan% )

Total Samples:652, Total events handled time:77396109.80us.

See the mmio events:

Analyze events for all VCPUs:

         MMIO Access    Samples  Samples%     Time%         Avg time

        0xfee00380:W        387    84.31%    79.28%      8.29us ( +-   3.32% )
        0xfee00300:W         24     5.23%     9.96%     16.79us ( +-   1.97% )
        0xfee00300:R         24     5.23%     7.83%     13.20us ( +-   3.00% )
        0xfee00310:W         24     5.23%     2.93%      4.94us ( +-   3.84% )

Total Samples:459, Total events handled time:4044.59us.

See the ioport events:

Analyze events for all VCPUs:

      IO Port Access    Samples  Samples%     Time%         Avg time

         0xc050:POUT          3   100.00%   100.00%     13.75us ( +-  10.83% )

Total Samples:3, Total events handled time:41.26us.

Additionally, --vcpu restricts the analysis to the specified vcpu, and --key
selects the sort key:

Analyze events for VCPU 0:

             VM-EXIT    Samples  Samples%     Time%         Avg time

                 HLT         27    13.85%    99.97% 405790.24us ( +-  12.70% )
  EXTERNAL_INTERRUPT         13     6.67%     0.00%     27.94us ( +-  22.26% )
         APIC_ACCESS        146    74.87%     0.03%     21.69us ( +-   2.91% )
      IO_INSTRUCTION          2     1.03%     0.00%     17.77us ( +-  20.56% )
           CR_ACCESS          2     1.03%     0.00%      8.55us ( +-   6.47% )
   PENDING_INTERRUPT          5     2.56%     0.00%      6.27us ( +-   3.94% )

Total Samples:195, Total events handled time:10959950.90us.

Signed-off-by: Dong Hao <haodong@linux.vnet.ibm.com>
Signed-off-by: Runzhen Wang <runzhen@linux.vnet.ibm.com>
[ Dong Hao <haodong@linux.vnet.ibm.com>
  Runzhen Wang <runzhen@linux.vnet.ibm.com>:
     - rebase it on current acme's tree
     - fix the compiling-error on i386 ]
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: David Ahern <dsahern@gmail.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: kvm@vger.kernel.org
Cc: Runzhen Wang <runzhen@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1347870675-31495-4-git-send-email-haodong@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

Showing 6 changed files with 929 additions and 6 deletions

tools/perf/Documentation/perf-kvm.txt
1 perf-kvm(1) 1 perf-kvm(1)
2 =========== 2 ===========
3 3
4 NAME 4 NAME
5 ---- 5 ----
6 perf-kvm - Tool to trace/measure kvm guest os 6 perf-kvm - Tool to trace/measure kvm guest os
7 7
8 SYNOPSIS 8 SYNOPSIS
9 -------- 9 --------
10 [verse] 10 [verse]
11 'perf kvm' [--host] [--guest] [--guestmount=<path> 11 'perf kvm' [--host] [--guest] [--guestmount=<path>
12 [--guestkallsyms=<path> --guestmodules=<path> | --guestvmlinux=<path>]] 12 [--guestkallsyms=<path> --guestmodules=<path> | --guestvmlinux=<path>]]
13 {top|record|report|diff|buildid-list} 13 {top|record|report|diff|buildid-list}
14 'perf kvm' [--host] [--guest] [--guestkallsyms=<path> --guestmodules=<path> 14 'perf kvm' [--host] [--guest] [--guestkallsyms=<path> --guestmodules=<path>
15 | --guestvmlinux=<path>] {top|record|report|diff|buildid-list} 15 | --guestvmlinux=<path>] {top|record|report|diff|buildid-list|stat}
16 16
17 DESCRIPTION 17 DESCRIPTION
18 ----------- 18 -----------
19 There are a couple of variants of perf kvm: 19 There are a couple of variants of perf kvm:
20 20
21 'perf kvm [options] top <command>' to generates and displays 21 'perf kvm [options] top <command>' to generates and displays
22 a performance counter profile of guest os in realtime 22 a performance counter profile of guest os in realtime
23 of an arbitrary workload. 23 of an arbitrary workload.
24 24
25 'perf kvm record <command>' to record the performance counter profile 25 'perf kvm record <command>' to record the performance counter profile
26 of an arbitrary workload and save it into a perf data file. If both 26 of an arbitrary workload and save it into a perf data file. If both
27 --host and --guest are input, the perf data file name is perf.data.kvm. 27 --host and --guest are input, the perf data file name is perf.data.kvm.
28 If there is no --host but --guest, the file name is perf.data.guest. 28 If there is no --host but --guest, the file name is perf.data.guest.
29 If there is no --guest but --host, the file name is perf.data.host. 29 If there is no --guest but --host, the file name is perf.data.host.
30 30
31 'perf kvm report' to display the performance counter profile information 31 'perf kvm report' to display the performance counter profile information
32 recorded via perf kvm record. 32 recorded via perf kvm record.
33 33
34 'perf kvm diff' to displays the performance difference amongst two perf.data 34 'perf kvm diff' to displays the performance difference amongst two perf.data
35 files captured via perf record. 35 files captured via perf record.
36 36
37 'perf kvm buildid-list' to display the buildids found in a perf data file, 37 'perf kvm buildid-list' to display the buildids found in a perf data file,
38 so that other tools can be used to fetch packages with matching symbol tables 38 so that other tools can be used to fetch packages with matching symbol tables
39 for use by perf report. 39 for use by perf report.
40 40
41 'perf kvm stat <command>' to run a command and gather performance counter
42 statistics.
43 In particular, 'perf kvm stat record/report' generates a statistical
44 analysis of KVM events. Currently, vmexit, mmio and ioport events are
45 supported.
46 'perf kvm stat record <command>' records kvm events while <command> runs,
47 and produces a file which contains the tracing results of those kvm
48 events.
49
50 'perf kvm stat report' reports statistical data which includes events
51 handled time, samples, and so on.
52
41 OPTIONS 53 OPTIONS
42 ------- 54 -------
43 -i:: 55 -i::
44 --input=:: 56 --input=::
45 Input file name. 57 Input file name.
46 -o:: 58 -o::
47 --output:: 59 --output::
48 Output file name. 60 Output file name.
49 --host=:: 61 --host=::
50 Collect host side performance profile. 62 Collect host side performance profile.
51 --guest=:: 63 --guest=::
52 Collect guest side performance profile. 64 Collect guest side performance profile.
53 --guestmount=<path>:: 65 --guestmount=<path>::
54 Guest os root file system mount directory. Users mounts guest os 66 Guest os root file system mount directory. Users mounts guest os
55 root directories under <path> by a specific filesystem access method, 67 root directories under <path> by a specific filesystem access method,
56 typically, sshfs. For example, start 2 guest os. The one's pid is 8888 68 typically, sshfs. For example, start 2 guest os. The one's pid is 8888
57 and the other's is 9999. 69 and the other's is 9999.
58 #mkdir ~/guestmount; cd ~/guestmount 70 #mkdir ~/guestmount; cd ~/guestmount
59 #sshfs -o allow_other,direct_io -p 5551 localhost:/ 8888/ 71 #sshfs -o allow_other,direct_io -p 5551 localhost:/ 8888/
60 #sshfs -o allow_other,direct_io -p 5552 localhost:/ 9999/ 72 #sshfs -o allow_other,direct_io -p 5552 localhost:/ 9999/
61 #perf kvm --host --guest --guestmount=~/guestmount top 73 #perf kvm --host --guest --guestmount=~/guestmount top
62 --guestkallsyms=<path>:: 74 --guestkallsyms=<path>::
63 Guest os /proc/kallsyms file copy. 'perf' kvm' reads it to get guest 75 Guest os /proc/kallsyms file copy. 'perf' kvm' reads it to get guest
64 kernel symbols. Users copy it out from guest os. 76 kernel symbols. Users copy it out from guest os.
65 --guestmodules=<path>:: 77 --guestmodules=<path>::
66 Guest os /proc/modules file copy. 'perf' kvm' reads it to get guest 78 Guest os /proc/modules file copy. 'perf' kvm' reads it to get guest
67 kernel module information. Users copy it out from guest os. 79 kernel module information. Users copy it out from guest os.
68 --guestvmlinux=<path>:: 80 --guestvmlinux=<path>::
69 Guest os kernel vmlinux. 81 Guest os kernel vmlinux.
70 82
83 STAT REPORT OPTIONS
84 -------------------
85 --vcpu=<value>::
86 Analyze events which occur on this vcpu. (default: all vcpus)
87
88 --events=<value>::
89 Events to be analyzed. Possible values: vmexit, mmio, ioport.
90 (default: vmexit)
91 -k::
92 --key=<value>::
93 Sorting key. Possible values: sample (default, sort by samples
94 number), time (sort by average time).
95
71 SEE ALSO 96 SEE ALSO
72 -------- 97 --------
73 linkperf:perf-top[1], linkperf:perf-record[1], linkperf:perf-report[1], 98 linkperf:perf-top[1], linkperf:perf-record[1], linkperf:perf-report[1],
74 linkperf:perf-diff[1], linkperf:perf-buildid-list[1] 99 linkperf:perf-diff[1], linkperf:perf-buildid-list[1],
100 linkperf:perf-stat[1]
75 101
1 tools/perf 1 tools/perf
2 tools/scripts 2 tools/scripts
3 tools/lib/traceevent 3 tools/lib/traceevent
4 include/linux/const.h 4 include/linux/const.h
5 include/linux/perf_event.h 5 include/linux/perf_event.h
6 include/linux/rbtree.h 6 include/linux/rbtree.h
7 include/linux/list.h 7 include/linux/list.h
8 include/linux/hash.h 8 include/linux/hash.h
9 include/linux/stringify.h 9 include/linux/stringify.h
10 lib/rbtree.c 10 lib/rbtree.c
11 include/linux/swab.h 11 include/linux/swab.h
12 arch/*/include/asm/unistd*.h 12 arch/*/include/asm/unistd*.h
13 arch/*/include/asm/perf_regs.h 13 arch/*/include/asm/perf_regs.h
14 arch/*/lib/memcpy*.S 14 arch/*/lib/memcpy*.S
15 arch/*/lib/memset*.S 15 arch/*/lib/memset*.S
16 include/linux/poison.h 16 include/linux/poison.h
17 include/linux/magic.h 17 include/linux/magic.h
18 include/linux/hw_breakpoint.h 18 include/linux/hw_breakpoint.h
19 arch/x86/include/asm/svm.h
20 arch/x86/include/asm/vmx.h
21 arch/x86/include/asm/kvm_host.h
19 22
tools/perf/builtin-kvm.c
1 #include "builtin.h" 1 #include "builtin.h"
2 #include "perf.h" 2 #include "perf.h"
3 3
4 #include "util/evsel.h"
4 #include "util/util.h" 5 #include "util/util.h"
5 #include "util/cache.h" 6 #include "util/cache.h"
6 #include "util/symbol.h" 7 #include "util/symbol.h"
7 #include "util/thread.h" 8 #include "util/thread.h"
8 #include "util/header.h" 9 #include "util/header.h"
9 #include "util/session.h" 10 #include "util/session.h"
10 11
11 #include "util/parse-options.h" 12 #include "util/parse-options.h"
12 #include "util/trace-event.h" 13 #include "util/trace-event.h"
13
14 #include "util/debug.h" 14 #include "util/debug.h"
15 #include "util/debugfs.h"
16 #include "util/tool.h"
17 #include "util/stat.h"
15 18
16 #include <sys/prctl.h> 19 #include <sys/prctl.h>
17 20
18 #include <semaphore.h> 21 #include <semaphore.h>
19 #include <pthread.h> 22 #include <pthread.h>
20 #include <math.h> 23 #include <math.h>
21 24
22 static const char *file_name; 25 #include "../../arch/x86/include/asm/svm.h"
26 #include "../../arch/x86/include/asm/vmx.h"
27 #include "../../arch/x86/include/asm/kvm.h"
28
29 struct event_key {
30 #define INVALID_KEY (~0ULL)
31 u64 key;
32 int info;
33 };
34
35 struct kvm_events_ops {
36 bool (*is_begin_event)(struct event_format *event, void *data,
37 struct event_key *key);
38 bool (*is_end_event)(struct event_format *event, void *data,
39 struct event_key *key);
40 void (*decode_key)(struct event_key *key, char decode[20]);
41 const char *name;
42 };
43
44 static void exit_event_get_key(struct event_format *event, void *data,
45 struct event_key *key)
46 {
47 key->info = 0;
48 key->key = raw_field_value(event, "exit_reason", data);
49 }
50
51 static bool kvm_exit_event(struct event_format *event)
52 {
53 return !strcmp(event->name, "kvm_exit");
54 }
55
56 static bool exit_event_begin(struct event_format *event, void *data,
57 struct event_key *key)
58 {
59 if (kvm_exit_event(event)) {
60 exit_event_get_key(event, data, key);
61 return true;
62 }
63
64 return false;
65 }
66
67 static bool kvm_entry_event(struct event_format *event)
68 {
69 return !strcmp(event->name, "kvm_entry");
70 }
71
72 static bool exit_event_end(struct event_format *event, void *data __maybe_unused,
73 struct event_key *key __maybe_unused)
74 {
75 return kvm_entry_event(event);
76 }
77
78 struct exit_reasons_table {
79 unsigned long exit_code;
80 const char *reason;
81 };
82
83 struct exit_reasons_table vmx_exit_reasons[] = {
84 VMX_EXIT_REASONS
85 };
86
87 struct exit_reasons_table svm_exit_reasons[] = {
88 SVM_EXIT_REASONS
89 };
90
91 static int cpu_isa;
92
93 static const char *get_exit_reason(u64 exit_code)
94 {
95 int table_size = ARRAY_SIZE(svm_exit_reasons);
96 struct exit_reasons_table *table = svm_exit_reasons;
97
98 if (cpu_isa == 1) {
99 table = vmx_exit_reasons;
100 table_size = ARRAY_SIZE(vmx_exit_reasons);
101 }
102
103 while (table_size--) {
104 if (table->exit_code == exit_code)
105 return table->reason;
106 table++;
107 }
108
109 pr_err("unknown kvm exit code:%lld on %s\n",
110 (unsigned long long)exit_code, cpu_isa ? "VMX" : "SVM");
111 return "UNKNOWN";
112 }
113
114 static void exit_event_decode_key(struct event_key *key, char decode[20])
115 {
116 const char *exit_reason = get_exit_reason(key->key);
117
118 scnprintf(decode, 20, "%s", exit_reason);
119 }
120
121 static struct kvm_events_ops exit_events = {
122 .is_begin_event = exit_event_begin,
123 .is_end_event = exit_event_end,
124 .decode_key = exit_event_decode_key,
125 .name = "VM-EXIT"
126 };
127
128 /*
129 * For the mmio events, we treat:
130 * the time of MMIO write: kvm_mmio(KVM_TRACE_MMIO_WRITE...) -> kvm_entry
131 * the time of MMIO read: kvm_exit -> kvm_mmio(KVM_TRACE_MMIO_READ...).
132 */
133 static void mmio_event_get_key(struct event_format *event, void *data,
134 struct event_key *key)
135 {
136 key->key = raw_field_value(event, "gpa", data);
137 key->info = raw_field_value(event, "type", data);
138 }
139
140 #define KVM_TRACE_MMIO_READ_UNSATISFIED 0
141 #define KVM_TRACE_MMIO_READ 1
142 #define KVM_TRACE_MMIO_WRITE 2
143
144 static bool mmio_event_begin(struct event_format *event, void *data,
145 struct event_key *key)
146 {
147 /* MMIO read begin event in kernel. */
148 if (kvm_exit_event(event))
149 return true;
150
151 /* MMIO write begin event in kernel. */
152 if (!strcmp(event->name, "kvm_mmio") &&
153 raw_field_value(event, "type", data) == KVM_TRACE_MMIO_WRITE) {
154 mmio_event_get_key(event, data, key);
155 return true;
156 }
157
158 return false;
159 }
160
161 static bool mmio_event_end(struct event_format *event, void *data,
162 struct event_key *key)
163 {
164 /* MMIO write end event in kernel. */
165 if (kvm_entry_event(event))
166 return true;
167
168 /* MMIO read end event in kernel.*/
169 if (!strcmp(event->name, "kvm_mmio") &&
170 raw_field_value(event, "type", data) == KVM_TRACE_MMIO_READ) {
171 mmio_event_get_key(event, data, key);
172 return true;
173 }
174
175 return false;
176 }
177
178 static void mmio_event_decode_key(struct event_key *key, char decode[20])
179 {
180 scnprintf(decode, 20, "%#lx:%s", (unsigned long)key->key,
181 key->info == KVM_TRACE_MMIO_WRITE ? "W" : "R");
182 }
183
184 static struct kvm_events_ops mmio_events = {
185 .is_begin_event = mmio_event_begin,
186 .is_end_event = mmio_event_end,
187 .decode_key = mmio_event_decode_key,
188 .name = "MMIO Access"
189 };
190
191 /* The time of emulation pio access is from kvm_pio to kvm_entry. */
192 static void ioport_event_get_key(struct event_format *event, void *data,
193 struct event_key *key)
194 {
195 key->key = raw_field_value(event, "port", data);
196 key->info = raw_field_value(event, "rw", data);
197 }
198
199 static bool ioport_event_begin(struct event_format *event, void *data,
200 struct event_key *key)
201 {
202 if (!strcmp(event->name, "kvm_pio")) {
203 ioport_event_get_key(event, data, key);
204 return true;
205 }
206
207 return false;
208 }
209
210 static bool ioport_event_end(struct event_format *event, void *data __maybe_unused,
211 struct event_key *key __maybe_unused)
212 {
213 if (kvm_entry_event(event))
214 return true;
215
216 return false;
217 }
218
219 static void ioport_event_decode_key(struct event_key *key, char decode[20])
220 {
221 scnprintf(decode, 20, "%#llx:%s", (unsigned long long)key->key,
222 key->info ? "POUT" : "PIN");
223 }
224
225 static struct kvm_events_ops ioport_events = {
226 .is_begin_event = ioport_event_begin,
227 .is_end_event = ioport_event_end,
228 .decode_key = ioport_event_decode_key,
229 .name = "IO Port Access"
230 };
231
232 static const char *report_event = "vmexit";
233 struct kvm_events_ops *events_ops;
234
235 static bool register_kvm_events_ops(void)
236 {
237 bool ret = true;
238
239 if (!strcmp(report_event, "vmexit"))
240 events_ops = &exit_events;
241 else if (!strcmp(report_event, "mmio"))
242 events_ops = &mmio_events;
243 else if (!strcmp(report_event, "ioport"))
244 events_ops = &ioport_events;
245 else {
246 pr_err("Unknown report event:%s\n", report_event);
247 ret = false;
248 }
249
250 return ret;
251 }
252
253 struct kvm_event_stats {
254 u64 time;
255 struct stats stats;
256 };
257
258 struct kvm_event {
259 struct list_head hash_entry;
260 struct rb_node rb;
261
262 struct event_key key;
263
264 struct kvm_event_stats total;
265
266 #define DEFAULT_VCPU_NUM 8
267 int max_vcpu;
268 struct kvm_event_stats *vcpu;
269 };
270
271 struct vcpu_event_record {
272 int vcpu_id;
273 u64 start_time;
274 struct kvm_event *last_event;
275 };
276
277 #define EVENTS_BITS 12
278 #define EVENTS_CACHE_SIZE (1UL << EVENTS_BITS)
279
280 static u64 total_time;
281 static u64 total_count;
282 static struct list_head kvm_events_cache[EVENTS_CACHE_SIZE];
283
284 static void init_kvm_event_record(void)
285 {
286 int i;
287
288 for (i = 0; i < (int)EVENTS_CACHE_SIZE; i++)
289 INIT_LIST_HEAD(&kvm_events_cache[i]);
290 }
291
292 static int kvm_events_hash_fn(u64 key)
293 {
294 return key & (EVENTS_CACHE_SIZE - 1);
295 }
296
297 static bool kvm_event_expand(struct kvm_event *event, int vcpu_id)
298 {
299 int old_max_vcpu = event->max_vcpu;
300
301 if (vcpu_id < event->max_vcpu)
302 return true;
303
304 while (event->max_vcpu <= vcpu_id)
305 event->max_vcpu += DEFAULT_VCPU_NUM;
306
307 event->vcpu = realloc(event->vcpu,
308 event->max_vcpu * sizeof(*event->vcpu));
309 if (!event->vcpu) {
310 pr_err("Not enough memory\n");
311 return false;
312 }
313
314 memset(event->vcpu + old_max_vcpu, 0,
315 (event->max_vcpu - old_max_vcpu) * sizeof(*event->vcpu));
316 return true;
317 }
318
319 static struct kvm_event *kvm_alloc_init_event(struct event_key *key)
320 {
321 struct kvm_event *event;
322
323 event = zalloc(sizeof(*event));
324 if (!event) {
325 pr_err("Not enough memory\n");
326 return NULL;
327 }
328
329 event->key = *key;
330 return event;
331 }
332
333 static struct kvm_event *find_create_kvm_event(struct event_key *key)
334 {
335 struct kvm_event *event;
336 struct list_head *head;
337
338 BUG_ON(key->key == INVALID_KEY);
339
340 head = &kvm_events_cache[kvm_events_hash_fn(key->key)];
341 list_for_each_entry(event, head, hash_entry)
342 if (event->key.key == key->key && event->key.info == key->info)
343 return event;
344
345 event = kvm_alloc_init_event(key);
346 if (!event)
347 return NULL;
348
349 list_add(&event->hash_entry, head);
350 return event;
351 }
352
353 static bool handle_begin_event(struct vcpu_event_record *vcpu_record,
354 struct event_key *key, u64 timestamp)
355 {
356 struct kvm_event *event = NULL;
357
358 if (key->key != INVALID_KEY)
359 event = find_create_kvm_event(key);
360
361 vcpu_record->last_event = event;
362 vcpu_record->start_time = timestamp;
363 return true;
364 }
365
366 static void
367 kvm_update_event_stats(struct kvm_event_stats *kvm_stats, u64 time_diff)
368 {
369 kvm_stats->time += time_diff;
370 update_stats(&kvm_stats->stats, time_diff);
371 }
372
373 static double kvm_event_rel_stddev(int vcpu_id, struct kvm_event *event)
374 {
375 struct kvm_event_stats *kvm_stats = &event->total;
376
377 if (vcpu_id != -1)
378 kvm_stats = &event->vcpu[vcpu_id];
379
380 return rel_stddev_stats(stddev_stats(&kvm_stats->stats),
381 avg_stats(&kvm_stats->stats));
382 }
383
384 static bool update_kvm_event(struct kvm_event *event, int vcpu_id,
385 u64 time_diff)
386 {
387 kvm_update_event_stats(&event->total, time_diff);
388
389 if (!kvm_event_expand(event, vcpu_id))
390 return false;
391
392 kvm_update_event_stats(&event->vcpu[vcpu_id], time_diff);
393 return true;
394 }
395
396 static bool handle_end_event(struct vcpu_event_record *vcpu_record,
397 struct event_key *key, u64 timestamp)
398 {
399 struct kvm_event *event;
400 u64 time_begin, time_diff;
401
402 event = vcpu_record->last_event;
403 time_begin = vcpu_record->start_time;
404
405 /* The begin event is not caught. */
406 if (!time_begin)
407 return true;
408
409 /*
410 * In some case, the 'begin event' only records the start timestamp,
411 * the actual event is recognized in the 'end event' (e.g. mmio-event).
412 */
413
414 /* Both begin and end events did not get the key. */
415 if (!event && key->key == INVALID_KEY)
416 return true;
417
418 if (!event)
419 event = find_create_kvm_event(key);
420
421 if (!event)
422 return false;
423
424 vcpu_record->last_event = NULL;
425 vcpu_record->start_time = 0;
426
427 BUG_ON(timestamp < time_begin);
428
429 time_diff = timestamp - time_begin;
430 return update_kvm_event(event, vcpu_record->vcpu_id, time_diff);
431 }
432
433 static struct vcpu_event_record
434 *per_vcpu_record(struct thread *thread, struct event_format *event, void *data)
435 {
436 /* Only kvm_entry records vcpu id. */
437 if (!thread->priv && kvm_entry_event(event)) {
438 struct vcpu_event_record *vcpu_record;
439
440 vcpu_record = zalloc(sizeof(struct vcpu_event_record));
441 if (!vcpu_record) {
442 pr_err("Not enough memory\n");
443 return NULL;
444 }
445
446 vcpu_record->vcpu_id = raw_field_value(event, "vcpu_id", data);
447 thread->priv = vcpu_record;
448 }
449
450 return (struct vcpu_event_record *)thread->priv;
451 }
452
453 static bool handle_kvm_event(struct thread *thread, struct event_format *event,
454 void *data, u64 timestamp)
455 {
456 struct vcpu_event_record *vcpu_record;
457 struct event_key key = {.key = INVALID_KEY};
458
459 vcpu_record = per_vcpu_record(thread, event, data);
460 if (!vcpu_record)
461 return true;
462
463 if (events_ops->is_begin_event(event, data, &key))
464 return handle_begin_event(vcpu_record, &key, timestamp);
465
466 if (events_ops->is_end_event(event, data, &key))
467 return handle_end_event(vcpu_record, &key, timestamp);
468
469 return true;
470 }
471
472 typedef int (*key_cmp_fun)(struct kvm_event*, struct kvm_event*, int);
473 struct kvm_event_key {
474 const char *name;
475 key_cmp_fun key;
476 };
477
478 static int trace_vcpu = -1;
479 #define GET_EVENT_KEY(func, field) \
480 static u64 get_event_ ##func(struct kvm_event *event, int vcpu) \
481 { \
482 if (vcpu == -1) \
483 return event->total.field; \
484 \
485 if (vcpu >= event->max_vcpu) \
486 return 0; \
487 \
488 return event->vcpu[vcpu].field; \
489 }
490
491 #define COMPARE_EVENT_KEY(func, field) \
492 GET_EVENT_KEY(func, field) \
493 static int compare_kvm_event_ ## func(struct kvm_event *one, \
494 struct kvm_event *two, int vcpu)\
495 { \
496 return get_event_ ##func(one, vcpu) > \
497 get_event_ ##func(two, vcpu); \
498 }
499
500 GET_EVENT_KEY(time, time);
501 COMPARE_EVENT_KEY(count, stats.n);
502 COMPARE_EVENT_KEY(mean, stats.mean);
503
504 #define DEF_SORT_NAME_KEY(name, compare_key) \
505 { #name, compare_kvm_event_ ## compare_key }
506
507 static struct kvm_event_key keys[] = {
508 DEF_SORT_NAME_KEY(sample, count),
509 DEF_SORT_NAME_KEY(time, mean),
510 { NULL, NULL }
511 };
512
513 static const char *sort_key = "sample";
514 static key_cmp_fun compare;
515
516 static bool select_key(void)
517 {
518 int i;
519
520 for (i = 0; keys[i].name; i++) {
521 if (!strcmp(keys[i].name, sort_key)) {
522 compare = keys[i].key;
523 return true;
524 }
525 }
526
527 pr_err("Unknown compare key:%s\n", sort_key);
528 return false;
529 }
530
531 static struct rb_root result;
532 static void insert_to_result(struct kvm_event *event, key_cmp_fun bigger,
533 int vcpu)
534 {
535 struct rb_node **rb = &result.rb_node;
536 struct rb_node *parent = NULL;
537 struct kvm_event *p;
538
539 while (*rb) {
540 p = container_of(*rb, struct kvm_event, rb);
541 parent = *rb;
542
543 if (bigger(event, p, vcpu))
544 rb = &(*rb)->rb_left;
545 else
546 rb = &(*rb)->rb_right;
547 }
548
549 rb_link_node(&event->rb, parent, rb);
550 rb_insert_color(&event->rb, &result);
551 }
552
553 static void update_total_count(struct kvm_event *event, int vcpu)
554 {
555 total_count += get_event_count(event, vcpu);
556 total_time += get_event_time(event, vcpu);
557 }
558
559 static bool event_is_valid(struct kvm_event *event, int vcpu)
560 {
561 return !!get_event_count(event, vcpu);
562 }
563
564 static void sort_result(int vcpu)
565 {
566 unsigned int i;
567 struct kvm_event *event;
568
569 for (i = 0; i < EVENTS_CACHE_SIZE; i++)
570 list_for_each_entry(event, &kvm_events_cache[i], hash_entry)
571 if (event_is_valid(event, vcpu)) {
572 update_total_count(event, vcpu);
573 insert_to_result(event, compare, vcpu);
574 }
575 }
576
577 /* returns left most element of result, and erase it */
578 static struct kvm_event *pop_from_result(void)
579 {
580 struct rb_node *node = rb_first(&result);
581
582 if (!node)
583 return NULL;
584
585 rb_erase(node, &result);
586 return container_of(node, struct kvm_event, rb);
587 }
588
589 static void print_vcpu_info(int vcpu)
590 {
591 pr_info("Analyze events for ");
592
593 if (vcpu == -1)
594 pr_info("all VCPUs:\n\n");
595 else
596 pr_info("VCPU %d:\n\n", vcpu);
597 }
598
599 static void print_result(int vcpu)
600 {
601 char decode[20];
602 struct kvm_event *event;
603
604 pr_info("\n\n");
605 print_vcpu_info(vcpu);
606 pr_info("%20s ", events_ops->name);
607 pr_info("%10s ", "Samples");
608 pr_info("%9s ", "Samples%");
609
610 pr_info("%9s ", "Time%");
611 pr_info("%16s ", "Avg time");
612 pr_info("\n\n");
613
614 while ((event = pop_from_result())) {
615 u64 ecount, etime;
616
617 ecount = get_event_count(event, vcpu);
618 etime = get_event_time(event, vcpu);
619
620 events_ops->decode_key(&event->key, decode);
621 pr_info("%20s ", decode);
622 pr_info("%10llu ", (unsigned long long)ecount);
623 pr_info("%8.2f%% ", (double)ecount / total_count * 100);
624 pr_info("%8.2f%% ", (double)etime / total_time * 100);
625 pr_info("%9.2fus ( +-%7.2f%% )", (double)etime / ecount/1e3,
626 kvm_event_rel_stddev(vcpu, event));
627 pr_info("\n");
628 }
629
630 pr_info("\nTotal Samples:%lld, Total events handled time:%.2fus.\n\n",
631 (unsigned long long)total_count, total_time / 1e3);
632 }
633
634 static int process_sample_event(struct perf_tool *tool __maybe_unused,
635 union perf_event *event,
636 struct perf_sample *sample,
637 struct perf_evsel *evsel,
638 struct machine *machine)
639 {
640 struct thread *thread = machine__findnew_thread(machine, sample->tid);
641
642 if (thread == NULL) {
643 pr_debug("problem processing %d event, skipping it.\n",
644 event->header.type);
645 return -1;
646 }
647
648 if (!handle_kvm_event(thread, evsel->tp_format, sample->raw_data,
649 sample->time))
650 return -1;
651
652 return 0;
653 }
654
655 static struct perf_tool eops = {
656 .sample = process_sample_event,
657 .comm = perf_event__process_comm,
658 .ordered_samples = true,
659 };
660
661 static int get_cpu_isa(struct perf_session *session)
662 {
663 char *cpuid;
664 int isa;
665
666 cpuid = perf_header__read_feature(session, HEADER_CPUID);
667
668 if (!cpuid) {
669 pr_err("read HEADER_CPUID failed.\n");
670 return -ENOTSUP;
671 }
672
673 if (strstr(cpuid, "Intel"))
674 isa = 1;
675 else if (strstr(cpuid, "AMD"))
676 isa = 0;
677 else {
678 pr_err("CPU %s is not supported.\n", cpuid);
679 isa = -ENOTSUP;
680 }
681
682 free(cpuid);
683 return isa;
684 }
685
686 static const char *file_name;
687
688 static int read_events(void)
689 {
690 struct perf_session *kvm_session;
691 int ret;
692
693 kvm_session = perf_session__new(file_name, O_RDONLY, 0, false, &eops);
694 if (!kvm_session) {
695 pr_err("Initializing perf session failed\n");
696 return -EINVAL;
697 }
698
699 if (!perf_session__has_traces(kvm_session, "kvm record"))
700 return -EINVAL;
701
702 /*
703 * Do not use 'isa' recorded in kvm_exit tracepoint since it is not
704 * traced in the old kernel.
705 */
706 ret = get_cpu_isa(kvm_session);
707
708 if (ret < 0)
709 return ret;
710
711 cpu_isa = ret;
712
	return perf_session__process_events(kvm_session, &eops);
}

static bool verify_vcpu(int vcpu)
{
	if (vcpu != -1 && vcpu < 0) {
		pr_err("Invalid vcpu:%d.\n", vcpu);
		return false;
	}

	return true;
}

static int kvm_events_report_vcpu(int vcpu)
{
	int ret = -EINVAL;

	if (!verify_vcpu(vcpu))
		goto exit;

	if (!select_key())
		goto exit;

	if (!register_kvm_events_ops())
		goto exit;

	init_kvm_event_record();
	setup_pager();

	ret = read_events();
	if (ret)
		goto exit;

	sort_result(vcpu);
	print_result(vcpu);
exit:
	return ret;
}

static const char * const record_args[] = {
	"record",
	"-R",
	"-f",
	"-m", "1024",
	"-c", "1",
	"-e", "kvm:kvm_entry",
	"-e", "kvm:kvm_exit",
	"-e", "kvm:kvm_mmio",
	"-e", "kvm:kvm_pio",
};

#define STRDUP_FAIL_EXIT(s)		\
	({	char *_p;		\
		_p = strdup(s);		\
		if (!_p)		\
			return -ENOMEM;	\
		_p;			\
	})

static int kvm_events_record(int argc, const char **argv)
{
	unsigned int rec_argc, i, j;
	const char **rec_argv;

	rec_argc = ARRAY_SIZE(record_args) + argc + 2;
	rec_argv = calloc(rec_argc + 1, sizeof(char *));

	if (rec_argv == NULL)
		return -ENOMEM;

	for (i = 0; i < ARRAY_SIZE(record_args); i++)
		rec_argv[i] = STRDUP_FAIL_EXIT(record_args[i]);

	rec_argv[i++] = STRDUP_FAIL_EXIT("-o");
	rec_argv[i++] = STRDUP_FAIL_EXIT(file_name);

	for (j = 1; j < (unsigned int)argc; j++, i++)
		rec_argv[i] = argv[j];

	return cmd_record(i, rec_argv, NULL);
}

static const char * const kvm_events_report_usage[] = {
	"perf kvm stat report [<options>]",
	NULL
};

static const struct option kvm_events_report_options[] = {
	OPT_STRING(0, "event", &report_event, "report event",
		   "event for reporting: vmexit, mmio, ioport"),
	OPT_INTEGER(0, "vcpu", &trace_vcpu,
		    "vcpu id to report"),
	OPT_STRING('k', "key", &sort_key, "sort-key",
		   "key for sorting: sample(sort by samples number)"
		   " time (sort by avg time)"),
	OPT_END()
};

static int kvm_events_report(int argc, const char **argv)
{
	symbol__init();

	if (argc) {
		argc = parse_options(argc, argv,
				     kvm_events_report_options,
				     kvm_events_report_usage, 0);
		if (argc)
			usage_with_options(kvm_events_report_usage,
					   kvm_events_report_options);
	}

	return kvm_events_report_vcpu(trace_vcpu);
}

static void print_kvm_stat_usage(void)
{
	printf("Usage: perf kvm stat <command>\n\n");

	printf("# Available commands:\n");
	printf("\trecord: record kvm events\n");
	printf("\treport: report statistical data of kvm events\n");

	printf("\nOtherwise, it is the alias of 'perf stat':\n");
}

static int kvm_cmd_stat(int argc, const char **argv)
{
	if (argc == 1) {
		print_kvm_stat_usage();
		goto perf_stat;
	}

	if (!strncmp(argv[1], "rec", 3))
		return kvm_events_record(argc - 1, argv + 1);

	if (!strncmp(argv[1], "rep", 3))
		return kvm_events_report(argc - 1, argv + 1);

perf_stat:
	return cmd_stat(argc, argv, NULL);
}

static char name_buffer[256];

static const char * const kvm_usage[] = {
	"perf kvm [<options>] {top|record|report|diff|buildid-list|stat}",
	NULL
};

static const struct option kvm_options[] = {
	OPT_STRING('i', "input", &file_name, "file",
		   "Input file name"),
	OPT_STRING('o', "output", &file_name, "file",
		   "Output file name"),
	OPT_BOOLEAN(0, "guest", &perf_guest,
		    "Collect guest os data"),
	OPT_BOOLEAN(0, "host", &perf_host,
		    "Collect host os data"),
	OPT_STRING(0, "guestmount", &symbol_conf.guestmount, "directory",
		   "guest mount directory under which every guest os"
		   " instance has a subdir"),
	OPT_STRING(0, "guestvmlinux", &symbol_conf.default_guest_vmlinux_name,
		   "file", "file saving guest os vmlinux"),
	OPT_STRING(0, "guestkallsyms", &symbol_conf.default_guest_kallsyms,
		   "file", "file saving guest os /proc/kallsyms"),
	OPT_STRING(0, "guestmodules", &symbol_conf.default_guest_modules,
		   "file", "file saving guest os /proc/modules"),
	OPT_END()
};

static int __cmd_record(int argc, const char **argv)
{
	int rec_argc, i = 0, j;
	const char **rec_argv;

	rec_argc = argc + 2;
	rec_argv = calloc(rec_argc + 1, sizeof(char *));
	rec_argv[i++] = strdup("record");
	rec_argv[i++] = strdup("-o");
	rec_argv[i++] = strdup(file_name);
	for (j = 1; j < argc; j++, i++)
		rec_argv[i] = argv[j];

	BUG_ON(i != rec_argc);

	return cmd_record(i, rec_argv, NULL);
}

static int __cmd_report(int argc, const char **argv)
{
	int rec_argc, i = 0, j;
	const char **rec_argv;

	rec_argc = argc + 2;
	rec_argv = calloc(rec_argc + 1, sizeof(char *));
	rec_argv[i++] = strdup("report");
	rec_argv[i++] = strdup("-i");
	rec_argv[i++] = strdup(file_name);
	for (j = 1; j < argc; j++, i++)
		rec_argv[i] = argv[j];

	BUG_ON(i != rec_argc);

	return cmd_report(i, rec_argv, NULL);
}

static int __cmd_buildid_list(int argc, const char **argv)
{
	int rec_argc, i = 0, j;
	const char **rec_argv;

	rec_argc = argc + 2;
	rec_argv = calloc(rec_argc + 1, sizeof(char *));
	rec_argv[i++] = strdup("buildid-list");
	rec_argv[i++] = strdup("-i");
	rec_argv[i++] = strdup(file_name);
	for (j = 1; j < argc; j++, i++)
		rec_argv[i] = argv[j];

	BUG_ON(i != rec_argc);

	return cmd_buildid_list(i, rec_argv, NULL);
}

int cmd_kvm(int argc, const char **argv, const char *prefix __maybe_unused)
{
	perf_host  = 0;
	perf_guest = 1;

	argc = parse_options(argc, argv, kvm_options, kvm_usage,
			     PARSE_OPT_STOP_AT_NON_OPTION);
	if (!argc)
		usage_with_options(kvm_usage, kvm_options);

	if (!perf_host)
		perf_guest = 1;

	if (!file_name) {
		if (perf_host && !perf_guest)
			sprintf(name_buffer, "perf.data.host");
		else if (!perf_host && perf_guest)
			sprintf(name_buffer, "perf.data.guest");
		else
			sprintf(name_buffer, "perf.data.kvm");
		file_name = name_buffer;
	}

	if (!strncmp(argv[0], "rec", 3))
		return __cmd_record(argc, argv);
	else if (!strncmp(argv[0], "rep", 3))
		return __cmd_report(argc, argv);
	else if (!strncmp(argv[0], "diff", 4))
		return cmd_diff(argc, argv, NULL);
	else if (!strncmp(argv[0], "top", 3))
		return cmd_top(argc, argv, NULL);
	else if (!strncmp(argv[0], "buildid-list", 12))
		return __cmd_buildid_list(argc, argv);
	else if (!strncmp(argv[0], "stat", 4))
		return kvm_cmd_stat(argc, argv);
	else
		usage_with_options(kvm_usage, kvm_options);

	return 0;
}
tools/perf/util/header.c
#define _FILE_OFFSET_BITS 64

#include "util.h"
#include <sys/types.h>
#include <byteswap.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <linux/list.h>
#include <linux/kernel.h>
#include <linux/bitops.h>
#include <sys/utsname.h>

#include "evlist.h"
#include "evsel.h"
#include "header.h"
#include "../perf.h"
#include "trace-event.h"
#include "session.h"
#include "symbol.h"
#include "debug.h"
#include "cpumap.h"
#include "pmu.h"
#include "vdso.h"

static bool no_buildid_cache = false;

static int trace_event_count;
static struct perf_trace_event_type *trace_events;

static u32 header_argc;
static const char **header_argv;

int perf_header__push_event(u64 id, const char *name)
{
	struct perf_trace_event_type *nevents;

	if (strlen(name) > MAX_EVENT_NAME)
		pr_warning("Event %s will be truncated\n", name);

	nevents = realloc(trace_events, (trace_event_count + 1) * sizeof(*trace_events));
	if (nevents == NULL)
		return -ENOMEM;
	trace_events = nevents;

	memset(&trace_events[trace_event_count], 0, sizeof(struct perf_trace_event_type));
	trace_events[trace_event_count].event_id = id;
	strncpy(trace_events[trace_event_count].name, name, MAX_EVENT_NAME - 1);
	trace_event_count++;
	return 0;
}

char *perf_header__find_event(u64 id)
{
	int i;
	for (i = 0 ; i < trace_event_count; i++) {
		if (trace_events[i].event_id == id)
			return trace_events[i].name;
	}
	return NULL;
}

/*
 * magic2 = "PERFILE2"
 * must be a numerical value to let the endianness
 * determine the memory layout. That way we are able
 * to detect endianness when reading the perf.data file
 * back.
 *
 * we check for legacy (PERFFILE) format.
 */
static const char *__perf_magic1 = "PERFFILE";
static const u64 __perf_magic2    = 0x32454c4946524550ULL;
static const u64 __perf_magic2_sw = 0x50455246494c4532ULL;

#define PERF_MAGIC	__perf_magic2

struct perf_file_attr {
	struct perf_event_attr	attr;
	struct perf_file_section	ids;
};

void perf_header__set_feat(struct perf_header *header, int feat)
{
	set_bit(feat, header->adds_features);
}

void perf_header__clear_feat(struct perf_header *header, int feat)
{
	clear_bit(feat, header->adds_features);
}

bool perf_header__has_feat(const struct perf_header *header, int feat)
{
	return test_bit(feat, header->adds_features);
}

static int do_write(int fd, const void *buf, size_t size)
{
	while (size) {
		int ret = write(fd, buf, size);

		if (ret < 0)
			return -errno;

		size -= ret;
		buf += ret;
	}

	return 0;
}

#define NAME_ALIGN 64

static int write_padded(int fd, const void *bf, size_t count,
			size_t count_aligned)
{
	static const char zero_buf[NAME_ALIGN];
	int err = do_write(fd, bf, count);

	if (!err)
		err = do_write(fd, zero_buf, count_aligned - count);

	return err;
}

static int do_write_string(int fd, const char *str)
{
	u32 len, olen;
	int ret;

	olen = strlen(str) + 1;
	len = PERF_ALIGN(olen, NAME_ALIGN);

	/* write len, incl. \0 */
	ret = do_write(fd, &len, sizeof(len));
	if (ret < 0)
		return ret;

	return write_padded(fd, str, olen, len);
}

static char *do_read_string(int fd, struct perf_header *ph)
{
	ssize_t sz, ret;
	u32 len;
	char *buf;

	sz = read(fd, &len, sizeof(len));
	if (sz < (ssize_t)sizeof(len))
		return NULL;

	if (ph->needs_swap)
		len = bswap_32(len);

	buf = malloc(len);
	if (!buf)
		return NULL;

	ret = read(fd, buf, len);
	if (ret == (ssize_t)len) {
		/*
		 * strings are padded by zeroes
		 * thus the actual strlen of buf
		 * may be less than len
		 */
		return buf;
	}

	free(buf);
	return NULL;
}

int
perf_header__set_cmdline(int argc, const char **argv)
{
	int i;

	/*
	 * If header_argv has already been set, do not override it.
	 * This allows a command to set the cmdline, parse args and
	 * then call another builtin function that implements a
	 * command -- e.g, cmd_kvm calling cmd_record.
	 */
	if (header_argv)
		return 0;

	header_argc = (u32)argc;

	/* do not include NULL termination */
	header_argv = calloc(argc, sizeof(char *));
	if (!header_argv)
		return -ENOMEM;

	/*
	 * must copy argv contents because it gets moved
	 * around during option parsing
	 */
	for (i = 0; i < argc ; i++)
		header_argv[i] = argv[i];

	return 0;
}

#define dsos__for_each_with_build_id(pos, head)	\
	list_for_each_entry(pos, head, node)	\
		if (!pos->has_build_id)		\
			continue;		\
		else

static int write_buildid(char *name, size_t name_len, u8 *build_id,
			 pid_t pid, u16 misc, int fd)
{
	int err;
	struct build_id_event b;
	size_t len;

	len = name_len + 1;
	len = PERF_ALIGN(len, NAME_ALIGN);

	memset(&b, 0, sizeof(b));
	memcpy(&b.build_id, build_id, BUILD_ID_SIZE);
	b.pid = pid;
	b.header.misc = misc;
	b.header.size = sizeof(b) + len;

	err = do_write(fd, &b, sizeof(b));
	if (err < 0)
		return err;

	return write_padded(fd, name, name_len + 1, len);
}

static int __dsos__write_buildid_table(struct list_head *head, pid_t pid,
				       u16 misc, int fd)
{
	struct dso *pos;

	dsos__for_each_with_build_id(pos, head) {
		int err;
		char *name;
		size_t name_len;

		if (!pos->hit)
			continue;

		if (is_vdso_map(pos->short_name)) {
			name = (char *) VDSO__MAP_NAME;
			name_len = sizeof(VDSO__MAP_NAME) + 1;
		} else {
			name = pos->long_name;
			name_len = pos->long_name_len + 1;
		}

		err = write_buildid(name, name_len, pos->build_id,
				    pid, misc, fd);
		if (err)
			return err;
	}

	return 0;
}

static int machine__write_buildid_table(struct machine *machine, int fd)
{
	int err;
	u16 kmisc = PERF_RECORD_MISC_KERNEL,
	    umisc = PERF_RECORD_MISC_USER;

	if (!machine__is_host(machine)) {
		kmisc = PERF_RECORD_MISC_GUEST_KERNEL;
		umisc = PERF_RECORD_MISC_GUEST_USER;
	}

	err = __dsos__write_buildid_table(&machine->kernel_dsos, machine->pid,
					  kmisc, fd);
	if (err == 0)
		err = __dsos__write_buildid_table(&machine->user_dsos,
						  machine->pid, umisc, fd);
	return err;
}

static int dsos__write_buildid_table(struct perf_header *header, int fd)
{
	struct perf_session *session = container_of(header,
			struct perf_session, header);
	struct rb_node *nd;
	int err = machine__write_buildid_table(&session->host_machine, fd);

	if (err)
		return err;

	for (nd = rb_first(&session->machines); nd; nd = rb_next(nd)) {
		struct machine *pos = rb_entry(nd, struct machine, rb_node);
		err = machine__write_buildid_table(pos, fd);
		if (err)
			break;
	}
	return err;
}

int build_id_cache__add_s(const char *sbuild_id, const char *debugdir,
			  const char *name, bool is_kallsyms, bool is_vdso)
{
	const size_t size = PATH_MAX;
	char *realname, *filename = zalloc(size),
	     *linkname = zalloc(size), *targetname;
	int len, err = -1;
	bool slash = is_kallsyms || is_vdso;

	if (is_kallsyms) {
		if (symbol_conf.kptr_restrict) {
			pr_debug("Not caching a kptr_restrict'ed /proc/kallsyms\n");
			return 0;
		}
		realname = (char *) name;
	} else
		realname = realpath(name, NULL);

	if (realname == NULL || filename == NULL || linkname == NULL)
		goto out_free;

	len = scnprintf(filename, size, "%s%s%s",
			debugdir, slash ? "/" : "",
			is_vdso ? VDSO__MAP_NAME : realname);
	if (mkdir_p(filename, 0755))
		goto out_free;

	snprintf(filename + len, size - len, "/%s", sbuild_id);

	if (access(filename, F_OK)) {
		if (is_kallsyms) {
			if (copyfile("/proc/kallsyms", filename))
				goto out_free;
		} else if (link(realname, filename) && copyfile(name, filename))
			goto out_free;
	}

	len = scnprintf(linkname, size, "%s/.build-id/%.2s",
			debugdir, sbuild_id);

	if (access(linkname, X_OK) && mkdir_p(linkname, 0755))
		goto out_free;

	snprintf(linkname + len, size - len, "/%s", sbuild_id + 2);
	targetname = filename + strlen(debugdir) - 5;
	memcpy(targetname, "../..", 5);

	if (symlink(targetname, linkname) == 0)
		err = 0;
out_free:
	if (!is_kallsyms)
		free(realname);
	free(filename);
	free(linkname);
	return err;
}

static int build_id_cache__add_b(const u8 *build_id, size_t build_id_size,
				 const char *name, const char *debugdir,
				 bool is_kallsyms, bool is_vdso)
{
	char sbuild_id[BUILD_ID_SIZE * 2 + 1];

	build_id__sprintf(build_id, build_id_size, sbuild_id);

	return build_id_cache__add_s(sbuild_id, debugdir, name,
				     is_kallsyms, is_vdso);
}

int build_id_cache__remove_s(const char *sbuild_id, const char *debugdir)
{
	const size_t size = PATH_MAX;
	char *filename = zalloc(size),
	     *linkname = zalloc(size);
	int err = -1;

	if (filename == NULL || linkname == NULL)
		goto out_free;

	snprintf(linkname, size, "%s/.build-id/%.2s/%s",
		 debugdir, sbuild_id, sbuild_id + 2);

	if (access(linkname, F_OK))
		goto out_free;

	if (readlink(linkname, filename, size - 1) < 0)
		goto out_free;

	if (unlink(linkname))
		goto out_free;

	/*
	 * Since the link is relative, we must make it absolute:
	 */
	snprintf(linkname, size, "%s/.build-id/%.2s/%s",
		 debugdir, sbuild_id, filename);

	if (unlink(linkname))
		goto out_free;

	err = 0;
out_free:
	free(filename);
	free(linkname);
	return err;
}

409 static int dso__cache_build_id(struct dso *dso, const char *debugdir) 409 static int dso__cache_build_id(struct dso *dso, const char *debugdir)
410 { 410 {
411 bool is_kallsyms = dso->kernel && dso->long_name[0] != '/'; 411 bool is_kallsyms = dso->kernel && dso->long_name[0] != '/';
412 bool is_vdso = is_vdso_map(dso->short_name); 412 bool is_vdso = is_vdso_map(dso->short_name);
413 413
414 return build_id_cache__add_b(dso->build_id, sizeof(dso->build_id), 414 return build_id_cache__add_b(dso->build_id, sizeof(dso->build_id),
415 dso->long_name, debugdir, 415 dso->long_name, debugdir,
416 is_kallsyms, is_vdso); 416 is_kallsyms, is_vdso);
417 } 417 }
418 418
419 static int __dsos__cache_build_ids(struct list_head *head, const char *debugdir) 419 static int __dsos__cache_build_ids(struct list_head *head, const char *debugdir)
420 { 420 {
421 struct dso *pos; 421 struct dso *pos;
422 int err = 0; 422 int err = 0;
423 423
424 dsos__for_each_with_build_id(pos, head) 424 dsos__for_each_with_build_id(pos, head)
425 if (dso__cache_build_id(pos, debugdir)) 425 if (dso__cache_build_id(pos, debugdir))
426 err = -1; 426 err = -1;
427 427
428 return err; 428 return err;
429 } 429 }
430 430
431 static int machine__cache_build_ids(struct machine *machine, const char *debugdir) 431 static int machine__cache_build_ids(struct machine *machine, const char *debugdir)
432 { 432 {
433 int ret = __dsos__cache_build_ids(&machine->kernel_dsos, debugdir); 433 int ret = __dsos__cache_build_ids(&machine->kernel_dsos, debugdir);
434 ret |= __dsos__cache_build_ids(&machine->user_dsos, debugdir); 434 ret |= __dsos__cache_build_ids(&machine->user_dsos, debugdir);
435 return ret; 435 return ret;
436 } 436 }
437 437
static int perf_session__cache_build_ids(struct perf_session *session)
{
	struct rb_node *nd;
	int ret;
	char debugdir[PATH_MAX];

	snprintf(debugdir, sizeof(debugdir), "%s", buildid_dir);

	if (mkdir(debugdir, 0755) != 0 && errno != EEXIST)
		return -1;

	ret = machine__cache_build_ids(&session->host_machine, debugdir);

	for (nd = rb_first(&session->machines); nd; nd = rb_next(nd)) {
		struct machine *pos = rb_entry(nd, struct machine, rb_node);
		ret |= machine__cache_build_ids(pos, debugdir);
	}
	return ret ? -1 : 0;
}

static bool machine__read_build_ids(struct machine *machine, bool with_hits)
{
	bool ret = __dsos__read_build_ids(&machine->kernel_dsos, with_hits);
	ret |= __dsos__read_build_ids(&machine->user_dsos, with_hits);
	return ret;
}

static bool perf_session__read_build_ids(struct perf_session *session, bool with_hits)
{
	struct rb_node *nd;
	bool ret = machine__read_build_ids(&session->host_machine, with_hits);

	for (nd = rb_first(&session->machines); nd; nd = rb_next(nd)) {
		struct machine *pos = rb_entry(nd, struct machine, rb_node);
		ret |= machine__read_build_ids(pos, with_hits);
	}

	return ret;
}

static int write_tracing_data(int fd, struct perf_header *h __maybe_unused,
			      struct perf_evlist *evlist)
{
	return read_tracing_data(fd, &evlist->entries);
}

static int write_build_id(int fd, struct perf_header *h,
			  struct perf_evlist *evlist __maybe_unused)
{
	struct perf_session *session;
	int err;

	session = container_of(h, struct perf_session, header);

	if (!perf_session__read_build_ids(session, true))
		return -1;

	err = dsos__write_buildid_table(h, fd);
	if (err < 0) {
		pr_debug("failed to write buildid table\n");
		return err;
	}
	if (!no_buildid_cache)
		perf_session__cache_build_ids(session);

	return 0;
}

static int write_hostname(int fd, struct perf_header *h __maybe_unused,
			  struct perf_evlist *evlist __maybe_unused)
{
	struct utsname uts;
	int ret;

	ret = uname(&uts);
	if (ret < 0)
		return -1;

	return do_write_string(fd, uts.nodename);
}

static int write_osrelease(int fd, struct perf_header *h __maybe_unused,
			   struct perf_evlist *evlist __maybe_unused)
{
	struct utsname uts;
	int ret;

	ret = uname(&uts);
	if (ret < 0)
		return -1;

	return do_write_string(fd, uts.release);
}

static int write_arch(int fd, struct perf_header *h __maybe_unused,
		      struct perf_evlist *evlist __maybe_unused)
{
	struct utsname uts;
	int ret;

	ret = uname(&uts);
	if (ret < 0)
		return -1;

	return do_write_string(fd, uts.machine);
}

static int write_version(int fd, struct perf_header *h __maybe_unused,
			 struct perf_evlist *evlist __maybe_unused)
{
	return do_write_string(fd, perf_version_string);
}

static int write_cpudesc(int fd, struct perf_header *h __maybe_unused,
			 struct perf_evlist *evlist __maybe_unused)
{
#ifndef CPUINFO_PROC
#define CPUINFO_PROC NULL
#endif
	FILE *file;
	char *buf = NULL;
	char *s, *p;
	const char *search = CPUINFO_PROC;
	size_t len = 0;
	int ret = -1;

	if (!search)
		return -1;

	file = fopen("/proc/cpuinfo", "r");
	if (!file)
		return -1;

	while (getline(&buf, &len, file) > 0) {
		ret = strncmp(buf, search, strlen(search));
		if (!ret)
			break;
	}

	if (ret)
		goto done;

	s = buf;

	p = strchr(buf, ':');
	if (p && *(p+1) == ' ' && *(p+2))
		s = p + 2;
	p = strchr(s, '\n');
	if (p)
		*p = '\0';

	/* squash extra space characters (branding string) */
	p = s;
	while (*p) {
		if (isspace(*p)) {
			char *r = p + 1;
			char *q = r;
			*p = ' ';
			while (*q && isspace(*q))
				q++;
			if (q != (p+1))
				while ((*r++ = *q++));
		}
		p++;
	}
	ret = do_write_string(fd, s);
done:
	free(buf);
	fclose(file);
	return ret;
}

static int write_nrcpus(int fd, struct perf_header *h __maybe_unused,
			struct perf_evlist *evlist __maybe_unused)
{
	long nr;
	u32 nrc, nra;
	int ret;

	nr = sysconf(_SC_NPROCESSORS_CONF);
	if (nr < 0)
		return -1;

	nrc = (u32)(nr & UINT_MAX);

	nr = sysconf(_SC_NPROCESSORS_ONLN);
	if (nr < 0)
		return -1;

	nra = (u32)(nr & UINT_MAX);

	ret = do_write(fd, &nrc, sizeof(nrc));
	if (ret < 0)
		return ret;

	return do_write(fd, &nra, sizeof(nra));
}

static int write_event_desc(int fd, struct perf_header *h __maybe_unused,
			    struct perf_evlist *evlist)
{
	struct perf_evsel *evsel;
	u32 nre, nri, sz;
	int ret;

	nre = evlist->nr_entries;

	/*
	 * write number of events
	 */
	ret = do_write(fd, &nre, sizeof(nre));
	if (ret < 0)
		return ret;

	/*
	 * size of perf_event_attr struct
	 */
	sz = (u32)sizeof(evsel->attr);
	ret = do_write(fd, &sz, sizeof(sz));
	if (ret < 0)
		return ret;

	list_for_each_entry(evsel, &evlist->entries, node) {

		ret = do_write(fd, &evsel->attr, sz);
		if (ret < 0)
			return ret;
		/*
		 * write number of unique ids per event
		 * there is one id per instance of an event
		 *
		 * copy into an nri to be independent of the
		 * type of ids,
		 */
		nri = evsel->ids;
		ret = do_write(fd, &nri, sizeof(nri));
		if (ret < 0)
			return ret;

		/*
		 * write event string as passed on cmdline
		 */
		ret = do_write_string(fd, perf_evsel__name(evsel));
		if (ret < 0)
			return ret;
		/*
		 * write unique ids for this event
		 */
		ret = do_write(fd, evsel->id, evsel->ids * sizeof(u64));
		if (ret < 0)
			return ret;
	}
	return 0;
}

static int write_cmdline(int fd, struct perf_header *h __maybe_unused,
			 struct perf_evlist *evlist __maybe_unused)
{
	char buf[MAXPATHLEN];
	char proc[32];
	u32 i, n;
	int ret;

	/*
	 * actual path to perf binary
	 */
	sprintf(proc, "/proc/%d/exe", getpid());
	ret = readlink(proc, buf, sizeof(buf));
	if (ret <= 0)
		return -1;

	/* readlink() does not add null termination */
	buf[ret] = '\0';

	/* account for binary path */
	n = header_argc + 1;

	ret = do_write(fd, &n, sizeof(n));
	if (ret < 0)
		return ret;

	ret = do_write_string(fd, buf);
	if (ret < 0)
		return ret;

	for (i = 0 ; i < header_argc; i++) {
		ret = do_write_string(fd, header_argv[i]);
		if (ret < 0)
			return ret;
	}
	return 0;
}

#define CORE_SIB_FMT \
	"/sys/devices/system/cpu/cpu%d/topology/core_siblings_list"
#define THRD_SIB_FMT \
	"/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list"

struct cpu_topo {
	u32 core_sib;
	u32 thread_sib;
	char **core_siblings;
	char **thread_siblings;
};

static int build_cpu_topo(struct cpu_topo *tp, int cpu)
{
	FILE *fp;
	char filename[MAXPATHLEN];
	char *buf = NULL, *p;
	size_t len = 0;
	u32 i = 0;
	int ret = -1;

	sprintf(filename, CORE_SIB_FMT, cpu);
	fp = fopen(filename, "r");
	if (!fp)
		return -1;

	if (getline(&buf, &len, fp) <= 0)
		goto done;

	fclose(fp);

	p = strchr(buf, '\n');
	if (p)
		*p = '\0';

	for (i = 0; i < tp->core_sib; i++) {
		if (!strcmp(buf, tp->core_siblings[i]))
			break;
	}
	if (i == tp->core_sib) {
		tp->core_siblings[i] = buf;
		tp->core_sib++;
		buf = NULL;
		len = 0;
	}

	sprintf(filename, THRD_SIB_FMT, cpu);
	fp = fopen(filename, "r");
	if (!fp)
		goto done;

	if (getline(&buf, &len, fp) <= 0)
		goto done;

	p = strchr(buf, '\n');
	if (p)
		*p = '\0';

	for (i = 0; i < tp->thread_sib; i++) {
		if (!strcmp(buf, tp->thread_siblings[i]))
			break;
	}
	if (i == tp->thread_sib) {
		tp->thread_siblings[i] = buf;
		tp->thread_sib++;
		buf = NULL;
	}
	ret = 0;
done:
	if (fp)
		fclose(fp);
	free(buf);
	return ret;
}

static void free_cpu_topo(struct cpu_topo *tp)
{
	u32 i;

	if (!tp)
		return;

	for (i = 0 ; i < tp->core_sib; i++)
		free(tp->core_siblings[i]);

	for (i = 0 ; i < tp->thread_sib; i++)
		free(tp->thread_siblings[i]);

	free(tp);
}

static struct cpu_topo *build_cpu_topology(void)
{
	struct cpu_topo *tp;
	void *addr;
	u32 nr, i;
	size_t sz;
	long ncpus;
	int ret = -1;

	ncpus = sysconf(_SC_NPROCESSORS_CONF);
	if (ncpus < 0)
		return NULL;

	nr = (u32)(ncpus & UINT_MAX);

	sz = nr * sizeof(char *);

	addr = calloc(1, sizeof(*tp) + 2 * sz);
	if (!addr)
		return NULL;

	tp = addr;

	addr += sizeof(*tp);
	tp->core_siblings = addr;
	addr += sz;
	tp->thread_siblings = addr;

	for (i = 0; i < nr; i++) {
		ret = build_cpu_topo(tp, i);
		if (ret < 0)
			break;
	}
	if (ret) {
		free_cpu_topo(tp);
		tp = NULL;
	}
	return tp;
}

static int write_cpu_topology(int fd, struct perf_header *h __maybe_unused,
			      struct perf_evlist *evlist __maybe_unused)
{
	struct cpu_topo *tp;
	u32 i;
	int ret;

	tp = build_cpu_topology();
	if (!tp)
		return -1;

	ret = do_write(fd, &tp->core_sib, sizeof(tp->core_sib));
	if (ret < 0)
		goto done;

	for (i = 0; i < tp->core_sib; i++) {
		ret = do_write_string(fd, tp->core_siblings[i]);
		if (ret < 0)
			goto done;
	}
	ret = do_write(fd, &tp->thread_sib, sizeof(tp->thread_sib));
	if (ret < 0)
		goto done;

	for (i = 0; i < tp->thread_sib; i++) {
		ret = do_write_string(fd, tp->thread_siblings[i]);
		if (ret < 0)
			break;
	}
done:
	free_cpu_topo(tp);
	return ret;
}

static int write_total_mem(int fd, struct perf_header *h __maybe_unused,
			   struct perf_evlist *evlist __maybe_unused)
{
	char *buf = NULL;
	FILE *fp;
	size_t len = 0;
	int ret = -1, n;
	uint64_t mem;

	fp = fopen("/proc/meminfo", "r");
	if (!fp)
		return -1;

	while (getline(&buf, &len, fp) > 0) {
		ret = strncmp(buf, "MemTotal:", 9);
		if (!ret)
			break;
	}
	if (!ret) {
		n = sscanf(buf, "%*s %"PRIu64, &mem);
		if (n == 1)
			ret = do_write(fd, &mem, sizeof(mem));
	}
	free(buf);
	fclose(fp);
	return ret;
}

static int write_topo_node(int fd, int node)
{
	char str[MAXPATHLEN];
	char field[32];
	char *buf = NULL, *p;
	size_t len = 0;
	FILE *fp;
	u64 mem_total, mem_free, mem;
	int ret = -1;

	sprintf(str, "/sys/devices/system/node/node%d/meminfo", node);
	fp = fopen(str, "r");
	if (!fp)
		return -1;

	while (getline(&buf, &len, fp) > 0) {
		/* skip over invalid lines */
		if (!strchr(buf, ':'))
			continue;
		if (sscanf(buf, "%*s %*d %s %"PRIu64, field, &mem) != 2)
			goto done;
		if (!strcmp(field, "MemTotal:"))
			mem_total = mem;
		if (!strcmp(field, "MemFree:"))
			mem_free = mem;
	}

	fclose(fp);

	ret = do_write(fd, &mem_total, sizeof(u64));
	if (ret)
		goto done;

	ret = do_write(fd, &mem_free, sizeof(u64));
	if (ret)
		goto done;

	ret = -1;
	sprintf(str, "/sys/devices/system/node/node%d/cpulist", node);

	fp = fopen(str, "r");
	if (!fp)
		goto done;

	if (getline(&buf, &len, fp) <= 0)
		goto done;

	p = strchr(buf, '\n');
	if (p)
		*p = '\0';

	ret = do_write_string(fd, buf);
done:
	free(buf);
	fclose(fp);
	return ret;
}

static int write_numa_topology(int fd, struct perf_header *h __maybe_unused,
			       struct perf_evlist *evlist __maybe_unused)
{
	char *buf = NULL;
	size_t len = 0;
	FILE *fp;
	struct cpu_map *node_map = NULL;
	char *c;
	u32 nr, i, j;
	int ret = -1;

	fp = fopen("/sys/devices/system/node/online", "r");
	if (!fp)
		return -1;

	if (getline(&buf, &len, fp) <= 0)
		goto done;

	c = strchr(buf, '\n');
	if (c)
		*c = '\0';

	node_map = cpu_map__new(buf);
	if (!node_map)
		goto done;

	nr = (u32)node_map->nr;

	ret = do_write(fd, &nr, sizeof(nr));
	if (ret < 0)
		goto done;

	for (i = 0; i < nr; i++) {
		j = (u32)node_map->map[i];
		ret = do_write(fd, &j, sizeof(j));
		if (ret < 0)
			break;

		ret = write_topo_node(fd, i);
		if (ret < 0)
			break;
	}
done:
	free(buf);
	fclose(fp);
	free(node_map);
	return ret;
}

/*
 * File format:
 *
 * struct pmu_mappings {
 *	u32	pmu_num;
 *	struct pmu_map {
 *		u32	type;
 *		char	name[];
 *	}[pmu_num];
 * };
 */

static int write_pmu_mappings(int fd, struct perf_header *h __maybe_unused,
			      struct perf_evlist *evlist __maybe_unused)
{
	struct perf_pmu *pmu = NULL;
	off_t offset = lseek(fd, 0, SEEK_CUR);
	__u32 pmu_num = 0;

	/* write real pmu_num later */
	do_write(fd, &pmu_num, sizeof(pmu_num));

	while ((pmu = perf_pmu__scan(pmu))) {
		if (!pmu->name)
			continue;
		pmu_num++;
		do_write(fd, &pmu->type, sizeof(pmu->type));
		do_write_string(fd, pmu->name);
	}

	if (pwrite(fd, &pmu_num, sizeof(pmu_num), offset) != sizeof(pmu_num)) {
		/* discard all */
		lseek(fd, offset, SEEK_SET);
		return -1;
	}

	return 0;
}

/*
 * default get_cpuid(): nothing gets recorded
 * actual implementation must be in arch/$(ARCH)/util/header.c
 */
int __attribute__ ((weak)) get_cpuid(char *buffer __maybe_unused,
				     size_t sz __maybe_unused)
{
	return -1;
}

static int write_cpuid(int fd, struct perf_header *h __maybe_unused,
		       struct perf_evlist *evlist __maybe_unused)
{
	char buffer[64];
	int ret;

	ret = get_cpuid(buffer, sizeof(buffer));
	if (!ret)
		goto write_it;

	return -1;
write_it:
	return do_write_string(fd, buffer);
}

static int write_branch_stack(int fd __maybe_unused,
			      struct perf_header *h __maybe_unused,
			      struct perf_evlist *evlist __maybe_unused)
{
	return 0;
}

static void print_hostname(struct perf_header *ph, int fd, FILE *fp)
{
	char *str = do_read_string(fd, ph);
	fprintf(fp, "# hostname : %s\n", str);
	free(str);
}

static void print_osrelease(struct perf_header *ph, int fd, FILE *fp)
{
	char *str = do_read_string(fd, ph);
	fprintf(fp, "# os release : %s\n", str);
	free(str);
}

static void print_arch(struct perf_header *ph, int fd, FILE *fp)
{
	char *str = do_read_string(fd, ph);
	fprintf(fp, "# arch : %s\n", str);
	free(str);
}

static void print_cpudesc(struct perf_header *ph, int fd, FILE *fp)
{
	char *str = do_read_string(fd, ph);
	fprintf(fp, "# cpudesc : %s\n", str);
	free(str);
}

static void print_nrcpus(struct perf_header *ph, int fd, FILE *fp)
{
	ssize_t ret;
	u32 nr;

	ret = read(fd, &nr, sizeof(nr));
	if (ret != (ssize_t)sizeof(nr))
		nr = -1; /* interpreted as error */

	if (ph->needs_swap)
		nr = bswap_32(nr);

	fprintf(fp, "# nrcpus online : %u\n", nr);

	ret = read(fd, &nr, sizeof(nr));
	if (ret != (ssize_t)sizeof(nr))
		nr = -1; /* interpreted as error */

	if (ph->needs_swap)
		nr = bswap_32(nr);

	fprintf(fp, "# nrcpus avail : %u\n", nr);
}

static void print_version(struct perf_header *ph, int fd, FILE *fp)
{
	char *str = do_read_string(fd, ph);
	fprintf(fp, "# perf version : %s\n", str);
	free(str);
}

static void print_cmdline(struct perf_header *ph, int fd, FILE *fp)
{
	ssize_t ret;
	char *str;
	u32 nr, i;

	ret = read(fd, &nr, sizeof(nr));
	if (ret != (ssize_t)sizeof(nr))
		return;

	if (ph->needs_swap)
		nr = bswap_32(nr);

	fprintf(fp, "# cmdline : ");

	for (i = 0; i < nr; i++) {
		str = do_read_string(fd, ph);
		fprintf(fp, "%s ", str);
		free(str);
	}
	fputc('\n', fp);
}

static void print_cpu_topology(struct perf_header *ph, int fd, FILE *fp)
{
	ssize_t ret;
	u32 nr, i;
	char *str;

	ret = read(fd, &nr, sizeof(nr));
	if (ret != (ssize_t)sizeof(nr))
		return;

	if (ph->needs_swap)
		nr = bswap_32(nr);

	for (i = 0; i < nr; i++) {
		str = do_read_string(fd, ph);
		fprintf(fp, "# sibling cores : %s\n", str);
		free(str);
	}

	ret = read(fd, &nr, sizeof(nr));
	if (ret != (ssize_t)sizeof(nr))
		return;

	if (ph->needs_swap)
		nr = bswap_32(nr);

	for (i = 0; i < nr; i++) {
		str = do_read_string(fd, ph);
		fprintf(fp, "# sibling threads : %s\n", str);
		free(str);
	}
}

1220 static void free_event_desc(struct perf_evsel *events) 1220 static void free_event_desc(struct perf_evsel *events)
1221 { 1221 {
1222 struct perf_evsel *evsel; 1222 struct perf_evsel *evsel;
1223 1223
1224 if (!events) 1224 if (!events)
1225 return; 1225 return;
1226 1226
1227 for (evsel = events; evsel->attr.size; evsel++) { 1227 for (evsel = events; evsel->attr.size; evsel++) {
1228 if (evsel->name) 1228 if (evsel->name)
1229 free(evsel->name); 1229 free(evsel->name);
1230 if (evsel->id) 1230 if (evsel->id)
1231 free(evsel->id); 1231 free(evsel->id);
1232 } 1232 }
1233 1233
1234 free(events); 1234 free(events);
1235 } 1235 }
1236 1236
1237 static struct perf_evsel * 1237 static struct perf_evsel *
1238 read_event_desc(struct perf_header *ph, int fd) 1238 read_event_desc(struct perf_header *ph, int fd)
1239 { 1239 {
1240 struct perf_evsel *evsel, *events = NULL; 1240 struct perf_evsel *evsel, *events = NULL;
1241 u64 *id; 1241 u64 *id;
1242 void *buf = NULL; 1242 void *buf = NULL;
1243 u32 nre, sz, nr, i, j; 1243 u32 nre, sz, nr, i, j;
1244 ssize_t ret; 1244 ssize_t ret;
1245 size_t msz; 1245 size_t msz;
1246 1246
1247 /* number of events */ 1247 /* number of events */
1248 ret = read(fd, &nre, sizeof(nre)); 1248 ret = read(fd, &nre, sizeof(nre));
1249 if (ret != (ssize_t)sizeof(nre)) 1249 if (ret != (ssize_t)sizeof(nre))
1250 goto error; 1250 goto error;
1251 1251
1252 if (ph->needs_swap) 1252 if (ph->needs_swap)
1253 nre = bswap_32(nre); 1253 nre = bswap_32(nre);
1254 1254
1255 ret = read(fd, &sz, sizeof(sz)); 1255 ret = read(fd, &sz, sizeof(sz));
1256 if (ret != (ssize_t)sizeof(sz)) 1256 if (ret != (ssize_t)sizeof(sz))
1257 goto error; 1257 goto error;
1258 1258
1259 if (ph->needs_swap) 1259 if (ph->needs_swap)
1260 sz = bswap_32(sz); 1260 sz = bswap_32(sz);
1261 1261
1262 /* buffer to hold on file attr struct */ 1262 /* buffer to hold on file attr struct */
1263 buf = malloc(sz); 1263 buf = malloc(sz);
1264 if (!buf) 1264 if (!buf)
1265 goto error; 1265 goto error;
1266 1266
1267 /* the last event terminates with evsel->attr.size == 0: */ 1267 /* the last event terminates with evsel->attr.size == 0: */
1268 events = calloc(nre + 1, sizeof(*events)); 1268 events = calloc(nre + 1, sizeof(*events));
1269 if (!events) 1269 if (!events)
1270 goto error; 1270 goto error;
1271 1271
1272 msz = sizeof(evsel->attr); 1272 msz = sizeof(evsel->attr);
1273 if (sz < msz) 1273 if (sz < msz)
1274 msz = sz; 1274 msz = sz;
1275 1275
1276 for (i = 0, evsel = events; i < nre; evsel++, i++) { 1276 for (i = 0, evsel = events; i < nre; evsel++, i++) {
1277 evsel->idx = i; 1277 evsel->idx = i;
1278 1278
1279 /* 1279 /*
1280 * must read entire on-file attr struct to 1280 * must read entire on-file attr struct to
1281 * sync up with layout. 1281 * sync up with layout.
1282 */ 1282 */
1283 ret = read(fd, buf, sz); 1283 ret = read(fd, buf, sz);
1284 if (ret != (ssize_t)sz) 1284 if (ret != (ssize_t)sz)
1285 goto error; 1285 goto error;
1286 1286
1287 if (ph->needs_swap) 1287 if (ph->needs_swap)
1288 perf_event__attr_swap(buf); 1288 perf_event__attr_swap(buf);
1289 1289
1290 memcpy(&evsel->attr, buf, msz); 1290 memcpy(&evsel->attr, buf, msz);
1291 1291
1292 ret = read(fd, &nr, sizeof(nr)); 1292 ret = read(fd, &nr, sizeof(nr));
1293 if (ret != (ssize_t)sizeof(nr)) 1293 if (ret != (ssize_t)sizeof(nr))
1294 goto error; 1294 goto error;
1295 1295
1296 if (ph->needs_swap) 1296 if (ph->needs_swap)
1297 nr = bswap_32(nr); 1297 nr = bswap_32(nr);
1298 1298
1299 evsel->name = do_read_string(fd, ph); 1299 evsel->name = do_read_string(fd, ph);
1300 1300
1301 if (!nr) 1301 if (!nr)
1302 continue; 1302 continue;
1303 1303
1304 id = calloc(nr, sizeof(*id)); 1304 id = calloc(nr, sizeof(*id));
1305 if (!id) 1305 if (!id)
1306 goto error; 1306 goto error;
1307 evsel->ids = nr; 1307 evsel->ids = nr;
1308 evsel->id = id; 1308 evsel->id = id;
1309 1309
1310 for (j = 0 ; j < nr; j++) { 1310 for (j = 0 ; j < nr; j++) {
1311 ret = read(fd, id, sizeof(*id)); 1311 ret = read(fd, id, sizeof(*id));
1312 if (ret != (ssize_t)sizeof(*id)) 1312 if (ret != (ssize_t)sizeof(*id))
1313 goto error; 1313 goto error;
1314 if (ph->needs_swap) 1314 if (ph->needs_swap)
1315 *id = bswap_64(*id); 1315 *id = bswap_64(*id);
1316 id++; 1316 id++;
1317 } 1317 }
1318 } 1318 }
1319 out: 1319 out:
1320 if (buf) 1320 if (buf)
1321 free(buf); 1321 free(buf);
1322 return events; 1322 return events;
1323 error: 1323 error:
1324 if (events) 1324 if (events)
1325 free_event_desc(events); 1325 free_event_desc(events);
1326 events = NULL; 1326 events = NULL;
1327 goto out; 1327 goto out;
1328 } 1328 }
 
 static void print_event_desc(struct perf_header *ph, int fd, FILE *fp)
 {
 	struct perf_evsel *evsel, *events = read_event_desc(ph, fd);
 	u32 j;
 	u64 *id;
 
 	if (!events) {
 		fprintf(fp, "# event desc: not available or unable to read\n");
 		return;
 	}
 
 	for (evsel = events; evsel->attr.size; evsel++) {
 		fprintf(fp, "# event : name = %s, ", evsel->name);
 
 		fprintf(fp, "type = %d, config = 0x%"PRIx64
 			", config1 = 0x%"PRIx64", config2 = 0x%"PRIx64,
 			evsel->attr.type,
 			(u64)evsel->attr.config,
 			(u64)evsel->attr.config1,
 			(u64)evsel->attr.config2);
 
 		fprintf(fp, ", excl_usr = %d, excl_kern = %d",
 			evsel->attr.exclude_user,
 			evsel->attr.exclude_kernel);
 
 		fprintf(fp, ", excl_host = %d, excl_guest = %d",
 			evsel->attr.exclude_host,
 			evsel->attr.exclude_guest);
 
 		fprintf(fp, ", precise_ip = %d", evsel->attr.precise_ip);
 
 		if (evsel->ids) {
 			fprintf(fp, ", id = {");
 			for (j = 0, id = evsel->id; j < evsel->ids; j++, id++) {
 				if (j)
 					fputc(',', fp);
 				fprintf(fp, " %"PRIu64, *id);
 			}
 			fprintf(fp, " }");
 		}
 
 		fputc('\n', fp);
 	}
 
 	free_event_desc(events);
 }
 
 static void print_total_mem(struct perf_header *h __maybe_unused, int fd,
 			    FILE *fp)
 {
 	uint64_t mem;
 	ssize_t ret;
 
 	ret = read(fd, &mem, sizeof(mem));
 	if (ret != sizeof(mem))
 		goto error;
 
 	if (h->needs_swap)
 		mem = bswap_64(mem);
 
 	fprintf(fp, "# total memory : %"PRIu64" kB\n", mem);
 	return;
 error:
 	fprintf(fp, "# total memory : unknown\n");
 }
 
 static void print_numa_topology(struct perf_header *h __maybe_unused, int fd,
 				FILE *fp)
 {
 	ssize_t ret;
 	u32 nr, c, i;
 	char *str;
 	uint64_t mem_total, mem_free;
 
 	/* nr nodes */
 	ret = read(fd, &nr, sizeof(nr));
 	if (ret != (ssize_t)sizeof(nr))
 		goto error;
 
 	if (h->needs_swap)
 		nr = bswap_32(nr);
 
 	for (i = 0; i < nr; i++) {
 
 		/* node number */
 		ret = read(fd, &c, sizeof(c));
 		if (ret != (ssize_t)sizeof(c))
 			goto error;
 
 		if (h->needs_swap)
 			c = bswap_32(c);
 
 		ret = read(fd, &mem_total, sizeof(u64));
 		if (ret != sizeof(u64))
 			goto error;
 
 		ret = read(fd, &mem_free, sizeof(u64));
 		if (ret != sizeof(u64))
 			goto error;
 
 		if (h->needs_swap) {
 			mem_total = bswap_64(mem_total);
 			mem_free = bswap_64(mem_free);
 		}
 
 		fprintf(fp, "# node%u meminfo : total = %"PRIu64" kB,"
 			    " free = %"PRIu64" kB\n",
 			c,
 			mem_total,
 			mem_free);
 
 		str = do_read_string(fd, h);
 		fprintf(fp, "# node%u cpu list : %s\n", c, str);
 		free(str);
 	}
 	return;
 error:
 	fprintf(fp, "# numa topology : not available\n");
 }
 
 static void print_cpuid(struct perf_header *ph, int fd, FILE *fp)
 {
 	char *str = do_read_string(fd, ph);
 	fprintf(fp, "# cpuid : %s\n", str);
 	free(str);
 }
 
 static void print_branch_stack(struct perf_header *ph __maybe_unused,
 			       int fd __maybe_unused,
 			       FILE *fp)
 {
 	fprintf(fp, "# contains samples with branch stack\n");
 }
 
 static void print_pmu_mappings(struct perf_header *ph, int fd, FILE *fp)
 {
 	const char *delimiter = "# pmu mappings: ";
 	char *name;
 	int ret;
 	u32 pmu_num;
 	u32 type;
 
 	ret = read(fd, &pmu_num, sizeof(pmu_num));
 	if (ret != sizeof(pmu_num))
 		goto error;
 
 	if (ph->needs_swap)
 		pmu_num = bswap_32(pmu_num);
 
 	if (!pmu_num) {
 		fprintf(fp, "# pmu mappings: not available\n");
 		return;
 	}
 
 	while (pmu_num) {
 		if (read(fd, &type, sizeof(type)) != sizeof(type))
 			break;
 		if (ph->needs_swap)
 			type = bswap_32(type);
 
 		name = do_read_string(fd, ph);
 		if (!name)
 			break;
 		pmu_num--;
 		fprintf(fp, "%s%s = %" PRIu32, delimiter, name, type);
 		free(name);
 		delimiter = ", ";
 	}
 
 	fprintf(fp, "\n");
 
 	if (!pmu_num)
 		return;
 error:
 	fprintf(fp, "# pmu mappings: unable to read\n");
 }
 
 static int __event_process_build_id(struct build_id_event *bev,
 				    char *filename,
 				    struct perf_session *session)
 {
 	int err = -1;
 	struct list_head *head;
 	struct machine *machine;
 	u16 misc;
 	struct dso *dso;
 	enum dso_kernel_type dso_type;
 
 	machine = perf_session__findnew_machine(session, bev->pid);
 	if (!machine)
 		goto out;
 
 	misc = bev->header.misc & PERF_RECORD_MISC_CPUMODE_MASK;
 
 	switch (misc) {
 	case PERF_RECORD_MISC_KERNEL:
 		dso_type = DSO_TYPE_KERNEL;
 		head = &machine->kernel_dsos;
 		break;
 	case PERF_RECORD_MISC_GUEST_KERNEL:
 		dso_type = DSO_TYPE_GUEST_KERNEL;
 		head = &machine->kernel_dsos;
 		break;
 	case PERF_RECORD_MISC_USER:
 	case PERF_RECORD_MISC_GUEST_USER:
 		dso_type = DSO_TYPE_USER;
 		head = &machine->user_dsos;
 		break;
 	default:
 		goto out;
 	}
 
 	dso = __dsos__findnew(head, filename);
 	if (dso != NULL) {
 		char sbuild_id[BUILD_ID_SIZE * 2 + 1];
 
 		dso__set_build_id(dso, &bev->build_id);
 
 		if (filename[0] == '[')
 			dso->kernel = dso_type;
 
 		build_id__sprintf(dso->build_id, sizeof(dso->build_id),
 				  sbuild_id);
 		pr_debug("build id event received for %s: %s\n",
 			 dso->long_name, sbuild_id);
 	}
 
 	err = 0;
 out:
 	return err;
 }
 
 static int perf_header__read_build_ids_abi_quirk(struct perf_header *header,
 						 int input, u64 offset, u64 size)
 {
 	struct perf_session *session = container_of(header, struct perf_session, header);
 	struct {
 		struct perf_event_header header;
 		u8 build_id[PERF_ALIGN(BUILD_ID_SIZE, sizeof(u64))];
 		char filename[0];
 	} old_bev;
 	struct build_id_event bev;
 	char filename[PATH_MAX];
 	u64 limit = offset + size;
 
 	while (offset < limit) {
 		ssize_t len;
 
 		if (read(input, &old_bev, sizeof(old_bev)) != sizeof(old_bev))
 			return -1;
 
 		if (header->needs_swap)
 			perf_event_header__bswap(&old_bev.header);
 
 		len = old_bev.header.size - sizeof(old_bev);
 		if (read(input, filename, len) != len)
 			return -1;
 
 		bev.header = old_bev.header;
 
 		/*
 		 * As the pid is the missing value, we need to fill
 		 * it properly. The header.misc value give us nice hint.
 		 */
 		bev.pid = HOST_KERNEL_ID;
 		if (bev.header.misc == PERF_RECORD_MISC_GUEST_USER ||
 		    bev.header.misc == PERF_RECORD_MISC_GUEST_KERNEL)
 			bev.pid = DEFAULT_GUEST_KERNEL_ID;
 
 		memcpy(bev.build_id, old_bev.build_id, sizeof(bev.build_id));
 		__event_process_build_id(&bev, filename, session);
 
 		offset += bev.header.size;
 	}
 
 	return 0;
 }
 
 static int perf_header__read_build_ids(struct perf_header *header,
 				       int input, u64 offset, u64 size)
 {
 	struct perf_session *session = container_of(header, struct perf_session, header);
 	struct build_id_event bev;
 	char filename[PATH_MAX];
 	u64 limit = offset + size, orig_offset = offset;
 	int err = -1;
 
 	while (offset < limit) {
 		ssize_t len;
 
 		if (read(input, &bev, sizeof(bev)) != sizeof(bev))
 			goto out;
 
 		if (header->needs_swap)
 			perf_event_header__bswap(&bev.header);
 
 		len = bev.header.size - sizeof(bev);
 		if (read(input, filename, len) != len)
 			goto out;
 		/*
 		 * The a1645ce1 changeset:
 		 *
 		 * "perf: 'perf kvm' tool for monitoring guest performance from host"
 		 *
 		 * Added a field to struct build_id_event that broke the file
 		 * format.
 		 *
 		 * Since the kernel build-id is the first entry, process the
 		 * table using the old format if the well known
 		 * '[kernel.kallsyms]' string for the kernel build-id has the
 		 * first 4 characters chopped off (where the pid_t sits).
 		 */
 		if (memcmp(filename, "nel.kallsyms]", 13) == 0) {
 			if (lseek(input, orig_offset, SEEK_SET) == (off_t)-1)
 				return -1;
 			return perf_header__read_build_ids_abi_quirk(header, input, offset, size);
 		}
 
 		__event_process_build_id(&bev, filename, session);
 
 		offset += bev.header.size;
 	}
 	err = 0;
 out:
 	return err;
 }
1656 1656
1657 static int process_tracing_data(struct perf_file_section *section 1657 static int process_tracing_data(struct perf_file_section *section
1658 __maybe_unused, 1658 __maybe_unused,
1659 struct perf_header *ph __maybe_unused, 1659 struct perf_header *ph __maybe_unused,
1660 int feat __maybe_unused, int fd, void *data) 1660 int feat __maybe_unused, int fd, void *data)
1661 { 1661 {
1662 trace_report(fd, data, false); 1662 trace_report(fd, data, false);
1663 return 0; 1663 return 0;
1664 } 1664 }
1665 1665
1666 static int process_build_id(struct perf_file_section *section, 1666 static int process_build_id(struct perf_file_section *section,
1667 struct perf_header *ph, 1667 struct perf_header *ph,
1668 int feat __maybe_unused, int fd, 1668 int feat __maybe_unused, int fd,
1669 void *data __maybe_unused) 1669 void *data __maybe_unused)
1670 { 1670 {
1671 if (perf_header__read_build_ids(ph, fd, section->offset, section->size)) 1671 if (perf_header__read_build_ids(ph, fd, section->offset, section->size))
1672 pr_debug("Failed to read buildids, continuing...\n"); 1672 pr_debug("Failed to read buildids, continuing...\n");
1673 return 0; 1673 return 0;
1674 } 1674 }
1675 1675
+static char *read_cpuid(struct perf_header *ph, int fd)
+{
+	return do_read_string(fd, ph);
+}
+
 static struct perf_evsel *
 perf_evlist__find_by_index(struct perf_evlist *evlist, int idx)
 {
 	struct perf_evsel *evsel;
 
 	list_for_each_entry(evsel, &evlist->entries, node) {
 		if (evsel->idx == idx)
 			return evsel;
 	}
 
 	return NULL;
 }
 
 static void
 perf_evlist__set_event_name(struct perf_evlist *evlist, struct perf_evsel *event)
 {
 	struct perf_evsel *evsel;
 
 	if (!event->name)
 		return;
 
 	evsel = perf_evlist__find_by_index(evlist, event->idx);
 	if (!evsel)
 		return;
 
 	if (evsel->name)
 		return;
 
 	evsel->name = strdup(event->name);
 }
 
 static int
 process_event_desc(struct perf_file_section *section __maybe_unused,
 		   struct perf_header *header, int feat __maybe_unused, int fd,
 		   void *data __maybe_unused)
 {
 	struct perf_session *session = container_of(header, struct perf_session, header);
 	struct perf_evsel *evsel, *events = read_event_desc(header, fd);
 
 	if (!events)
 		return 0;
 
 	for (evsel = events; evsel->attr.size; evsel++)
 		perf_evlist__set_event_name(session->evlist, evsel);
 
 	free_event_desc(events);
 
 	return 0;
 }
 
 struct feature_ops {
 	int (*write)(int fd, struct perf_header *h, struct perf_evlist *evlist);
 	void (*print)(struct perf_header *h, int fd, FILE *fp);
+	char *(*read)(struct perf_header *h, int fd);
 	int (*process)(struct perf_file_section *section,
 		       struct perf_header *h, int feat, int fd, void *data);
 	const char *name;
 	bool full_only;
 };
 
 #define FEAT_OPA(n, func) \
 	[n] = { .name = #n, .write = write_##func, .print = print_##func }
 #define FEAT_OPP(n, func) \
 	[n] = { .name = #n, .write = write_##func, .print = print_##func, \
 		.process = process_##func }
 #define FEAT_OPF(n, func) \
 	[n] = { .name = #n, .write = write_##func, .print = print_##func, \
 		.full_only = true }
+#define FEAT_OPA_R(n, func) \
+	[n] = { .name = #n, .write = write_##func, .print = print_##func, \
+		.read = read_##func }
 
 /* feature_ops not implemented: */
 #define print_tracing_data	NULL
 #define print_build_id		NULL
 
 static const struct feature_ops feat_ops[HEADER_LAST_FEATURE] = {
 	FEAT_OPP(HEADER_TRACING_DATA, tracing_data),
 	FEAT_OPP(HEADER_BUILD_ID, build_id),
 	FEAT_OPA(HEADER_HOSTNAME, hostname),
 	FEAT_OPA(HEADER_OSRELEASE, osrelease),
 	FEAT_OPA(HEADER_VERSION, version),
 	FEAT_OPA(HEADER_ARCH, arch),
 	FEAT_OPA(HEADER_NRCPUS, nrcpus),
 	FEAT_OPA(HEADER_CPUDESC, cpudesc),
-	FEAT_OPA(HEADER_CPUID, cpuid),
+	FEAT_OPA_R(HEADER_CPUID, cpuid),
 	FEAT_OPA(HEADER_TOTAL_MEM, total_mem),
 	FEAT_OPP(HEADER_EVENT_DESC, event_desc),
 	FEAT_OPA(HEADER_CMDLINE, cmdline),
 	FEAT_OPF(HEADER_CPU_TOPOLOGY, cpu_topology),
 	FEAT_OPF(HEADER_NUMA_TOPOLOGY, numa_topology),
 	FEAT_OPA(HEADER_BRANCH_STACK, branch_stack),
 	FEAT_OPA(HEADER_PMU_MAPPINGS, pmu_mappings),
 };
 
 struct header_print_data {
 	FILE *fp;
 	bool full; /* extended list of headers */
 };
 
 static int perf_file_section__fprintf_info(struct perf_file_section *section,
 					   struct perf_header *ph,
 					   int feat, int fd, void *data)
 {
 	struct header_print_data *hd = data;
 
 	if (lseek(fd, section->offset, SEEK_SET) == (off_t)-1) {
 		pr_debug("Failed to lseek to %" PRIu64 " offset for feature "
 			 "%d, continuing...\n", section->offset, feat);
 		return 0;
 	}
 	if (feat >= HEADER_LAST_FEATURE) {
 		pr_warning("unknown feature %d\n", feat);
 		return 0;
 	}
 	if (!feat_ops[feat].print)
 		return 0;
 
 	if (!feat_ops[feat].full_only || hd->full)
 		feat_ops[feat].print(ph, fd, hd->fp);
 	else
 		fprintf(hd->fp, "# %s info available, use -I to display\n",
 			feat_ops[feat].name);
 
 	return 0;
 }
 
 int perf_header__fprintf_info(struct perf_session *session, FILE *fp, bool full)
 {
 	struct header_print_data hd;
 	struct perf_header *header = &session->header;
 	int fd = session->fd;
 	hd.fp = fp;
 	hd.full = full;
 
 	perf_header__process_sections(header, fd, &hd,
 				      perf_file_section__fprintf_info);
 	return 0;
+}
+
+struct header_read_data {
+	int feat;
+	char *result;
+};
+
+static int perf_file_section__read_feature(struct perf_file_section *section,
+					   struct perf_header *ph,
+					   int feat, int fd, void *data)
+{
+	struct header_read_data *hd = data;
+
+	if (feat != hd->feat)
+		return 0;
+
+	if (lseek(fd, section->offset, SEEK_SET) == (off_t)-1) {
+		pr_debug("Failed to lseek to %" PRIu64 " offset for feature "
+			 "%d, continuing...\n", section->offset, feat);
+		return 0;
+	}
+
+	if (feat >= HEADER_LAST_FEATURE) {
+		pr_warning("unknown feature %d\n", feat);
+		return 0;
+	}
+
+	if (!feat_ops[feat].read) {
+		pr_warning("read is not supported for feature %d\n", feat);
+		return 0;
+	}
+
+	hd->result = feat_ops[feat].read(ph, fd);
+	return 0;
+}
+
+char *perf_header__read_feature(struct perf_session *session, int feat)
+{
+	struct perf_header *header = &session->header;
+	struct header_read_data hd;
+	int fd = session->fd;
+
+	hd.feat = feat;
+	hd.result = NULL;
+
+	perf_header__process_sections(header, fd, &hd,
+				      perf_file_section__read_feature);
+	return hd.result;
 }
 
 static int do_write_feat(int fd, struct perf_header *h, int type,
 			 struct perf_file_section **p,
 			 struct perf_evlist *evlist)
 {
 	int err;
 	int ret = 0;
 
 	if (perf_header__has_feat(h, type)) {
 		if (!feat_ops[type].write)
 			return -1;
 
 		(*p)->offset = lseek(fd, 0, SEEK_CUR);
 
 		err = feat_ops[type].write(fd, h, evlist);
 		if (err < 0) {
 			pr_debug("failed to write feature %d\n", type);
 
 			/* undo anything written */
 			lseek(fd, (*p)->offset, SEEK_SET);
 
 			return -1;
 		}
 		(*p)->size = lseek(fd, 0, SEEK_CUR) - (*p)->offset;
 		(*p)++;
 	}
 	return ret;
 }
 
 static int perf_header__adds_write(struct perf_header *header,
 				   struct perf_evlist *evlist, int fd)
 {
 	int nr_sections;
 	struct perf_file_section *feat_sec, *p;
 	int sec_size;
 	u64 sec_start;
 	int feat;
 	int err;
 
 	nr_sections = bitmap_weight(header->adds_features, HEADER_FEAT_BITS);
 	if (!nr_sections)
 		return 0;
 
 	feat_sec = p = calloc(sizeof(*feat_sec), nr_sections);
 	if (feat_sec == NULL)
 		return -ENOMEM;
 
 	sec_size = sizeof(*feat_sec) * nr_sections;
 
 	sec_start = header->data_offset + header->data_size;
 	lseek(fd, sec_start + sec_size, SEEK_SET);
 
 	for_each_set_bit(feat, header->adds_features, HEADER_FEAT_BITS) {
 		if (do_write_feat(fd, header, feat, &p, evlist))
 			perf_header__clear_feat(header, feat);
 	}
 
 	lseek(fd, sec_start, SEEK_SET);
 	/*
 	 * may write more than needed due to dropped feature, but
 	 * this is okay, reader will skip the mising entries
 	 */
 	err = do_write(fd, feat_sec, sec_size);
 	if (err < 0)
 		pr_debug("failed to write feature section\n");
 	free(feat_sec);
 	return err;
 }
 
 int perf_header__write_pipe(int fd)
 {
 	struct perf_pipe_file_header f_header;
 	int err;
 
 	f_header = (struct perf_pipe_file_header){
 		.magic	   = PERF_MAGIC,
 		.size	   = sizeof(f_header),
 	};
 
 	err = do_write(fd, &f_header, sizeof(f_header));
 	if (err < 0) {
 		pr_debug("failed to write perf pipe header\n");
 		return err;
 	}
 
 	return 0;
 }
 
 int perf_session__write_header(struct perf_session *session,
 			       struct perf_evlist *evlist,
 			       int fd, bool at_exit)
 {
 	struct perf_file_header f_header;
 	struct perf_file_attr   f_attr;
 	struct perf_header *header = &session->header;
 	struct perf_evsel *evsel, *pair = NULL;
 	int err;
 
 	lseek(fd, sizeof(f_header), SEEK_SET);
 
 	if (session->evlist != evlist)
 		pair = perf_evlist__first(session->evlist);
 
 	list_for_each_entry(evsel, &evlist->entries, node) {
 		evsel->id_offset = lseek(fd, 0, SEEK_CUR);
 		err = do_write(fd, evsel->id, evsel->ids * sizeof(u64));
 		if (err < 0) {
 out_err_write:
 			pr_debug("failed to write perf header\n");
 			return err;
 		}
 		if (session->evlist != evlist) {
 			err = do_write(fd, pair->id, pair->ids * sizeof(u64));
 			if (err < 0)
 				goto out_err_write;
 			evsel->ids += pair->ids;
 			pair = perf_evsel__next(pair);
 		}
 	}
 
 	header->attr_offset = lseek(fd, 0, SEEK_CUR);
 
 	list_for_each_entry(evsel, &evlist->entries, node) {
 		f_attr = (struct perf_file_attr){
 			.attr = evsel->attr,
 			.ids  = {
 				.offset = evsel->id_offset,
 				.size   = evsel->ids * sizeof(u64),
 			}
 		};
 		err = do_write(fd, &f_attr, sizeof(f_attr));
 		if (err < 0) {
 			pr_debug("failed to write perf header attribute\n");
 			return err;
 		}
 	}
 
 	header->event_offset = lseek(fd, 0, SEEK_CUR);
 	header->event_size = trace_event_count * sizeof(struct perf_trace_event_type);
 	if (trace_events) {
 		err = do_write(fd, trace_events, header->event_size);
 		if (err < 0) {
 			pr_debug("failed to write perf header events\n");
 			return err;
 		}
 	}
 
 	header->data_offset = lseek(fd, 0, SEEK_CUR);
 
 	if (at_exit) {
 		err = perf_header__adds_write(header, evlist, fd);
 		if (err < 0)
 			return err;
 	}
 
 	f_header = (struct perf_file_header){
 		.magic	   = PERF_MAGIC,
 		.size	   = sizeof(f_header),
 		.attr_size = sizeof(f_attr),
 		.attrs = {
 			.offset = header->attr_offset,
 			.size   = evlist->nr_entries * sizeof(f_attr),
 		},
 		.data = {
 			.offset = header->data_offset,
 			.size	= header->data_size,
 		},
 		.event_types = {
 			.offset = header->event_offset,
 			.size	= header->event_size,
 		},
 	};
 
 	memcpy(&f_header.adds_features, &header->adds_features, sizeof(header->adds_features));
 
 	lseek(fd, 0, SEEK_SET);
 	err = do_write(fd, &f_header, sizeof(f_header));
 	if (err < 0) {
 		pr_debug("failed to write perf header\n");
 		return err;
 	}
 	lseek(fd, header->data_offset + header->data_size, SEEK_SET);
 
 	header->frozen = 1;
 	return 0;
 }
 
 static int perf_header__getbuffer64(struct perf_header *header,
 				    int fd, void *buf, size_t size)
 {
 	if (readn(fd, buf, size) <= 0)
 		return -1;
 
 	if (header->needs_swap)
 		mem_bswap_64(buf, size);
 
 	return 0;
 }
 
 int perf_header__process_sections(struct perf_header *header, int fd,
 				  void *data,
 				  int (*process)(struct perf_file_section *section,
 						 struct perf_header *ph,
 						 int feat, int fd, void *data))
 {
 	struct perf_file_section *feat_sec, *sec;
 	int nr_sections;
 	int sec_size;
 	int feat;
 	int err;
 
 	nr_sections = bitmap_weight(header->adds_features, HEADER_FEAT_BITS);
 	if (!nr_sections)
 		return 0;
 
 	feat_sec = sec = calloc(sizeof(*feat_sec), nr_sections);
 	if (!feat_sec)
 		return -1;
 
 	sec_size = sizeof(*feat_sec) * nr_sections;
 
 	lseek(fd, header->data_offset + header->data_size, SEEK_SET);
 
 	err = perf_header__getbuffer64(header, fd, feat_sec, sec_size);
 	if (err < 0)
 		goto out_free;
 
 	for_each_set_bit(feat, header->adds_features, HEADER_LAST_FEATURE) {
 		err = process(sec++, header, feat, fd, data);
 		if (err < 0)
 			goto out_free;
 	}
 	err = 0;
 out_free:
 	free(feat_sec);
 	return err;
 }
 
 static const int attr_file_abi_sizes[] = {
 	[0] = PERF_ATTR_SIZE_VER0,
 	[1] = PERF_ATTR_SIZE_VER1,
 	[2] = PERF_ATTR_SIZE_VER2,
 	[3] = PERF_ATTR_SIZE_VER3,
 	0,
 };
 
 /*
  * In the legacy file format, the magic number is not used to encode endianness.
  * hdr_sz was used to encode endianness. But given that hdr_sz can vary based
  * on ABI revisions, we need to try all combinations for all endianness to
  * detect the endianness.
  */
 static int try_all_file_abis(uint64_t hdr_sz, struct perf_header *ph)
 {
 	uint64_t ref_size, attr_size;
 	int i;
 
 	for (i = 0 ; attr_file_abi_sizes[i]; i++) {
 		ref_size = attr_file_abi_sizes[i]
 			 + sizeof(struct perf_file_section);
 		if (hdr_sz != ref_size) {
 			attr_size = bswap_64(hdr_sz);
 			if (attr_size != ref_size)
 				continue;
 
 			ph->needs_swap = true;
 		}
 		pr_debug("ABI%d perf.data file detected, need_swap=%d\n",
 			 i,
 			 ph->needs_swap);
 		return 0;
 	}
 	/* could not determine endianness */
 	return -1;
 }
 
 #define PERF_PIPE_HDR_VER0	16
 
 static const size_t attr_pipe_abi_sizes[] = {
 	[0] = PERF_PIPE_HDR_VER0,
 	0,
 };
 
 /*
  * In the legacy pipe format, there is an implicit assumption that endiannesss
  * between host recording the samples, and host parsing the samples is the
  * same. This is not always the case given that the pipe output may always be
  * redirected into a file and analyzed on a different machine with possibly a
  * different endianness and perf_event ABI revsions in the perf tool itself.
  */
 static int try_all_pipe_abis(uint64_t hdr_sz, struct perf_header *ph)
 {
 	u64 attr_size;
 	int i;
 
 	for (i = 0 ; attr_pipe_abi_sizes[i]; i++) {
 		if (hdr_sz != attr_pipe_abi_sizes[i]) {
 			attr_size = bswap_64(hdr_sz);
 			if (attr_size != hdr_sz)
 				continue;
 
 			ph->needs_swap = true;
 		}
 		pr_debug("Pipe ABI%d perf.data file detected\n", i);
 		return 0;
 	}
 	return -1;
 }
 
 static int check_magic_endian(u64 magic, uint64_t hdr_sz,
 			      bool is_pipe, struct perf_header *ph)
 {
 	int ret;
 
 	/* check for legacy format */
 	ret = memcmp(&magic, __perf_magic1, sizeof(magic));
 	if (ret == 0) {
 		pr_debug("legacy perf.data format\n");
 		if (is_pipe)
 			return try_all_pipe_abis(hdr_sz, ph);
 
 		return try_all_file_abis(hdr_sz, ph);
 	}
 	/*
 	 * the new magic number serves two purposes:
 	 * - unique number to identify actual perf.data files
 	 * - encode endianness of file
 	 */
 
 	/* check magic number with one endianness */
 	if (magic == __perf_magic2)
 		return 0;
 
 	/* check magic number with opposite endianness */
 	if (magic != __perf_magic2_sw)
 		return -1;
 
 	ph->needs_swap = true;
 
 	return 0;
 }
 
 int perf_file_header__read(struct perf_file_header *header,
 			   struct perf_header *ph, int fd)
 {
 	int ret;
 
 	lseek(fd, 0, SEEK_SET);
 
 	ret = readn(fd, header, sizeof(*header));
 	if (ret <= 0)
 		return -1;
 
 	if (check_magic_endian(header->magic,
 			       header->attr_size, false, ph) < 0) {
 		pr_debug("magic/endian check failed\n");
 		return -1;
 	}
 
 	if (ph->needs_swap) {
 		mem_bswap_64(header, offsetof(struct perf_file_header,
 			     adds_features));
 	}
 
 	if (header->size != sizeof(*header)) {
 		/* Support the previous format */
 		if (header->size == offsetof(typeof(*header), adds_features))
 			bitmap_zero(header->adds_features, HEADER_FEAT_BITS);
 		else
 			return -1;
 	} else if (ph->needs_swap) {
 		/*
 		 * feature bitmap is declared as an array of unsigned longs --
 		 * not good since its size can differ between the host that
 		 * generated the data file and the host analyzing the file.
 		 *
 		 * We need to handle endianness, but we don't know the size of
 		 * the unsigned long where the file was generated. Take a best
 		 * guess at determining it: try 64-bit swap first (ie., file
 		 * created on a 64-bit host), and check if the hostname feature
 		 * bit is set (this feature bit is forced on as of fbe96f2).
 		 * If the bit is not, undo the 64-bit swap and try a 32-bit
 		 * swap. If the hostname bit is still not set (e.g., older data
 		 * file), punt and fallback to the original behavior --
 		 * clearing all feature bits and setting buildid.
 		 */
 		mem_bswap_64(&header->adds_features,
 			     BITS_TO_U64(HEADER_FEAT_BITS));
 
 		if (!test_bit(HEADER_HOSTNAME, header->adds_features)) {
 			/* unswap as u64 */
 			mem_bswap_64(&header->adds_features,
 				     BITS_TO_U64(HEADER_FEAT_BITS));
 
 			/* unswap as u32 */
 			mem_bswap_32(&header->adds_features,
 				     BITS_TO_U32(HEADER_FEAT_BITS));
 		}
 
 		if (!test_bit(HEADER_HOSTNAME, header->adds_features)) {
 			bitmap_zero(header->adds_features, HEADER_FEAT_BITS);
 			set_bit(HEADER_BUILD_ID, header->adds_features);
 		}
 	}
 
 	memcpy(&ph->adds_features, &header->adds_features,
 	       sizeof(ph->adds_features));
 
 	ph->event_offset = header->event_types.offset;
 	ph->event_size   = header->event_types.size;
 	ph->data_offset  = header->data.offset;
 	ph->data_size	 = header->data.size;
 	return 0;
 }
 
 static int perf_file_section__process(struct perf_file_section *section,
 				      struct perf_header *ph,
 				      int feat, int fd, void *data)
 {
 	if (lseek(fd, section->offset, SEEK_SET) == (off_t)-1) {
 		pr_debug("Failed to lseek to %" PRIu64 " offset for feature "
 			 "%d, continuing...\n", section->offset, feat);
 		return 0;
 	}
 
 	if (feat >= HEADER_LAST_FEATURE) {
 		pr_debug("unknown feature %d, continuing...\n", feat);
 		return 0;
 	}
 
 	if (!feat_ops[feat].process)
 		return 0;
 
 	return feat_ops[feat].process(section, ph, feat, fd, data);
 }
 
 static int perf_file_header__read_pipe(struct perf_pipe_file_header *header,
 				       struct perf_header *ph, int fd,
 				       bool repipe)
 {
 	int ret;
 
 	ret = readn(fd, header, sizeof(*header));
 	if (ret <= 0)
 		return -1;
 
 	if (check_magic_endian(header->magic, header->size, true, ph) < 0) {
 		pr_debug("endian/magic failed\n");
 		return -1;
 	}
 
 	if (ph->needs_swap)
 		header->size = bswap_64(header->size);
 
 	if (repipe && do_write(STDOUT_FILENO, header, sizeof(*header)) < 0)
 		return -1;
 
 	return 0;
 }
 
 static int perf_header__read_pipe(struct perf_session *session, int fd)
 {
 	struct perf_header *header = &session->header;
 	struct perf_pipe_file_header f_header;
 
 	if (perf_file_header__read_pipe(&f_header, header, fd,
 					session->repipe) < 0) {
 		pr_debug("incompatible file format\n");
 		return -EINVAL;
 	}
 
 	session->fd = fd;
 
 	return 0;
 }
 
 static int read_attr(int fd, struct perf_header *ph,
 		     struct perf_file_attr *f_attr)
 {
 	struct perf_event_attr *attr = &f_attr->attr;
 	size_t sz, left;
 	size_t our_sz = sizeof(f_attr->attr);
 	int ret;
 
 	memset(f_attr, 0, sizeof(*f_attr));
 
 	/* read minimal guaranteed structure */
 	ret = readn(fd, attr, PERF_ATTR_SIZE_VER0);
 	if (ret <= 0) {
 		pr_debug("cannot read %d bytes of header attr\n",
 			 PERF_ATTR_SIZE_VER0);
 		return -1;
 	}
 
 	/* on file perf_event_attr size */
 	sz = attr->size;
 
 	if (ph->needs_swap)
 		sz = bswap_32(sz);
 
 	if (sz == 0) {
 		/* assume ABI0 */
 		sz = PERF_ATTR_SIZE_VER0;
 	} else if (sz > our_sz) {
 		pr_debug("file uses a more recent and unsupported ABI"
 			 " (%zu bytes extra)\n", sz - our_sz);
 		return -1;
 	}
 	/* what we have not yet read and that we know about */
 	left = sz - PERF_ATTR_SIZE_VER0;
 	if (left) {
 		void *ptr = attr;
 		ptr += PERF_ATTR_SIZE_VER0;
 
 		ret = readn(fd, ptr, left);
 	}
 	/* read perf_file_section, ids are read in caller */
 	ret = readn(fd, &f_attr->ids, sizeof(f_attr->ids));
 
 	return ret <= 0 ? -1 : 0;
 }
 
 static int perf_evsel__prepare_tracepoint_event(struct perf_evsel *evsel,
 						struct pevent *pevent)
 {
 	struct event_format *event;
 	char bf[128];
 
 	/* already prepared */
 	if (evsel->tp_format)
 		return 0;
 
 	event = pevent_find_event(pevent, evsel->attr.config);
 	if (event == NULL)
 		return -1;
 
 	if (!evsel->name) {
 		snprintf(bf, sizeof(bf), "%s:%s", event->system, event->name);
 		evsel->name = strdup(bf);
 		if (evsel->name == NULL)
 			return -1;
 	}
 
 	evsel->tp_format = event;
 	return 0;
 }
 
 static int perf_evlist__prepare_tracepoint_events(struct perf_evlist *evlist,
 						  struct pevent *pevent)
 {
 	struct perf_evsel *pos;
 
 	list_for_each_entry(pos, &evlist->entries, node) {
 		if (pos->attr.type == PERF_TYPE_TRACEPOINT &&
 		    perf_evsel__prepare_tracepoint_event(pos, pevent))
 			return -1;
 	}
 
 	return 0;
 }
 
 int perf_session__read_header(struct perf_session *session, int fd)
 {
 	struct perf_header *header = &session->header;
 	struct perf_file_header	f_header;
 	struct perf_file_attr	f_attr;
 	u64			f_id;
 	int nr_attrs, nr_ids, i, j;
 
 	session->evlist = perf_evlist__new(NULL, NULL);
 	if (session->evlist == NULL)
 		return -ENOMEM;
 
 	if (session->fd_pipe)
 		return perf_header__read_pipe(session, fd);
 
 	if (perf_file_header__read(&f_header, header, fd) < 0)
 		return -EINVAL;
 
 	nr_attrs = f_header.attrs.size / f_header.attr_size;
 	lseek(fd, f_header.attrs.offset, SEEK_SET);
 
 	for (i = 0; i < nr_attrs; i++) {
 		struct perf_evsel *evsel;
 		off_t tmp;
 
 		if (read_attr(fd, header, &f_attr) < 0)
 			goto out_errno;
 
 		if (header->needs_swap)
 			perf_event__attr_swap(&f_attr.attr);
 
 		tmp = lseek(fd, 0, SEEK_CUR);
 		evsel = perf_evsel__new(&f_attr.attr, i);
 
 		if (evsel == NULL)
 			goto out_delete_evlist;
 		/*
 		 * Do it before so that if perf_evsel__alloc_id fails, this
2410 * entry gets purged too at perf_evlist__delete(). 2467 * entry gets purged too at perf_evlist__delete().
2411 */ 2468 */
2412 perf_evlist__add(session->evlist, evsel); 2469 perf_evlist__add(session->evlist, evsel);
2413 2470
2414 nr_ids = f_attr.ids.size / sizeof(u64); 2471 nr_ids = f_attr.ids.size / sizeof(u64);
2415 /* 2472 /*
2416 * We don't have the cpu and thread maps on the header, so 2473 * We don't have the cpu and thread maps on the header, so
2417 * for allocating the perf_sample_id table we fake 1 cpu and 2474 * for allocating the perf_sample_id table we fake 1 cpu and
2418 * hattr->ids threads. 2475 * hattr->ids threads.
2419 */ 2476 */
2420 if (perf_evsel__alloc_id(evsel, 1, nr_ids)) 2477 if (perf_evsel__alloc_id(evsel, 1, nr_ids))
2421 goto out_delete_evlist; 2478 goto out_delete_evlist;
2422 2479
2423 lseek(fd, f_attr.ids.offset, SEEK_SET); 2480 lseek(fd, f_attr.ids.offset, SEEK_SET);
2424 2481
2425 for (j = 0; j < nr_ids; j++) { 2482 for (j = 0; j < nr_ids; j++) {
2426 if (perf_header__getbuffer64(header, fd, &f_id, sizeof(f_id))) 2483 if (perf_header__getbuffer64(header, fd, &f_id, sizeof(f_id)))
2427 goto out_errno; 2484 goto out_errno;
2428 2485
2429 perf_evlist__id_add(session->evlist, evsel, 0, j, f_id); 2486 perf_evlist__id_add(session->evlist, evsel, 0, j, f_id);
2430 } 2487 }
2431 2488
2432 lseek(fd, tmp, SEEK_SET); 2489 lseek(fd, tmp, SEEK_SET);
2433 } 2490 }
2434 2491
2435 symbol_conf.nr_events = nr_attrs; 2492 symbol_conf.nr_events = nr_attrs;
2436 2493
2437 if (f_header.event_types.size) { 2494 if (f_header.event_types.size) {
2438 lseek(fd, f_header.event_types.offset, SEEK_SET); 2495 lseek(fd, f_header.event_types.offset, SEEK_SET);
2439 trace_events = malloc(f_header.event_types.size); 2496 trace_events = malloc(f_header.event_types.size);
2440 if (trace_events == NULL) 2497 if (trace_events == NULL)
2441 return -ENOMEM; 2498 return -ENOMEM;
2442 if (perf_header__getbuffer64(header, fd, trace_events, 2499 if (perf_header__getbuffer64(header, fd, trace_events,
2443 f_header.event_types.size)) 2500 f_header.event_types.size))
2444 goto out_errno; 2501 goto out_errno;
2445 trace_event_count = f_header.event_types.size / sizeof(struct perf_trace_event_type); 2502 trace_event_count = f_header.event_types.size / sizeof(struct perf_trace_event_type);
2446 } 2503 }
2447 2504
2448 perf_header__process_sections(header, fd, &session->pevent, 2505 perf_header__process_sections(header, fd, &session->pevent,
2449 perf_file_section__process); 2506 perf_file_section__process);
2450 2507
2451 lseek(fd, header->data_offset, SEEK_SET); 2508 lseek(fd, header->data_offset, SEEK_SET);
2452 2509
2453 if (perf_evlist__prepare_tracepoint_events(session->evlist, 2510 if (perf_evlist__prepare_tracepoint_events(session->evlist,
2454 session->pevent)) 2511 session->pevent))
2455 goto out_delete_evlist; 2512 goto out_delete_evlist;
2456 2513
2457 header->frozen = 1; 2514 header->frozen = 1;
2458 return 0; 2515 return 0;
2459 out_errno: 2516 out_errno:
2460 return -errno; 2517 return -errno;
2461 2518
2462 out_delete_evlist: 2519 out_delete_evlist:
2463 perf_evlist__delete(session->evlist); 2520 perf_evlist__delete(session->evlist);
2464 session->evlist = NULL; 2521 session->evlist = NULL;
2465 return -ENOMEM; 2522 return -ENOMEM;
2466 } 2523 }
2467 2524
2468 int perf_event__synthesize_attr(struct perf_tool *tool, 2525 int perf_event__synthesize_attr(struct perf_tool *tool,
2469 struct perf_event_attr *attr, u32 ids, u64 *id, 2526 struct perf_event_attr *attr, u32 ids, u64 *id,
2470 perf_event__handler_t process) 2527 perf_event__handler_t process)
2471 { 2528 {
2472 union perf_event *ev; 2529 union perf_event *ev;
2473 size_t size; 2530 size_t size;
2474 int err; 2531 int err;
2475 2532
2476 size = sizeof(struct perf_event_attr); 2533 size = sizeof(struct perf_event_attr);
2477 size = PERF_ALIGN(size, sizeof(u64)); 2534 size = PERF_ALIGN(size, sizeof(u64));
2478 size += sizeof(struct perf_event_header); 2535 size += sizeof(struct perf_event_header);
2479 size += ids * sizeof(u64); 2536 size += ids * sizeof(u64);
2480 2537
2481 ev = malloc(size); 2538 ev = malloc(size);
2482 2539
2483 if (ev == NULL) 2540 if (ev == NULL)
2484 return -ENOMEM; 2541 return -ENOMEM;
2485 2542
2486 ev->attr.attr = *attr; 2543 ev->attr.attr = *attr;
2487 memcpy(ev->attr.id, id, ids * sizeof(u64)); 2544 memcpy(ev->attr.id, id, ids * sizeof(u64));
2488 2545
2489 ev->attr.header.type = PERF_RECORD_HEADER_ATTR; 2546 ev->attr.header.type = PERF_RECORD_HEADER_ATTR;
2490 ev->attr.header.size = (u16)size; 2547 ev->attr.header.size = (u16)size;
2491 2548
2492 if (ev->attr.header.size == size) 2549 if (ev->attr.header.size == size)
2493 err = process(tool, ev, NULL, NULL); 2550 err = process(tool, ev, NULL, NULL);
2494 else 2551 else
2495 err = -E2BIG; 2552 err = -E2BIG;
2496 2553
2497 free(ev); 2554 free(ev);
2498 2555
2499 return err; 2556 return err;
2500 } 2557 }
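The record sized in perf_event__synthesize_attr is padded to a u64 boundary with PERF_ALIGN, which rounds its argument up to the next multiple of the alignment. A minimal standalone sketch of that rounding and of the size computation above (ALIGN_UP and attr_event_size are illustrative stand-ins, not the perf tree's own definitions):

```c
#include <stddef.h>

/* Round x up to the next multiple of a (a must be a power of two). */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* Mirrors the size computation above: the attr struct is padded to a
 * u64 boundary, then the event header and one u64 per id are added. */
static size_t attr_event_size(size_t attr_size, size_t hdr_size, size_t ids)
{
	size_t size = ALIGN_UP(attr_size, sizeof(unsigned long long));

	size += hdr_size;
	size += ids * sizeof(unsigned long long);
	return size;
}
```

Note the header.size field is u16, which is why the function re-checks the truncated value and fails with -E2BIG when too many ids are attached.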

int perf_event__synthesize_attrs(struct perf_tool *tool,
				 struct perf_session *session,
				 perf_event__handler_t process)
{
	struct perf_evsel *evsel;
	int err = 0;

	list_for_each_entry(evsel, &session->evlist->entries, node) {
		err = perf_event__synthesize_attr(tool, &evsel->attr, evsel->ids,
						  evsel->id, process);
		if (err) {
			pr_debug("failed to create perf header attribute\n");
			return err;
		}
	}

	return err;
}

int perf_event__process_attr(union perf_event *event,
			     struct perf_evlist **pevlist)
{
	u32 i, ids, n_ids;
	struct perf_evsel *evsel;
	struct perf_evlist *evlist = *pevlist;

	if (evlist == NULL) {
		*pevlist = evlist = perf_evlist__new(NULL, NULL);
		if (evlist == NULL)
			return -ENOMEM;
	}

	evsel = perf_evsel__new(&event->attr.attr, evlist->nr_entries);
	if (evsel == NULL)
		return -ENOMEM;

	perf_evlist__add(evlist, evsel);

	ids = event->header.size;
	ids -= (void *)&event->attr.id - (void *)event;
	n_ids = ids / sizeof(u64);
	/*
	 * We don't have the cpu and thread maps on the header, so
	 * for allocating the perf_sample_id table we fake 1 cpu and
	 * hattr->ids threads.
	 */
	if (perf_evsel__alloc_id(evsel, 1, n_ids))
		return -ENOMEM;

	for (i = 0; i < n_ids; i++) {
		perf_evlist__id_add(evlist, evsel, 0, i, event->attr.id[i]);
	}

	return 0;
}

int perf_event__synthesize_event_type(struct perf_tool *tool,
				      u64 event_id, char *name,
				      perf_event__handler_t process,
				      struct machine *machine)
{
	union perf_event ev;
	size_t size = 0;
	int err = 0;

	memset(&ev, 0, sizeof(ev));

	ev.event_type.event_type.event_id = event_id;
	memset(ev.event_type.event_type.name, 0, MAX_EVENT_NAME);
	strncpy(ev.event_type.event_type.name, name, MAX_EVENT_NAME - 1);

	ev.event_type.header.type = PERF_RECORD_HEADER_EVENT_TYPE;
	size = strlen(ev.event_type.event_type.name);
	size = PERF_ALIGN(size, sizeof(u64));
	ev.event_type.header.size = sizeof(ev.event_type) -
		(sizeof(ev.event_type.event_type.name) - size);

	err = process(tool, &ev, NULL, machine);

	return err;
}

int perf_event__synthesize_event_types(struct perf_tool *tool,
				       perf_event__handler_t process,
				       struct machine *machine)
{
	struct perf_trace_event_type *type;
	int i, err = 0;

	for (i = 0; i < trace_event_count; i++) {
		type = &trace_events[i];

		err = perf_event__synthesize_event_type(tool, type->event_id,
							type->name, process,
							machine);
		if (err) {
			pr_debug("failed to create perf header event type\n");
			return err;
		}
	}

	return err;
}

int perf_event__process_event_type(struct perf_tool *tool __maybe_unused,
				   union perf_event *event)
{
	if (perf_header__push_event(event->event_type.event_type.event_id,
				    event->event_type.event_type.name) < 0)
		return -ENOMEM;

	return 0;
}

int perf_event__synthesize_tracing_data(struct perf_tool *tool, int fd,
					struct perf_evlist *evlist,
					perf_event__handler_t process)
{
	union perf_event ev;
	struct tracing_data *tdata;
	ssize_t size = 0, aligned_size = 0, padding;
	int err __maybe_unused = 0;

	/*
	 * We are going to store the size of the data followed
	 * by the data contents. Since the fd descriptor is a pipe,
	 * we cannot seek back to store the size of the data once
	 * we know it. Instead we:
	 *
	 * - write the tracing data to the temp file
	 * - get/write the data size to pipe
	 * - write the tracing data from the temp file
	 *   to the pipe
	 */
	tdata = tracing_data_get(&evlist->entries, fd, true);
	if (!tdata)
		return -1;

	memset(&ev, 0, sizeof(ev));

	ev.tracing_data.header.type = PERF_RECORD_HEADER_TRACING_DATA;
	size = tdata->size;
	aligned_size = PERF_ALIGN(size, sizeof(u64));
	padding = aligned_size - size;
	ev.tracing_data.header.size = sizeof(ev.tracing_data);
	ev.tracing_data.size = aligned_size;

	process(tool, &ev, NULL, NULL);

	/*
	 * The put function will copy all the tracing data
	 * stored in temp file to the pipe.
	 */
	tracing_data_put(tdata);

	write_padded(fd, NULL, 0, padding);

	return aligned_size;
}
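The comment block in perf_event__synthesize_tracing_data describes a general pattern: a pipe cannot be rewound, so data whose size is only known after producing it must be staged in a temp file first, then sent as a fixed-size length header followed by the payload. A simplified standalone sketch of that pattern (send_sized is a hypothetical helper for illustration, not part of the perf tree):

```c
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

/* Stage the payload in a temp file, then emit "u64 size, then data" on
 * out_fd. Because a pipe cannot seek back, the size must be known (via
 * the temp file) before the first byte reaches the pipe. */
static ssize_t send_sized(int out_fd, const void *data, uint64_t len)
{
	FILE *tmp = tmpfile();
	char buf[256];
	size_t n;

	if (!tmp)
		return -1;
	fwrite(data, 1, (size_t)len, tmp);	/* stage; size is now fixed */
	rewind(tmp);

	write(out_fd, &len, sizeof(len));	/* fixed-size header first */
	while ((n = fread(buf, 1, sizeof(buf), tmp)) > 0)
		write(out_fd, buf, n);		/* then stream the payload */
	fclose(tmp);
	return (ssize_t)(sizeof(len) + len);
}
```

In the real function the staging and streaming are done by tracing_data_get()/tracing_data_put(), and the size sent ahead is the u64-aligned size, with write_padded() supplying the trailing padding bytes.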

int perf_event__process_tracing_data(union perf_event *event,
				     struct perf_session *session)
{
	ssize_t size_read, padding, size = event->tracing_data.size;
	off_t offset = lseek(session->fd, 0, SEEK_CUR);
	char buf[BUFSIZ];

	/* setup for reading amidst mmap */
	lseek(session->fd, offset + sizeof(struct tracing_data_event),
	      SEEK_SET);

	size_read = trace_report(session->fd, &session->pevent,
				 session->repipe);
	padding = PERF_ALIGN(size_read, sizeof(u64)) - size_read;

	if (read(session->fd, buf, padding) < 0)
		die("reading input file");
	if (session->repipe) {
		int retw = write(STDOUT_FILENO, buf, padding);
		if (retw <= 0 || retw != padding)
			die("repiping tracing data padding");
	}

	if (size_read + padding != size)
		die("tracing data size mismatch");

	perf_evlist__prepare_tracepoint_events(session->evlist,
					       session->pevent);

	return size_read + padding;
}

int perf_event__synthesize_build_id(struct perf_tool *tool,
				    struct dso *pos, u16 misc,
				    perf_event__handler_t process,
				    struct machine *machine)
{
	union perf_event ev;
	size_t len;
	int err = 0;

	if (!pos->hit)
		return err;

	memset(&ev, 0, sizeof(ev));

	len = pos->long_name_len + 1;
	len = PERF_ALIGN(len, NAME_ALIGN);
	memcpy(&ev.build_id.build_id, pos->build_id, sizeof(pos->build_id));
	ev.build_id.header.type = PERF_RECORD_HEADER_BUILD_ID;
	ev.build_id.header.misc = misc;
	ev.build_id.pid = machine->pid;
	ev.build_id.header.size = sizeof(ev.build_id) + len;
	memcpy(&ev.build_id.filename, pos->long_name, pos->long_name_len);

	err = process(tool, &ev, NULL, machine);

	return err;
}

int perf_event__process_build_id(struct perf_tool *tool __maybe_unused,
				 union perf_event *event,
				 struct perf_session *session)
{
	__event_process_build_id(&event->build_id,
				 event->build_id.filename,
				 session);
	return 0;
}

void disable_buildid_cache(void)
{
	no_buildid_cache = true;
}

tools/perf/util/header.h
#ifndef __PERF_HEADER_H
#define __PERF_HEADER_H

#include "../../../include/linux/perf_event.h"
#include <sys/types.h>
#include <stdbool.h>
#include "types.h"
#include "event.h"

#include <linux/bitmap.h>

enum {
	HEADER_RESERVED		= 0,	/* always cleared */
	HEADER_FIRST_FEATURE	= 1,
	HEADER_TRACING_DATA	= 1,
	HEADER_BUILD_ID,

	HEADER_HOSTNAME,
	HEADER_OSRELEASE,
	HEADER_VERSION,
	HEADER_ARCH,
	HEADER_NRCPUS,
	HEADER_CPUDESC,
	HEADER_CPUID,
	HEADER_TOTAL_MEM,
	HEADER_CMDLINE,
	HEADER_EVENT_DESC,
	HEADER_CPU_TOPOLOGY,
	HEADER_NUMA_TOPOLOGY,
	HEADER_BRANCH_STACK,
	HEADER_PMU_MAPPINGS,
	HEADER_LAST_FEATURE,
	HEADER_FEAT_BITS	= 256,
};
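Each HEADER_* value above indexes one bit in a 256-bit adds_features bitmap (the DECLARE_BITMAP fields below), which is how a perf.data file records which optional feature sections it carries. A standalone sketch of the set/test operations on such a bitmap, using a plain array of unsigned longs in place of the kernel's bitmap helpers (set_feat/has_feat here are illustrative, corresponding to the perf_header__set_feat/has_feat declarations below):

```c
#include <limits.h>
#include <stdbool.h>

#define FEAT_BITS     256
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/* One bit per HEADER_* feature, as in perf_header::adds_features. */
static unsigned long adds_features[FEAT_BITS / BITS_PER_LONG];

static void set_feat(int feat)
{
	adds_features[feat / BITS_PER_LONG] |= 1UL << (feat % BITS_PER_LONG);
}

static bool has_feat(int feat)
{
	return adds_features[feat / BITS_PER_LONG] & (1UL << (feat % BITS_PER_LONG));
}
```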

struct perf_file_section {
	u64 offset;
	u64 size;
};

struct perf_file_header {
	u64				magic;
	u64				size;
	u64				attr_size;
	struct perf_file_section	attrs;
	struct perf_file_section	data;
	struct perf_file_section	event_types;
	DECLARE_BITMAP(adds_features, HEADER_FEAT_BITS);
};

struct perf_pipe_file_header {
	u64				magic;
	u64				size;
};

struct perf_header;

int perf_file_header__read(struct perf_file_header *header,
			   struct perf_header *ph, int fd);

struct perf_header {
	int			frozen;
	bool			needs_swap;
	s64			attr_offset;
	u64			data_offset;
	u64			data_size;
	u64			event_offset;
	u64			event_size;
	DECLARE_BITMAP(adds_features, HEADER_FEAT_BITS);
};

struct perf_evlist;
struct perf_session;

int perf_session__read_header(struct perf_session *session, int fd);
int perf_session__write_header(struct perf_session *session,
			       struct perf_evlist *evlist,
			       int fd, bool at_exit);
int perf_header__write_pipe(int fd);

int perf_header__push_event(u64 id, const char *name);
char *perf_header__find_event(u64 id);

void perf_header__set_feat(struct perf_header *header, int feat);
void perf_header__clear_feat(struct perf_header *header, int feat);
bool perf_header__has_feat(const struct perf_header *header, int feat);

int perf_header__set_cmdline(int argc, const char **argv);

int perf_header__process_sections(struct perf_header *header, int fd,
				  void *data,
				  int (*process)(struct perf_file_section *section,
						 struct perf_header *ph,
						 int feat, int fd, void *data));

int perf_header__fprintf_info(struct perf_session *s, FILE *fp, bool full);
+char *perf_header__read_feature(struct perf_session *session, int feat);

int build_id_cache__add_s(const char *sbuild_id, const char *debugdir,
			  const char *name, bool is_kallsyms, bool is_vdso);
int build_id_cache__remove_s(const char *sbuild_id, const char *debugdir);

int perf_event__synthesize_attr(struct perf_tool *tool,
				struct perf_event_attr *attr, u32 ids, u64 *id,
				perf_event__handler_t process);
int perf_event__synthesize_attrs(struct perf_tool *tool,
				 struct perf_session *session,
				 perf_event__handler_t process);
int perf_event__process_attr(union perf_event *event, struct perf_evlist **pevlist);

int perf_event__synthesize_event_type(struct perf_tool *tool,
				      u64 event_id, char *name,
				      perf_event__handler_t process,
				      struct machine *machine);
int perf_event__synthesize_event_types(struct perf_tool *tool,
				       perf_event__handler_t process,
				       struct machine *machine);
int perf_event__process_event_type(struct perf_tool *tool,
				   union perf_event *event);

int perf_event__synthesize_tracing_data(struct perf_tool *tool,
					int fd, struct perf_evlist *evlist,
					perf_event__handler_t process);
int perf_event__process_tracing_data(union perf_event *event,
				     struct perf_session *session);

int perf_event__synthesize_build_id(struct perf_tool *tool,
				    struct dso *pos, u16 misc,
				    perf_event__handler_t process,
				    struct machine *machine);
int perf_event__process_build_id(struct perf_tool *tool,
				 union perf_event *event,
				 struct perf_session *session);

/*
 * arch specific callback
 */
int get_cpuid(char *buffer, size_t sz);

#endif /* __PERF_HEADER_H */

tools/perf/util/thread.h
#ifndef __PERF_THREAD_H
#define __PERF_THREAD_H

#include <linux/rbtree.h>
#include <unistd.h>
#include "symbol.h"

struct thread {
	union {
		struct rb_node	 rb_node;
		struct list_head node;
	};
	struct map_groups	mg;
	pid_t			pid;
	char			shortname[3];
	bool			comm_set;
	char			*comm;
	int			comm_len;
+
+	void			*priv;
};

struct machine;

void thread__delete(struct thread *self);

int thread__set_comm(struct thread *self, const char *comm);
int thread__comm_len(struct thread *self);
void thread__insert_map(struct thread *self, struct map *map);
int thread__fork(struct thread *self, struct thread *parent);

static inline struct map *thread__find_map(struct thread *self,
					   enum map_type type, u64 addr)
{
	return self ? map_groups__find(&self->mg, type, addr) : NULL;
}

void thread__find_addr_map(struct thread *thread, struct machine *machine,
			   u8 cpumode, enum map_type type, u64 addr,
			   struct addr_location *al);

void thread__find_addr_location(struct thread *thread, struct machine *machine,
				u8 cpumode, enum map_type type, u64 addr,
				struct addr_location *al,
				symbol_filter_t filter);
#endif /* __PERF_THREAD_H */

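The priv pointer added to struct thread lets a tool hang its own per-thread state off each tracked thread; perf kvm stat uses it to keep per-VCPU exit statistics. A minimal standalone sketch of the lazy-attach pattern, with a simplified struct thread and a hypothetical vcpu_event_record (both stand-ins for illustration, not the structures from builtin-kvm.c):

```c
#include <stdlib.h>

/* Simplified stand-in for struct thread, keeping only what the
 * pattern needs: the tool-private pointer added by this commit. */
struct thread {
	int pid;
	void *priv;
};

/* Hypothetical per-VCPU state a tool might attach. */
struct vcpu_event_record {
	unsigned long long nr_exits;
};

/* Lazily allocate and attach per-thread state on first use. */
static struct vcpu_event_record *per_vcpu_record(struct thread *thread)
{
	if (!thread->priv)
		thread->priv = calloc(1, sizeof(struct vcpu_event_record));
	return thread->priv;
}
```

Because priv is a bare void pointer, whichever tool attaches state is also responsible for freeing it before the thread is deleted.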