
Commit 6b69b66

Thomas Richter authored and Kernel Patches Daemon committed
perf/s390: Regression: Move uid filtering to BPF filters
V1 --> V2: Added Jiri Olsa's suggestion and introduced member
bpf_perf_event_opts::no_ioctl_enable.

On linux-next, commit b4c658d ("perf target: Remove uid from target")
introduces a regression on s390. In fact the regression exists on all
platforms when the event supports auxiliary data gathering.

The command

 # ./perf record -u 0 -aB --synth=no -- ./perf test -w thloop
 [ perf record: Woken up 1 times to write data ]
 [ perf record: Captured and wrote 0.011 MB perf.data ]
 # ./perf report --stats | grep SAMPLE
 #

does not generate samples in the perf.data file. On x86 the command

 # sudo perf record -e intel_pt// -u 0 ls

is broken too.

Looking at the sequence of calls in 'perf record' reveals this behavior:

1. The event 'cycles' is created and enabled:

   record__open()
   +-> evlist__apply_filters()
       +-> perf_bpf_filter__prepare()
           +-> bpf_program.attach_perf_event()
               +-> bpf_program.attach_perf_event_opts()
                   +-> __GI___ioctl(..., PERF_EVENT_IOC_ENABLE, ...)

   The event 'cycles' is enabled and active now. However, the event's
   ring buffer to store the samples generated by hardware is not
   allocated yet. That happens only after enabling the event:

2. The event's fd is mmap()ed to create the ring buffer:

   record__open()
   +-> record__mmap()
       +-> record__mmap_evlist()
           +-> evlist__mmap_ex()
               +-> perf_evlist__mmap_ops()
                   +-> mmap_per_cpu()
                       +-> mmap_per_evsel()
                           +-> mmap__mmap()
                               +-> perf_mmap__mmap()
                                   +-> mmap()

   This allocates the ring buffer for the event 'cycles'. With mmap()
   the kernel creates the ring buffer:

   perf_mmap():  Kernel function to create the event's ring buffer
    |            to save the sampled data.
    |
    +-> ring_buffer_attach():  Allocates memory for the ring buffer.
     |     The PMU has an auxiliary data setup function, so the
     |     has_aux(event) condition is true and the PMU's stop()
     |     callback is invoked to stop sampling. It is not restarted:
     |
     |         if (has_aux(event))
     |                 perf_event_stop(event, 0);
     |
     +-> cpumsf_pmu_stop():  Hardware sampling is stopped. No samples
           are generated and saved anymore.

3. After the event 'cycles' has been mmap()ed, the event is enabled a
   second time in:

   __cmd_record()
   +-> evlist__enable()
       +-> __evlist__enable()
           +-> evsel__enable_cpu()
               +-> perf_evsel__enable_cpu()
                   +-> perf_evsel__run_ioctl()
                       +-> perf_evsel__ioctl()
                           +-> __GI___ioctl(., PERF_EVENT_IOC_ENABLE, .)

   This second ioctl(fd, PERF_EVENT_IOC_ENABLE, 0) is just a NOP in
   this case. The first invocation in step (1.) sets event::state to
   PERF_EVENT_STATE_ACTIVE. The kernel functions

   perf_ioctl()
   +-> _perf_ioctl()
       +-> _perf_event_enable()
           +-> __perf_event_enable()

   return immediately because event::state is already set to
   PERF_EVENT_STATE_ACTIVE.

This happens on s390 because the event 'cycles' offers the possibility
to save auxiliary data. The PMU callbacks setup_aux() and free_aux()
are defined. Without both callback functions, cpumsf_pmu_stop() is not
invoked and sampling continues.

To remedy this, remove the first invocation of
ioctl(..., PERF_EVENT_IOC_ENABLE, ...) in step (1.). Create the event
in step (1.) and enable it in step (3.), after the ring buffer has been
mmap()ed.

Make the change backward compatible and introduce the new structure
member bpf_perf_event_opts::no_ioctl_enable. It defaults to false and
only bpf_program__attach_perf_event() sets it to true. This way only
perf tool invocations do not enable the sampling event.
Output after:

 # ./perf record -aB --synth=no -u 0 -- ./perf test -w thloop 2
 [ perf record: Woken up 3 times to write data ]
 [ perf record: Captured and wrote 0.876 MB perf.data ]
 # ./perf report --stats | grep SAMPLE
          SAMPLE events:      16200  (99.5%)
          SAMPLE events:      16200
 #

The software event succeeded before and after the patch:

 # ./perf record -e cpu-clock -aB --synth=no -u 0 -- \
        ./perf test -w thloop 2
 [ perf record: Woken up 7 times to write data ]
 [ perf record: Captured and wrote 2.870 MB perf.data ]
 # ./perf report --stats | grep SAMPLE
          SAMPLE events:      53506  (99.8%)
          SAMPLE events:      53506
 #

Fixes: 63f2f5e ("libbpf: add ability to attach/detach BPF program to perf event")
To: Andrii Nakryiko <[email protected]>
To: Ian Rogers <[email protected]>
To: Ilya Leoshkevich <[email protected]>
Signed-off-by: Thomas Richter <[email protected]>
Suggested-by: Jiri Olsa <[email protected]>
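The ordering the fix establishes can be illustrated from the caller's side. The sketch below is not part of the patch: it assumes a libbpf build that already contains bpf_perf_event_opts::no_ioctl_enable, a perf_event_attr prepared by the caller with attr->disabled = 1, and libbpf 1.x error conventions (NULL return, errno set). The helper name attach_then_mmap_then_enable() is made up for illustration.

/*
 * Minimal sketch of the fixed ordering: create the event disabled,
 * attach the BPF filter without enabling the event, mmap the ring
 * buffer, and only then enable sampling.
 */
#include <stdbool.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>
#include <bpf/libbpf.h>

static int attach_then_mmap_then_enable(struct bpf_program *filter_prog,
					struct perf_event_attr *attr,
					pid_t pid, int cpu)
{
	LIBBPF_OPTS(bpf_perf_event_opts, opts,
		    .no_ioctl_enable = true);	/* field added by this patch */
	size_t len = (1 + 8) * sysconf(_SC_PAGESIZE);
	struct bpf_link *link;
	void *ring;
	int pfd;

	/* Step 1: create the event; attr->disabled = 1 is assumed. */
	pfd = syscall(SYS_perf_event_open, attr, pid, cpu, -1, 0);
	if (pfd < 0)
		return -1;

	/* Attach the filter; libbpf no longer issues PERF_EVENT_IOC_ENABLE. */
	link = bpf_program__attach_perf_event_opts(filter_prog, pfd, &opts);
	if (!link)
		goto err_close;

	/* Step 2: map the ring buffer.  On PMUs with setup_aux()/free_aux()
	 * callbacks this may stop an already running event, hence the
	 * late enable below. */
	ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, pfd, 0);
	if (ring == MAP_FAILED)
		goto err_link;

	/* Step 3: enable sampling only after the ring buffer exists. */
	if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
		munmap(ring, len);
		goto err_link;
	}
	return pfd;

err_link:
	bpf_link__destroy(link);
err_close:
	close(pfd);
	return -1;
}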
1 parent b5fabf3 commit 6b69b66

2 files changed: +15, -7 lines

tools/lib/bpf/libbpf.c

Lines changed: 13 additions & 6 deletions
@@ -10911,6 +10911,7 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
 	struct bpf_link_perf *link;
 	int prog_fd, link_fd = -1, err;
 	bool force_ioctl_attach;
+	bool no_ioctl_enable;
 
 	if (!OPTS_VALID(opts, bpf_perf_event_opts))
 		return libbpf_err_ptr(-EINVAL);
@@ -10965,11 +10966,14 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
 		}
 		link->link.fd = pfd;
 	}
-	if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
-		err = -errno;
-		pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
-			prog->name, pfd, errstr(err));
-		goto err_out;
+	no_ioctl_enable = OPTS_GET(opts, no_ioctl_enable, false);
+	if (!no_ioctl_enable) {
+		if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) {
+			err = -errno;
+			pr_warn("prog '%s': failed to enable perf_event FD %d: %s\n",
+				prog->name, pfd, errstr(err));
+			goto err_out;
+		}
 	}
 
 	return &link->link;
@@ -10982,7 +10986,10 @@ struct bpf_link *bpf_program__attach_perf_event_opts(const struct bpf_program *p
 
 struct bpf_link *bpf_program__attach_perf_event(const struct bpf_program *prog, int pfd)
 {
-	return bpf_program__attach_perf_event_opts(prog, pfd, NULL);
+	DECLARE_LIBBPF_OPTS(bpf_perf_event_opts, pe_opts);
+
+	pe_opts.no_ioctl_enable = true;
+	return bpf_program__attach_perf_event_opts(prog, pfd, &pe_opts);
 }
 
 /*

tools/lib/bpf/libbpf.h

Lines changed: 2 additions & 1 deletion
@@ -499,9 +499,10 @@ struct bpf_perf_event_opts {
 	__u64 bpf_cookie;
 	/* don't use BPF link when attach BPF program */
 	bool force_ioctl_attach;
+	bool no_ioctl_enable;
 	size_t :0;
 };
-#define bpf_perf_event_opts__last_field force_ioctl_attach
+#define bpf_perf_event_opts__last_field no_ioctl_enable
 
 LIBBPF_API struct bpf_link *
 bpf_program__attach_perf_event(const struct bpf_program *prog, int pfd);