|
For memcpy, the source pages are memset to zero only when --cycles is
used. This leads to wildly different results with or without --cycles,
since all source pages are likely to be mapped to the same zero page
without explicit writes.
Before this fix:
$ export cmd="./perf stat -e LLC-loads -- ./perf bench \
mem memcpy -s 1024MB -l 100 -f default"
$ $cmd
2,935,826 LLC-loads
3.821677452 seconds time elapsed
$ $cmd --cycles
217,533,436 LLC-loads
8.616725985 seconds time elapsed
After this fix:
$ $cmd
214,459,686 LLC-loads
8.674301124 seconds time elapsed
$ $cmd --cycles
214,758,651 LLC-loads
8.644480006 seconds time elapsed
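For illustration, a minimal standalone sketch (not the perf bench code itself) of why touching the source buffer matters: without the memset, untouched anonymous pages can all be backed by the kernel's shared zero page, so the copy keeps re-reading the same cache lines. The buffer size and iteration count below are arbitrary.
#include <stdlib.h>
#include <string.h>

#define LEN (64UL * 1024 * 1024)	/* arbitrary size for the sketch */

int main(void)
{
	void *src = malloc(LEN);
	void *dst = malloc(LEN);

	if (!src || !dst)
		return 1;

	/* Without this write, every source page may map to the shared zero
	 * page and the benchmark under-counts LLC loads. */
	memset(src, 0, LEN);

	for (int i = 0; i < 100; i++)
		memcpy(dst, src, LEN);

	free(src);
	free(dst);
	return 0;
}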
Fixes: 47b5757bac03c338 ("perf bench mem: Move boilerplate memory allocation to the infrastructure")
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel@axis.com
Link: http://lore.kernel.org/lkml/20200810133404.30829-1-vincent.whitchurch@axis.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Commit fbd705a0c618 ("sched: Introduce the 'trace_sched_waking'
tracepoint") added sched_waking tracepoint which should be preferred
over sched_wakeup when analyzing scheduling delays.
Update 'perf sched record' to collect sched_waking events if it exists
and fallback to sched_wakeup if it does not. Similarly, update timehist
command to skip sched_wakeup events if the session includes sched_waking
(ie., sched_waking is preferred over sched_wakeup).
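A rough sketch of the fallback idea, assuming a helper that merely checks whether the tracepoint's format file exists in tracefs; the path and helper names are illustrative, not the 'perf sched' implementation, and tracefs may also be mounted under /sys/kernel/debug/tracing.
#include <stdio.h>
#include <unistd.h>

static int tracepoint_exists(const char *sys, const char *name)
{
	char path[256];

	snprintf(path, sizeof(path),
		 "/sys/kernel/tracing/events/%s/%s/format", sys, name);
	return access(path, R_OK) == 0;
}

int main(void)
{
	/* Prefer sched_waking, fall back to sched_wakeup on older kernels. */
	const char *event = tracepoint_exists("sched", "sched_waking") ?
			    "sched:sched_waking" : "sched:sched_wakeup";

	printf("recording %s\n", event);
	return 0;
}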
Signed-off-by: David Ahern <dsahern@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: http://lore.kernel.org/lkml/20200807164844.44870-1-dsahern@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
There are a couple of spelling mistakes in the text. Fix these.
Signed-off-by: Colin King <colin.king@canonical.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: kernel-janitors@vger.kernel.org
Link: http://lore.kernel.org/lkml/20200812064647.200132-1-colin.king@canonical.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Standard benchmark names let users know the specifics of a test. For
example, the "2x1-bw-process" name tells that two processes with one
thread each are run and the RAM bandwidth is measured.
Several benchmark names do not correspond to their actual running
configuration. Fix that and also some whitespace and comment
inconsistencies.
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/6b6f2084f132ee8e9203dc7c32f9deb209b87a68.1597004831.git.agordeev@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/d949f5f48e17fc816f3beecf8479f1b2480345e4.1597004831.git.agordeev@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
That helps us not to lose new protocol families when they are
introduced, replacing that hardcoded, dated family->string table.
To recap what this allows us to do:
# perf trace -e syscalls:sys_enter_socket/max-stack=10/ --filter=family==INET --max-events=1
0.000 fetchmail/41097 syscalls:sys_enter_socket(family: INET, type: DGRAM|CLOEXEC|NONBLOCK, protocol: IP)
__GI___socket (inlined)
reopen (/usr/lib64/libresolv-2.31.so)
send_dg (/usr/lib64/libresolv-2.31.so)
__res_context_send (/usr/lib64/libresolv-2.31.so)
__GI___res_context_query (inlined)
__GI___res_context_search (inlined)
_nss_dns_gethostbyname4_r (/usr/lib64/libnss_dns-2.31.so)
gaih_inet.constprop.0 (/usr/lib64/libc-2.31.so)
__GI_getaddrinfo (inlined)
[0x15cb2] (/usr/bin/fetchmail)
#
More work is still needed to allow for the more natural strace-like
syscall name usage instead of the trace event name:
# perf trace -e socket/max-stack=10,family==INET/ --max-events=1
I.e. to allow for modifiers to follow the syscall name and for logical
expressions to be accepted as filters to use with that syscall, be it as
trace event filters or BPF based ones.
Using -v we can see how the trace event filter is built:
# perf trace -v -e syscalls:sys_enter_socket/call-graph=dwarf/ --filter=family==INET --max-events=2
<SNIP>
New filter for syscalls:sys_enter_socket: (family==0x2) && (common_pid != 41384 && common_pid != 2836)
<SNIP>
$ tools/perf/trace/beauty/socket.sh | grep -w 2
[2] = "INET",
$
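For illustration, a standalone sketch (not the perf trace code) of the substitution behind that filter: look the name up in a generated table like the one socket.sh emits and substitute the numeric value. Only a few families are listed here.
#include <stdio.h>
#include <string.h>

static const char *socket_families[] = {
	[0] = "UNSPEC", [1] = "LOCAL", [2] = "INET", [10] = "INET6",
};

static int socket_family__value(const char *name)
{
	for (size_t i = 0; i < sizeof(socket_families) / sizeof(socket_families[0]); i++)
		if (socket_families[i] && !strcmp(socket_families[i], name))
			return i;
	return -1;	/* unknown family name */
}

int main(void)
{
	/* "family==INET" in the filter becomes "family==0x2". */
	printf("(family==%#x)\n", socket_family__value("INET"));
	return 0;
}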
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
To use with 'perf trace', to convert the protocol families to strings,
e.g:
$ tools/perf/trace/beauty/socket.sh
static const char *socket_families[] = {
[0] = "UNSPEC",
[1] = "LOCAL",
[2] = "INET",
[3] = "AX25",
[4] = "IPX",
[5] = "APPLETALK",
[6] = "NETROM",
[7] = "BRIDGE",
[8] = "ATMPVC",
[9] = "X25",
[10] = "INET6",
[11] = "ROSE",
[12] = "DECnet",
[13] = "NETBEUI",
[14] = "SECURITY",
[15] = "KEY",
[16] = "NETLINK",
[17] = "PACKET",
[18] = "ASH",
[19] = "ECONET",
[20] = "ATMSVC",
[21] = "RDS",
[22] = "SNA",
[23] = "IRDA",
[24] = "PPPOX",
[25] = "WANPIPE",
[26] = "LLC",
[27] = "IB",
[28] = "MPLS",
[29] = "CAN",
[30] = "TIPC",
[31] = "BLUETOOTH",
[32] = "IUCV",
[33] = "RXRPC",
[34] = "ISDN",
[35] = "PHONET",
[36] = "IEEE802154",
[37] = "CAIF",
[38] = "ALG",
[39] = "NFC",
[40] = "VSOCK",
[41] = "KCM",
[42] = "QIPCRTR",
[43] = "SMC",
[44] = "XDP",
};
$
This uses a copy of include/linux/socket.h that is kept in a directory
used just for these table generation scripts and for checking whether
the kernel file gained something new that should be picked up by these
tables.
This allows us to:
- Avoid accessing files outside tools/, in the kernel sources, that may
be changed in unexpected ways and thus break these scripts.
- Notice when those files change and check that the changes don't
break these scripts, updating them to automatically pick up the new
definitions, a new socket family, for instance.
- Not add them to tools/include/, where they might end up being used
while building the tools, requiring dragging yet more stuff from
the kernel or plain breaking the build in some of the myriad
environments where perf may be built.
This will replace the previous static array in tools/perf/ that was
dated and was already missing the AF_KCM, AF_QIPCRTR, AF_SMC and AF_XDP
families.
The next cset will wire this up to the perf build process.
At some point this must be made into a library to be used in places such
as libtraceevent, bpftrace, etc.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tools updates from Arnaldo Carvalho de Melo:
"New features:
- Introduce controlling how 'perf stat' and 'perf record' works via a
control file descriptor, allowing starting with events configured
but disabled until commands are received via the control file
descriptor. This allows, for instance, tools such as Intel VTune
to make further use of perf as its Linux platform driver.
- Improve 'perf record' to register in a perf.data file header the
clockid used to help later correlate things like syslog files and
perf events recorded.
- Add basic syscall and find_next_bit benchmarks to 'perf bench'.
- Allow using computed metrics in calculating other metrics. For
instance:
{
.metric_expr = "l2_rqsts.demand_data_rd_hit + l2_rqsts.pf_hit + l2_rqsts.rfo_hit",
.metric_name = "DCache_L2_All_Hits",
},
{
.metric_expr = "max(l2_rqsts.all_demand_data_rd - l2_rqsts.demand_data_rd_hit, 0) + l2_rqsts.pf_miss + l2_rqsts.rfo_miss",
.metric_name = "DCache_L2_All_Miss",
},
{
.metric_expr = "dcache_l2_all_hits + dcache_l2_all_miss",
.metric_name = "DCache_L2_All",
}
- Add support for 'd_ratio', '>' and '<' operators to the expression
resolver used in calculating metrics in 'perf stat'.
Support for new kernel features:
- Support TEXT_POKE and KSYMBOL_TYPE_OOL perf metadata events to cope
with things like ftrace, trampolines, i.e. changes in the kernel
text that gets in the way of properly decoding Intel PT hardware
traces, for instance.
Intel PT:
- Add various knobs to reduce the volume of Intel PT traces by
reducing the level of details such as decoding just some types of
packets (e.g., FUP/TIP, PSB+), also filtering by time range.
- Add new itrace options (log flags to the 'd' option, error flags to
the 'e' one, etc), controlling how Intel PT is transformed into
perf events, document some missing options (e.g., how to synthesize
callchains).
BPF:
- Properly report BPF errors when parsing events.
- Do not setup side-band events if LIBBPF is not linked, fixing a
segfault.
Libraries:
- Improvements to the libtraceevent plugin mechanism.
- Improve libtraceevent support for KVM trace events SVM exit reasons.
- Add libtraceevent plugins for decoding syscalls/sys_enter_futex
and for tlb_flush.
- Ensure sample_period is set for libpfm4 events in 'perf test'.
- Fixup libperf namespacing, to make sure what is in libperf has the
perf_ namespace while what is now only in tools/perf/ doesn't use
that prefix.
Arch specific:
- Improve the testing of vendor events and metrics in 'perf test'.
- Allow no ARM CoreSight hardware tracer sink to be specified on
command line.
- Fix arm_spe_x recording when mixed with other perf events.
- Add s390 idle functions 'psw_idle' and 'psw_idle_exit' to list of
idle symbols.
- List kernel supplied event aliases for arm64 in 'perf list'.
- Add support for extended register capability in PowerPC 9 and 10.
- Added nest IMC power9 metric events.
Miscellaneous:
- No need to setup sample_regs_intr/sample_regs_user for dummy
events.
- Update various copies of kernel headers, some causing perf to
handle new syscalls, MSRs, etc.
- Improve usage of flex and yacc, enabling warnings and addressing
the fallout.
- Add missing '--output' option to 'perf kmem' so that it can pass it
along to 'perf record'.
- 'perf probe' fixes related to adding multiple probes on the same
address for the same event.
- Make 'perf probe' warn if the target function is a GNU indirect
function.
- Remove //anon mmap events from 'perf inject jit' to fix supporting
both using ELF files for generated functions and the perf-PID.map
approaches"
* tag 'perf-tools-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (144 commits)
perf record: Skip side-band event setup if HAVE_LIBBPF_SUPPORT is not set
perf tools powerpc: Add support for extended regs in power10
perf tools powerpc: Add support for extended register capability
tools headers UAPI: Sync drm/i915_drm.h with the kernel sources
tools arch x86: Sync asm/cpufeatures.h with the kernel sources
tools arch x86: Sync the msr-index.h copy with the kernel sources
tools headers UAPI: update linux/in.h copy
tools headers API: Update close_range affected files
perf script: Add 'tod' field to display time of day
perf script: Change the 'enum perf_output_field' enumerators to be 64 bits
perf data: Add support to store time of day in CTF data conversion
perf tools: Move clockid_res_ns under clock struct
perf header: Store clock references for -k/--clockid option
perf tools: Add clockid_name function
perf clockid: Move parse_clockid() to new clockid object
tools lib traceevent: Handle possible strdup() error in tep_add_plugin_path() API
libtraceevent: Fixed description of tep_add_plugin_path() API
libtraceevent: Fixed type in PRINT_FMT_STING
libtraceevent: Fixed broken indentation in parse_ip4_print_args()
libtraceevent: Improve error handling of tep_plugin_add_option() API
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc updates from Michael Ellerman:
- Add support for (optionally) using queued spinlocks & rwlocks.
- Support for a new faster system call ABI using the scv instruction on
Power9 or later.
- Drop support for the PROT_SAO mmap/mprotect flag as it will be
unsupported on Power10 and future processors, leaving us with no way
to implement the functionality it requests. This risks breaking
userspace, though we believe it is unused in practice.
- A bug fix for, and then the removal of, our custom stack expansion
checking. We now allow stack expansion up to the rlimit, like other
architectures.
- Remove the remnants of our (previously disabled) topology update
code, which tried to react to NUMA layout changes on virtualised
systems, but was prone to crashes and other problems.
- Add PMU support for Power10 CPUs.
- A change to our signal trampoline so that we don't unbalance the link
stack (branch return predictor) in the signal delivery path.
- Lots of other cleanups, refactorings, smaller features and so on as
usual.
Thanks to: Abhishek Goel, Alastair D'Silva, Alexander A. Klimov, Alexey
Kardashevskiy, Alistair Popple, Andrew Donnellan, Aneesh Kumar K.V, Anju
T Sudhakar, Anton Blanchard, Arnd Bergmann, Athira Rajeev, Balamuruhan
S, Bharata B Rao, Bill Wendling, Bin Meng, Cédric Le Goater, Chris
Packham, Christophe Leroy, Christoph Hellwig, Daniel Axtens, Dan
Williams, David Lamparter, Desnes A. Nunes do Rosario, Erhard F., Finn
Thain, Frederic Barrat, Ganesh Goudar, Gautham R. Shenoy, Geoff Levand,
Greg Kurz, Gustavo A. R. Silva, Hari Bathini, Harish, Imre Kaloz, Joel
Stanley, Joe Perches, John Crispin, Jordan Niethe, Kajol Jain, Kamalesh
Babulal, Kees Cook, Laurent Dufour, Leonardo Bras, Li RongQing, Madhavan
Srinivasan, Mahesh Salgaonkar, Mark Cave-Ayland, Michal Suchanek, Milton
Miller, Mimi Zohar, Murilo Opsfelder Araujo, Nathan Chancellor, Nathan
Lynch, Naveen N. Rao, Nayna Jain, Nicholas Piggin, Oliver O'Halloran,
Palmer Dabbelt, Pedro Miraglia Franco de Carvalho, Philippe Bergheaud,
Pingfan Liu, Pratik Rajesh Sampat, Qian Cai, Qinglang Miao, Randy
Dunlap, Ravi Bangoria, Sachin Sant, Sam Bobroff, Sandipan Das, Santosh
Sivaraj, Satheesh Rajendran, Shirisha Ganta, Sourabh Jain, Srikar
Dronamraju, Stan Johnson, Stephen Rothwell, Thadeu Lima de Souza
Cascardo, Thiago Jung Bauermann, Tom Lane, Vaibhav Jain, Vladis Dronov,
Wei Yongjun, Wen Xiong, YueHaibing.
* tag 'powerpc-5.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (337 commits)
selftests/powerpc: Fix pkey syscall redefinitions
powerpc: Fix circular dependency between percpu.h and mmu.h
powerpc/powernv/sriov: Fix use of uninitialised variable
selftests/powerpc: Skip vmx/vsx/tar/etc tests on older CPUs
powerpc/40x: Fix assembler warning about r0
powerpc/papr_scm: Add support for fetching nvdimm 'fuel-gauge' metric
powerpc/papr_scm: Fetch nvdimm performance stats from PHYP
cpuidle: pseries: Fixup exit latency for CEDE(0)
cpuidle: pseries: Add function to parse extended CEDE records
cpuidle: pseries: Set the latency-hint before entering CEDE
selftests/powerpc: Fix online CPU selection
powerpc/perf: Consolidate perf_callchain_user_[64|32]()
powerpc/pseries/hotplug-cpu: Remove double free in error path
powerpc/pseries/mobility: Add pr_debug() for device tree changes
powerpc/pseries/mobility: Set pr_fmt()
powerpc/cacheinfo: Warn if cache object chain becomes unordered
powerpc/cacheinfo: Improve diagnostics about malformed cache lists
powerpc/cacheinfo: Use name@unit instead of full DT path in debug messages
powerpc/cacheinfo: Set pr_fmt()
powerpc: fix function annotations to avoid section mismatch warnings with gcc-10
...
|
|
We received an error report that perf-record caused a 'Segmentation fault'
on a newly installed system (e.g. on a freshly installed Ubuntu).
(gdb) backtrace
#0 __read_once_size (size=4, res=<synthetic pointer>, p=0x14) at /root/0-jinyao/acme/tools/include/linux/compiler.h:139
#1 atomic_read (v=0x14) at /root/0-jinyao/acme/tools/include/asm/../../arch/x86/include/asm/atomic.h:28
#2 refcount_read (r=0x14) at /root/0-jinyao/acme/tools/include/linux/refcount.h:65
#3 perf_mmap__read_init (map=map@entry=0x0) at mmap.c:177
#4 0x0000561ce5c0de39 in perf_evlist__poll_thread (arg=0x561ce68584d0) at util/sideband_evlist.c:62
#5 0x00007fad78491609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#6 0x00007fad7823c103 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
The root cause is, evlist__add_bpf_sb_event() just returns 0 if
HAVE_LIBBPF_SUPPORT is not defined (inline function path). So it will
not create a valid evsel for side-band event.
But perf-record still creates the BPF side-band thread to process the
side-band event, and then the error happens.
We can reproduce this issue by removing libelf-dev, e.g.:
1. apt-get remove libelf-dev
2. perf record -a -- sleep 1
root@test:~# ./perf record -a -- sleep 1
perf: Segmentation fault
Obtained 6 stack frames.
./perf(+0x28eee8) [0x5562d6ef6ee8]
/lib/x86_64-linux-gnu/libc.so.6(+0x46210) [0x7fbfdc65f210]
./perf(+0x342e74) [0x5562d6faae74]
./perf(+0x257e39) [0x5562d6ebfe39]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x9609) [0x7fbfdc990609]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x43) [0x7fbfdc73b103]
Segmentation fault (core dumped)
To fix this issue, either:
1. Install the missing libraries so that HAVE_LIBBPF_SUPPORT gets
defined, e.g. apt-get install libelf-dev and other related libraries.
2. Use this patch to skip the side-band event setup if HAVE_LIBBPF_SUPPORT
is not set.
Committer notes:
The side band thread is not used just with BPF, it is also used with
--switch-output-event, so narrow the ifdef to the BPF specific part.
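A toy model of the failure mode and of the narrowed guard, with made-up types and names rather than the builtin-record.c code: when the BPF side-band event is compiled out, the consumer thread must not be left polling an empty evlist.
/* Build with -lpthread. Everything here is a stand-in, not perf code. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct evlist { int nr_entries; };

static struct evlist *setup_sideband(void)
{
	struct evlist *evlist = calloc(1, sizeof(*evlist));

#ifdef HAVE_LIBBPF_SUPPORT
	/* Only here would a real BPF side-band event be added. */
	evlist->nr_entries++;
#endif
	if (!evlist->nr_entries) {	/* the fix: no events, no thread */
		free(evlist);
		return NULL;
	}
	return evlist;
}

static void *poll_thread(void *arg)
{
	struct evlist *evlist = arg;

	/* Before the fix this ran with an empty evlist and dereferenced
	 * a NULL mmap, crashing perf record. */
	printf("polling %d side-band event(s)\n", evlist->nr_entries);
	return NULL;
}

int main(void)
{
	struct evlist *evlist = setup_sideband();
	pthread_t t;

	if (!evlist)
		return 0;	/* skip side-band processing entirely */

	pthread_create(&t, NULL, poll_thread, evlist);
	pthread_join(t, NULL);
	free(evlist);
	return 0;
}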
Fixes: 23cbb41c939a ("perf record: Move side band evlist setup to separate routine")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200805022937.29184-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add support for registers which are new in power10 (MMCR3, SIER2,
SIER3) to sample_reg_mask on the tool side, for use with the `-I?`
option. Also add a PVR check to send the extended mask for power10 to
the kernel while capturing extended regs in each sample.
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add extended regs to sample_reg_mask in the tool side to use with `-I?`
option. Perf tools side uses extended mask to display the platform
supported register names (with -I? option) to the user and also send
this mask to the kernel to capture the extended registers in each
sample. Hence decide the mask value based on the processor version.
Currently the definitions for `mfspr` and `SPRN_PVR` are part of
`arch/powerpc/util/header.c`. Move them to a header file so that these
definitions can be re-used in other source files as well.
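A hedged sketch of deciding the mask at run time from the PVR: the PVR family values and the SPRN_PVR number are the architectural ones, but the mask macros are placeholders rather than the tools/perf definitions, and the mfspr read falls back to a stub when not built on powerpc.
#include <stdint.h>
#include <stdio.h>

#define PVR_POWER9		0x004e
#define PVR_POWER10		0x0080
#define PERF_REG_PMU_MASK_P9	0x00000001ULL	/* placeholder value */
#define PERF_REG_PMU_MASK_P10	0x00000003ULL	/* placeholder value */

static unsigned int pvr_family(void)
{
#if defined(__powerpc64__)
	unsigned long pvr;

	asm volatile("mfspr %0, 287" : "=r"(pvr));	/* SPRN_PVR = 287 */
	return pvr >> 16;	/* family lives in the upper 16 bits */
#else
	return PVR_POWER9;	/* stub so the sketch builds elsewhere */
#endif
}

int main(void)
{
	uint64_t mask = 0;

	switch (pvr_family()) {
	case PVR_POWER10: mask = PERF_REG_PMU_MASK_P10; break;
	case PVR_POWER9:  mask = PERF_REG_PMU_MASK_P9;  break;
	}
	printf("extended regs mask: %#llx\n", (unsigned long long)mask);
	return 0;
}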
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Reviewed-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Tested-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
[Decide extended mask at run time based on platform]
Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
To pick the changes from:
55db9c0e8534 ("net: remove compat_sys_{get,set}sockopt")
9b4feb630e8e ("arch: wire-up close_range()")
That automagically adds the 'close_range' syscall to tools such as 'perf
trace'.
Before:
# perf trace -e close_range
event syntax error: 'close_range'
\___ parser error
Run 'perf list' for a list of valid events
Usage: perf trace [<options>] [<command>]
or: perf trace [<options>] -- <command> [<options>]
or: perf trace record [<options>] [<command>]
or: perf trace record [<options>] -- <command> [<options>]
-e, --event <event> event/syscall selector. use 'perf list' to list available events
#
After, system-wide strace-like tracing for this syscall:
# perf trace -e close_range
^C#
No calls, I need some test proggie :-)
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add a 'tod' field to display a time of day column with the time of day
(wallclock) time.
# perf record -k CLOCK_MONOTONIC kill
kill: not enough arguments
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.033 MB perf.data (8 samples) ]
# perf script
perf 261340 152919.481538: 1 cycles: ffffffff8106d104 ...
perf 261340 152919.481543: 1 cycles: ffffffff8106d104 ...
perf 261340 152919.481545: 7 cycles: ffffffff8106d104 ...
...
# perf script --ns
perf 261340 152919.481538922: 1 cycles: ffffffff8106d ...
perf 261340 152919.481543286: 1 cycles: ffffffff8106d ...
perf 261340 152919.481545397: 7 cycles: ffffffff8106d ...
...
# perf script -F+tod
perf 261340 2020-07-13 18:26:55.620971 152919.481538: ...
perf 261340 2020-07-13 18:26:55.620975 152919.481543: ...
perf 261340 2020-07-13 18:26:55.620978 152919.481545: ...
...
# perf script -F+tod --ns
perf 261340 2020-07-13 18:26:55.620971621 152919.481538922: ...
perf 261340 2020-07-13 18:26:55.620975985 152919.481543286: ...
perf 261340 2020-07-13 18:26:55.620978096 152919.481545397: ...
...
It's available only for recording with a clockid specified, because
that's the only case where we can get a reference from clock time to
wallclock time. We can't do that with the perf clock yet.
An error is displayed if you try to use --tod on data recorded without a
clockid specified:
# perf script -F+tod
Can't provide 'tod' time, missing clock data. Please record with -k/--clockid option.
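A minimal sketch of the arithmetic behind the 'tod' column, assuming the reference pair (wall_clock_ns, clockid_time_ns) stored by the CLOCK_DATA header feature; the helper and the numbers below are made up purely to show the formula, not the perf script code.
#include <inttypes.h>
#include <stdio.h>

/* Shift a sample timestamp taken on the recording clockid into
 * wall-clock time using the reference pair from the perf.data header. */
static uint64_t sample_time_to_tod_ns(uint64_t sample_ns,
				      uint64_t wall_clock_ns,
				      uint64_t clockid_time_ns)
{
	return wall_clock_ns + (sample_ns - clockid_time_ns);
}

int main(void)
{
	/* Made-up example values. */
	uint64_t wall_clock_ns   = 1594654015620971000ULL;
	uint64_t clockid_time_ns =      152919481000000ULL;
	uint64_t sample_ns       =      152919481538922ULL;

	printf("tod = %" PRIu64 " ns\n",
	       sample_time_to_tod_ns(sample_ns, wall_clock_ns, clockid_time_ns));
	return 0;
}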
Original-patch-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Geneviève Bastien <gbastien@versatic.net>
Cc: Ian Rogers <irogers@google.com>
Cc: Jeremie Galarneau <jgalar@efficios.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lore.kernel.org/lkml/20200805093444.314999-8-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
So it's possible to add new values. I did not find any place where the
enum values are passed through some number type, so it's safe to make
this change.
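An illustration of why the width matters, using hypothetical macro names rather than the actual enumerators: once more than 32 output fields exist, both the selection mask and the bit definitions need to be 64 bits wide.
#include <inttypes.h>
#include <stdio.h>

#define OUTPUT_FIELD_A	(1ULL << 0)	/* hypothetical field names */
#define OUTPUT_FIELD_B	(1ULL << 33)	/* would not fit in a 32-bit value */

int main(void)
{
	uint64_t fields = OUTPUT_FIELD_A | OUTPUT_FIELD_B;

	printf("selected fields mask = %#" PRIx64 "\n", fields);
	return 0;
}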
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Geneviève Bastien <gbastien@versatic.net>
Cc: Ian Rogers <irogers@google.com>
Cc: Jeremie Galarneau <jgalar@efficios.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lore.kernel.org/lkml/20200805093444.314999-7-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add support to convert and store time of day in the CTF data conversion
for the 'perf data convert' subcommand.
The perf.data file used for conversion needs to have clock data
information (it must be recorded with the -k/--clockid option).
A new --tod option is added to the 'perf data convert' subcommand to
convert data with timestamps converted to wall clock time.
Record data with clockid set:
# perf record -k CLOCK_MONOTONIC kill
kill: not enough arguments
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.033 MB perf.data (8 samples) ]
Convert data with TOD timestamps:
# perf data convert --tod --to-ctf ./ctf
[ perf data convert: Converted 'perf.data' into CTF data './ctf' ]
[ perf data convert: Converted and wrote 0.000 MB (8 samples) ]
Display data in perf script:
# perf script -F+tod --ns
perf 262150 2020-07-13 18:38:50.097678523 153633.958246159: 1 cycles: ...
perf 262150 2020-07-13 18:38:50.097682941 153633.958250577: 1 cycles: ...
perf 262150 2020-07-13 18:38:50.097684997 153633.958252633: 7 cycles: ...
...
Display data in babeltrace:
# babeltrace --clock-date ./ctf
[2020-07-13 18:38:50.097678523] (+?.?????????) cycles: { cpu_id = 0 }, { perf_ip = 0xFFF ...
[2020-07-13 18:38:50.097682941] (+0.000004418) cycles: { cpu_id = 0 }, { perf_ip = 0xFFF ...
[2020-07-13 18:38:50.097684997] (+0.000002056) cycles: { cpu_id = 0 }, { perf_ip = 0xFFF ...
...
It's available only for recording with a clockid specified, because
that's the only case where we can get a reference from clock time to
wallclock time. We can't do that with the perf clock yet.
An error is displayed if you try to use --tod on data recorded without a
clockid specified:
# perf data convert --tod --to-ctf ./ctf
Can't provide --tod time, missing clock data. Please record with -k/--clockid option.
Failed to setup CTF writer.
Error during conversion setup.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Geneviève Bastien <gbastien@versatic.net>
Cc: Ian Rogers <irogers@google.com>
Cc: Jeremie Galarneau <jgalar@efficios.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lore.kernel.org/lkml/20200805093444.314999-6-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Move the clockid_res_ns struct member to the clock struct, so we have
the clock related stuff in one place.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Geneviève Bastien <gbastien@versatic.net>
Cc: Ian Rogers <irogers@google.com>
Cc: Jeremie Galarneau <jgalar@efficios.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lore.kernel.org/lkml/20200805093444.314999-5-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add a new CLOCK_DATA feature that stores reference times when the
-k/--clockid option is specified.
It contains the clock id and its reference time together with the wall
clock time taken at the 'same time'; both values are in nanoseconds.
The format of the data is as below:
struct {
u32 version; /* version = 1 */
u32 clockid;
u64 wall_clock_ns;
u64 clockid_time_ns;
};
These clock reference times will be used in following changes to display
wall clock time for perf events.
It's available only for recording with a clockid specified, because
that's the only case where we can get a reference from clock time to
wallclock time. We can't do that with the perf clock yet.
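A minimal sketch, assuming the reference pair is simply captured with two back-to-back clock_gettime() calls and stored in the struct above; this is illustrative, not the header-feature implementation.
#include <inttypes.h>
#include <stdio.h>
#include <time.h>

struct clock_data {
	uint32_t version;
	uint32_t clockid;
	uint64_t wall_clock_ns;
	uint64_t clockid_time_ns;
};

static uint64_t ns(const struct timespec *ts)
{
	return (uint64_t)ts->tv_sec * 1000000000ULL + ts->tv_nsec;
}

int main(void)
{
	struct timespec wall, ref;
	struct clock_data data = { .version = 1, .clockid = CLOCK_MONOTONIC };

	/* Taken as close together as possible, i.e. at the 'same time'. */
	clock_gettime(CLOCK_REALTIME, &wall);
	clock_gettime(data.clockid, &ref);

	data.wall_clock_ns   = ns(&wall);
	data.clockid_time_ns = ns(&ref);

	printf("wall %" PRIu64 " ns, clockid %" PRIu64 " ns\n",
	       data.wall_clock_ns, data.clockid_time_ns);
	return 0;
}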
Committer testing:
$ perf record -h -k
Usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-k, --clockid <clockid>
clockid to use for events, see clock_gettime()
$ perf record -k monotonic sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.017 MB perf.data (8 samples) ]
$ perf report --header-only | grep clockid -A1
# event : name = cycles:u, , id = { 88815, 88816, 88817, 88818, 88819, 88820, 88821, 88822 }, size = 120, { sample_period, sample_freq } = 4000, sample_type = IP|TID|TIME|PERIOD, read_format = ID, disabled = 1, inherit = 1, exclude_kernel = 1, mmap = 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3, sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1, use_clockid = 1, ksymbol = 1, bpf_event = 1, clockid = 1
# CPU_TOPOLOGY info available, use -I to display
--
# clockid frequency: 1000 MHz
# cpu pmu capabilities: branches=32, max_precise=3, pmu_name=skylake
# clockid: monotonic (1)
# reference time: 2020-08-06 09:40:21.619290 = 1596717621.619290 (TOD) = 21931.077673635 (monotonic)
$
Original-patch-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Geneviève Bastien <gbastien@versatic.net>
Cc: Ian Rogers <irogers@google.com>
Cc: Jeremie Galarneau <jgalar@efficios.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lore.kernel.org/lkml/20200805093444.314999-4-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add the clockid_name() function to get the clock name based on its
clockid. It will be used in the following changes.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Geneviève Bastien <gbastien@versatic.net>
Cc: Ian Rogers <irogers@google.com>
Cc: Jeremie Galarneau <jgalar@efficios.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lore.kernel.org/lkml/20200805093444.314999-3-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Move parse_clockid() and all the needed clockid related stuff into a
clockid object. We are going to add the clockid_name() function in a
following change, so it's better placed in a separate object and not in
builtin-record.c.
No functional change is intended.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Geneviève Bastien <gbastien@versatic.net>
Cc: Ian Rogers <irogers@google.com>
Cc: Jeremie Galarneau <jgalar@efficios.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lore.kernel.org/lkml/20200805093444.314999-2-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Adjust the limited access message to mention the CAP_SYS_PTRACE
capability for processes of unprivileged users. Add a link to the perf
security document at the end of the section about capabilities.
The change has been inspired by this discussion:
https://lore.kernel.org/lkml/20200722113007.GI77866@kernel.org/
Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/6f8a7425-6e7d-19aa-1605-e59836b9e2a6@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
A single q option decodes ip from only FUP/TIP packets. Make it so that
repeating the q option (i.e. qq) decodes only PSB+, getting ip if there
is a FUP packet within PSB+ (i.e. between PSB and PSBEND).
Example:
$ perf record -e intel_pt//u grep -rI pudding drivers
[ perf record: Woken up 52 times to write data ]
[ perf record: Captured and wrote 57.870 MB perf.data ]
$ time perf script --itrace=bi | wc -l
58948289
real 1m23.863s
user 1m23.251s
sys 0m7.452s
$ time perf script --itrace=biq | wc -l
3385694
real 0m4.453s
user 0m4.455s
sys 0m0.328s
$ time perf script --itrace=biqq | wc -l
1883
real 0m0.047s
user 0m0.043s
sys 0m0.009s
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-13-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Use the new itrace 'q' option to add support for a mode of decoding that
ignores TNT, does not walk object code, but gets the ip from FUP and TIP
packets.
Example:
$ perf record -e intel_pt//u grep -rI pudding drivers
[ perf record: Woken up 52 times to write data ]
[ perf record: Captured and wrote 57.870 MB perf.data ]
$ time perf script --itrace=bi | wc -l
58948289
real 1m23.863s
user 1m23.251s
sys 0m7.452s
$ time perf script --itrace=biq | wc -l
3385694
real 0m4.453s
user 0m4.455s
sys 0m0.328s
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-12-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
The 'q' option is for modes of decoding that are quicker because they
skip or omit decoding some aspects of trace data.
If supported, the 'q' option may be repeated to increase the effect.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-11-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Change the debug logging (when used with the --time option) to time
filter logged perf events, but allow that to be overridden by using
"d+a" instead of plain "d".
That can reduce the size of the log file.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-10-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
The "d" option may be followed by flags which affect what debug messages
will or will not be logged. Each flag must be preceded by either '+' or
'-'. The flags supported by Intel PT are:
-a Suppress logging of perf events
Suppressing perf events is useful for decreasing the size of the log.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-9-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Allow the 'd' option to be followed by flags which will affect what debug
messages will or will not be reported. Each flag must be preceded by either
'+' or '-'. The flags are:
a all perf events
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-8-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
The itrace "e" option may be followed by flags which affect what errors
will or will not be reported. Each flag must be preceded by either '+' or '-'.
The flags supported by Intel PT are:
-o Suppress overflow errors
-l Suppress trace data lost errors
For example, for errors but not overflow or data lost errors:
--itrace=e-o-l
Suppressing those errors can be useful for testing and debugging because
they are not due to decoding.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-7-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Allow the 'e' option to be followed by flags which will affect what errors
will or will not be reported. Each flag must be preceded by either '+' or
'-'. The flags are:
o overflow
l trace data lost
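A hedged, standalone sketch of the '+'/'-' flag-parsing pattern described above (not the itrace option parser): each character after the option toggles a bit on or off depending on the preceding sign.
#include <stdbool.h>
#include <stdio.h>

#define FLAG_OVERFLOW	(1u << 0)	/* 'o' */
#define FLAG_DATA_LOST	(1u << 1)	/* 'l' */

static int parse_err_flags(const char *s, unsigned int *flags)
{
	bool set = true;
	unsigned int bit;

	for (; *s; s++) {
		switch (*s) {
		case '+': set = true;  continue;
		case '-': set = false; continue;
		case 'o': bit = FLAG_OVERFLOW;  break;
		case 'l': bit = FLAG_DATA_LOST; break;
		default:  return -1;	/* unknown flag character */
		}
		if (set)
			*flags |= bit;
		else
			*flags &= ~bit;
	}
	return 0;
}

int main(void)
{
	/* Start with everything reported, then apply "e-o-l". */
	unsigned int flags = FLAG_OVERFLOW | FLAG_DATA_LOST;

	if (!parse_err_flags("-o-l", &flags))
		printf("flags = %#x\n", flags);	/* -> 0, both suppressed */
	return 0;
}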
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-6-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Add missing itrace options o, G and L.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-5-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
For example:
Before:
$ perf record -e '{intel_pt/branch=0/,branch-loads/aux-output/ppp}' -- ls -l
Error:
branch-loads: PMU Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'
After:
$ perf record -e '{intel_pt/branch=0/,branch-loads/aux-output/ppp}' -- ls -l
Error:
branch-loads: PMU Hardware doesn't support 'aux_output' feature
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lore.kernel.org/lkml/20200710151104.15137-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
CBR events can result in a duplicate branch event, because the state
type defaults to a branch. Fix by clearing the state type.
Example: trace 'sleep' and hope for a frequency change
Before:
$ perf record -e intel_pt//u sleep 0.1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.034 MB perf.data ]
$ perf script --itrace=bpe > before.txt
After:
$ perf script --itrace=bpe > after.txt
$ diff -u before.txt after.txt
--- before.txt 2020-07-07 14:42:18.191508098 +0300
+++ after.txt 2020-07-07 14:42:36.587891753 +0300
@@ -29673,7 +29673,6 @@
sleep 93431 [007] 15411.619905: 1 branches:u: 0 [unknown] ([unknown]) => 7f0818abb2e0 clock_nanosleep@@GLIBC_2.17+0x0 (/usr/lib/x86_64-linux-gnu/libc-2.31.so)
sleep 93431 [007] 15411.619905: 1 branches:u: 7f0818abb30c clock_nanosleep@@GLIBC_2.17+0x2c (/usr/lib/x86_64-linux-gnu/libc-2.31.so) => 0 [unknown] ([unknown])
sleep 93431 [007] 15411.720069: cbr: cbr: 15 freq: 1507 MHz ( 56%) 7f0818abb30c clock_nanosleep@@GLIBC_2.17+0x2c (/usr/lib/x86_64-linux-gnu/libc-2.31.so)
- sleep 93431 [007] 15411.720069: 1 branches:u: 7f0818abb30c clock_nanosleep@@GLIBC_2.17+0x2c (/usr/lib/x86_64-linux-gnu/libc-2.31.so) => 0 [unknown] ([unknown])
sleep 93431 [007] 15411.720076: 1 branches:u: 0 [unknown] ([unknown]) => 7f0818abb30e clock_nanosleep@@GLIBC_2.17+0x2e (/usr/lib/x86_64-linux-gnu/libc-2.31.so)
sleep 93431 [007] 15411.720077: 1 branches:u: 7f0818abb323 clock_nanosleep@@GLIBC_2.17+0x43 (/usr/lib/x86_64-linux-gnu/libc-2.31.so) => 7f0818ac0eb7 __nanosleep+0x17 (/usr/lib/x86_64-linux-gnu/libc-2.31.so)
sleep 93431 [007] 15411.720077: 1 branches:u: 7f0818ac0ebf __nanosleep+0x1f (/usr/lib/x86_64-linux-gnu/libc-2.31.so) => 55cb7e4c2827 rpl_nanosleep+0x97 (/usr/bin/sleep)
Fixes: 91de8684f1cff ("perf intel-pt: Cater for CBR change in PSB+")
Fixes: abe5a1d3e4bee ("perf intel-pt: Decoder to output CBR changes immediately")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@vger.kernel.org
Link: http://lore.kernel.org/lkml/20200710151104.15137-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
While walking code towards a FUP ip, the packet state is
INTEL_PT_STATE_FUP or INTEL_PT_STATE_FUP_NO_TIP. That was mishandled
resulting in the state becoming INTEL_PT_STATE_IN_SYNC prematurely. The
result was an occasional lost EXSTOP event.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@vger.kernel.org
Link: http://lore.kernel.org/lkml/20200710151104.15137-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
To sync headers, for instance, in this case tools/perf was ahead of
upstream till Linus merged tip/perf/core to get the
PERF_RECORD_TEXT_POKE changes:
Warning: Kernel ABI header at 'tools/include/uapi/linux/perf_event.h' differs from latest version at 'include/uapi/linux/perf_event.h'
diff -u tools/include/uapi/linux/perf_event.h include/uapi/linux/perf_event.h
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Since commit 0a892c1c9472 ("perf record: Add dummy event during system wide synthesis"),
a dummy event is added to capture mmaps.
But if we run perf-record as,
# perf record -e cycles:p -IXMM0 -a -- sleep 1
Error:
dummy:HG: PMU Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'
The issue is, if we enable the extended regs (-IXMM0) but
pmu->capabilities is not set with PERF_PMU_CAP_EXTENDED_REGS, the kernel
will return an -EOPNOTSUPP error.
See following code:
/* in kernel/events/core.c */
static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
{
....
if (!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_REGS) &&
has_extended_regs(event))
ret = -EOPNOTSUPP;
....
}
For a software dummy event, the PMU should not be set with
PERF_PMU_CAP_EXTENDED_REGS. But unfortunately, the dummy event
currently has the possibility of being set with the PERF_REG_EXTENDED_MASK
bit.
In evsel__config, /* tools/perf/util/evsel.c */
if (opts->sample_intr_regs) {
attr->sample_regs_intr = opts->sample_intr_regs;
}
If we use -IXMM0, attr->sample_regs_intr will be set with the
PERF_REG_EXTENDED_MASK bit.
It doesn't make sense to set attr->sample_regs_intr for a
software dummy event.
This patch adds dummy event checking before setting
attr->sample_regs_intr and attr->sample_regs_user.
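A toy paraphrase of the shape of the change, using made-up types rather than the evsel__config() code: the user-requested register masks are only applied to events that can actually sample them, never to the dummy event. The extended-mask value below is a stand-in.
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_attr { uint64_t sample_regs_intr, sample_regs_user; };
struct fake_evsel { struct fake_attr attr; bool dummy; };

static void config_regs(struct fake_evsel *evsel,
			uint64_t intr_regs, uint64_t user_regs)
{
	/* The fix: skip the dummy event, which exists only to capture
	 * side-band data such as mmaps and must not request regs. */
	if (evsel->dummy)
		return;

	if (intr_regs)
		evsel->attr.sample_regs_intr = intr_regs;
	if (user_regs)
		evsel->attr.sample_regs_user = user_regs;
}

int main(void)
{
	struct fake_evsel cycles = { .dummy = false }, dummy = { .dummy = true };
	uint64_t xmm0_mask = 1ULL << 32;	/* stand-in, not PERF_REG_EXTENDED_MASK */

	config_regs(&cycles, xmm0_mask, 0);
	config_regs(&dummy, xmm0_mask, 0);
	printf("cycles intr regs %#llx, dummy intr regs %#llx\n",
	       (unsigned long long)cycles.attr.sample_regs_intr,
	       (unsigned long long)dummy.attr.sample_regs_intr);
	return 0;
}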
After:
# ./perf record -e cycles:p -IXMM0 -a -- sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.413 MB perf.data (45 samples) ]
Committer notes:
Adrian said this when providing his Acked-by:
"
This is fine. It will not break PT.
no_aux_samples is useful for evsels that have been added by the code rather
than requested by the user. For old kernels PT adds sched_switch tracepoint
to track context switches (before the current context switch event was
added) and having auxiliary sample information unnecessarily uses up space
in the perf buffer.
"
Fixes: 0a892c1c9472 ("perf record: Add dummy event during system wide synthesis")
Signed-off-by: Jin Yao <yao.jin@linux.intel.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jin Yao <yao.jin@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/20200720010013.18238-1-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Introduce --control fd:ctl-fd[,ack-fd] options to pass open file
descriptor numbers from the command line.
Extend the perf-record.txt file with a description of the --control
fd:ctl-fd[,ack-fd] options.
Document a possible usage model introduced by the --control
fd:ctl-fd[,ack-fd] options by providing an example bash shell script.
Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/8dc01e1a-3a80-3f67-5385-4bc7112b0dd3@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Implement handling of 'enable' and 'disable' control commands coming
from the control file descriptor.
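A minimal standalone sketch of the control-fd idea, assuming plain 'enable'/'disable' strings are written to the descriptor; the command framing here is illustrative, not the exact perf protocol, and a real ack-fd reply is omitted.
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void handle_cmd(const char *cmd, bool *enabled)
{
	if (!strncmp(cmd, "enable", 6))
		*enabled = true;
	else if (!strncmp(cmd, "disable", 7))
		*enabled = false;
	printf("events are now %s\n", *enabled ? "enabled" : "disabled");
}

int main(void)
{
	int ctl[2];
	char buf[32];
	bool enabled = false;	/* as with --delay=-1: start disabled */
	ssize_t n;

	if (pipe(ctl))
		return 1;

	/* In real usage the write end belongs to the controlling process
	 * and its number is passed via --control fd:ctl-fd[,ack-fd]. */
	write(ctl[1], "enable", 6);
	close(ctl[1]);

	while ((n = read(ctl[0], buf, sizeof(buf) - 1)) > 0) {
		buf[n] = '\0';
		handle_cmd(buf, &enabled);
	}
	close(ctl[0]);
	return 0;
}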
Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/f0fde590-1320-dca1-39ff-da3322704d3b@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Extend the -D,--delay option with -1 to start collection with events
disabled, to be enabled later by an 'enable' command provided via the
control file descriptor.
Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/3e7d362c-7973-ee5d-e81e-c60ea22432c3@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Introduce --control fd:ctl-fd[,ack-fd] options to pass open file
descriptor numbers from the command line. Extend the perf-stat.txt file
with a description of the --control fd:ctl-fd[,ack-fd] options. Document
a possible usage model introduced by the --control fd:ctl-fd[,ack-fd]
options by providing an example bash shell script.
Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lore.kernel.org/lkml/feabd5cf-0155-fb0a-4587-c71571f2d517@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Minor conflict in tools/perf/arch/arm/util/auxtrace.c, as one fix there
was cherry-picked for the last perf/urgent pull request to Linus, so it
was already there.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Resolved kernel/bpf/btf.c using instructions from merge commit
69138b34a7248d2396ab85c8652e20c0c39beaba
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
for_each_set_bit, or similar functions like for_each_cpu, may be hot
within the kernel. If many bits are set, then one could imagine that, on
Intel, a "bt" instruction on every bit may be faster than the function
call and word-length find_next_bit logic. Add a benchmark to measure this.
This benchmark on AMD Rome and Intel Skylake-X shows that "bt" is not a
good option except for very small bitmaps.
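The two strategies being compared look roughly like this, as a self-contained sketch using userspace stand-ins (and a GCC builtin) rather than the kernel's for_each_set_bit and test_bit implementations.
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define NBITS		128

static unsigned long bitmap[NBITS / BITS_PER_LONG];

static int test_bit(unsigned int bit)
{
	return (bitmap[bit / BITS_PER_LONG] >> (bit % BITS_PER_LONG)) & 1;
}

/* Simplified find_next_bit: skip whole zero words, then use a builtin to
 * find the lowest set bit in the first non-zero word. */
static unsigned int find_next_bit(unsigned int from)
{
	for (unsigned int word = from / BITS_PER_LONG;
	     word < NBITS / BITS_PER_LONG; word++) {
		unsigned long w = bitmap[word];

		if (word == from / BITS_PER_LONG)
			w &= ~0UL << (from % BITS_PER_LONG);
		if (w)
			return word * BITS_PER_LONG + __builtin_ctzl(w);
	}
	return NBITS;
}

int main(void)
{
	unsigned int bit, hits = 0;

	bitmap[0] = 0x5;	/* bits 0 and 2 set */

	/* for_each_set_bit style: jump from one set bit to the next. */
	for (bit = find_next_bit(0); bit < NBITS; bit = find_next_bit(bit + 1))
		hits++;

	/* test_bit loop style ("bt" per bit): probe every position. */
	for (bit = 0; bit < NBITS; bit++)
		hits += test_bit(bit);

	printf("counted %u set bits across both loops\n", hits);
	return 0;
}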
Committer testing:
# perf bench
Usage:
perf bench [<common options>] <collection> <benchmark> [<options>]
# List of all available benchmark collections:
sched: Scheduler and IPC benchmarks
syscall: System call benchmarks
mem: Memory access benchmarks
numa: NUMA scheduling and MM benchmarks
futex: Futex stressing benchmarks
epoll: Epoll stressing benchmarks
internals: Perf-internals benchmarks
all: All benchmarks
# perf bench mem
# List of available benchmarks for collection 'mem':
memcpy: Benchmark for memcpy() functions
memset: Benchmark for memset() functions
find_bit: Benchmark for find_bit() functions
all: Run all memory access benchmarks
# perf bench mem find_bit
# Running 'mem/find_bit' benchmark:
100000 operations 1 bits set of 1 bits
Average for_each_set_bit took: 730.200 usec (+- 6.468 usec)
Average test_bit loop took: 366.200 usec (+- 4.652 usec)
100000 operations 1 bits set of 2 bits
Average for_each_set_bit took: 781.000 usec (+- 24.247 usec)
Average test_bit loop took: 550.200 usec (+- 4.152 usec)
100000 operations 2 bits set of 2 bits
Average for_each_set_bit took: 1113.400 usec (+- 112.340 usec)
Average test_bit loop took: 1098.500 usec (+- 182.834 usec)
100000 operations 1 bits set of 4 bits
Average for_each_set_bit took: 843.800 usec (+- 8.772 usec)
Average test_bit loop took: 948.800 usec (+- 10.278 usec)
100000 operations 2 bits set of 4 bits
Average for_each_set_bit took: 1185.800 usec (+- 114.345 usec)
Average test_bit loop took: 1473.200 usec (+- 175.498 usec)
100000 operations 4 bits set of 4 bits
Average for_each_set_bit took: 1769.667 usec (+- 233.177 usec)
Average test_bit loop took: 1864.933 usec (+- 187.470 usec)
100000 operations 1 bits set of 8 bits
Average for_each_set_bit took: 898.000 usec (+- 21.755 usec)
Average test_bit loop took: 1768.400 usec (+- 23.672 usec)
100000 operations 2 bits set of 8 bits
Average for_each_set_bit took: 1244.900 usec (+- 116.396 usec)
Average test_bit loop took: 2201.800 usec (+- 145.398 usec)
100000 operations 4 bits set of 8 bits
Average for_each_set_bit took: 1822.533 usec (+- 231.554 usec)
Average test_bit loop took: 2569.467 usec (+- 168.453 usec)
100000 operations 8 bits set of 8 bits
Average for_each_set_bit took: 2845.100 usec (+- 441.365 usec)
Average test_bit loop took: 3023.300 usec (+- 219.575 usec)
100000 operations 1 bits set of 16 bits
Average for_each_set_bit took: 923.400 usec (+- 17.560 usec)
Average test_bit loop took: 3240.000 usec (+- 16.492 usec)
100000 operations 2 bits set of 16 bits
Average for_each_set_bit took: 1264.300 usec (+- 114.034 usec)
Average test_bit loop took: 3714.400 usec (+- 158.898 usec)
100000 operations 4 bits set of 16 bits
Average for_each_set_bit took: 1817.867 usec (+- 222.199 usec)
Average test_bit loop took: 4015.333 usec (+- 154.162 usec)
100000 operations 8 bits set of 16 bits
Average for_each_set_bit took: 2826.350 usec (+- 433.457 usec)
Average test_bit loop took: 4460.350 usec (+- 210.762 usec)
100000 operations 16 bits set of 16 bits
Average for_each_set_bit took: 4615.600 usec (+- 809.350 usec)
Average test_bit loop took: 5129.960 usec (+- 320.821 usec)
100000 operations 1 bits set of 32 bits
Average for_each_set_bit took: 904.400 usec (+- 14.250 usec)
Average test_bit loop took: 6194.000 usec (+- 29.254 usec)
100000 operations 2 bits set of 32 bits
Average for_each_set_bit took: 1252.700 usec (+- 116.432 usec)
Average test_bit loop took: 6652.400 usec (+- 154.352 usec)
100000 operations 4 bits set of 32 bits
Average for_each_set_bit took: 1824.200 usec (+- 229.133 usec)
Average test_bit loop took: 6961.733 usec (+- 154.682 usec)
100000 operations 8 bits set of 32 bits
Average for_each_set_bit took: 2823.950 usec (+- 432.296 usec)
Average test_bit loop took: 7351.900 usec (+- 193.626 usec)
100000 operations 16 bits set of 32 bits
Average for_each_set_bit took: 4552.560 usec (+- 785.141 usec)
Average test_bit loop took: 7998.360 usec (+- 305.629 usec)
100000 operations 32 bits set of 32 bits
Average for_each_set_bit took: 7557.067 usec (+- 1407.702 usec)
Average test_bit loop took: 9072.400 usec (+- 513.209 usec)
100000 operations 1 bits set of 64 bits
Average for_each_set_bit took: 896.800 usec (+- 14.389 usec)
Average test_bit loop took: 11927.200 usec (+- 68.862 usec)
100000 operations 2 bits set of 64 bits
Average for_each_set_bit took: 1230.400 usec (+- 111.731 usec)
Average test_bit loop took: 12478.600 usec (+- 189.382 usec)
100000 operations 4 bits set of 64 bits
Average for_each_set_bit took: 1844.733 usec (+- 244.826 usec)
Average test_bit loop took: 12911.467 usec (+- 206.246 usec)
100000 operations 8 bits set of 64 bits
Average for_each_set_bit took: 2779.300 usec (+- 413.612 usec)
Average test_bit loop took: 13372.650 usec (+- 239.623 usec)
100000 operations 16 bits set of 64 bits
Average for_each_set_bit took: 4423.920 usec (+- 748.240 usec)
Average test_bit loop took: 13995.800 usec (+- 318.427 usec)
100000 operations 32 bits set of 64 bits
Average for_each_set_bit took: 7580.600 usec (+- 1462.407 usec)
Average test_bit loop took: 15063.067 usec (+- 516.477 usec)
100000 operations 64 bits set of 64 bits
Average for_each_set_bit took: 13391.514 usec (+- 2765.371 usec)
Average test_bit loop took: 16974.914 usec (+- 916.936 usec)
100000 operations 1 bits set of 128 bits
Average for_each_set_bit took: 1153.800 usec (+- 124.245 usec)
Average test_bit loop took: 26959.000 usec (+- 714.047 usec)
100000 operations 2 bits set of 128 bits
Average for_each_set_bit took: 1445.200 usec (+- 113.587 usec)
Average test_bit loop took: 25798.800 usec (+- 512.908 usec)
100000 operations 4 bits set of 128 bits
Average for_each_set_bit took: 1990.933 usec (+- 219.362 usec)
Average test_bit loop took: 25589.400 usec (+- 348.288 usec)
100000 operations 8 bits set of 128 bits
Average for_each_set_bit took: 2963.000 usec (+- 419.487 usec)
Average test_bit loop took: 25690.050 usec (+- 262.025 usec)
100000 operations 16 bits set of 128 bits
Average for_each_set_bit took: 4585.200 usec (+- 741.734 usec)
Average test_bit loop took: 26125.040 usec (+- 274.127 usec)
100000 operations 32 bits set of 128 bits
Average for_each_set_bit took: 7626.200 usec (+- 1404.950 usec)
Average test_bit loop took: 27038.867 usec (+- 442.554 usec)
100000 operations 64 bits set of 128 bits
Average for_each_set_bit took: 13343.371 usec (+- 2686.460 usec)
Average test_bit loop took: 28936.543 usec (+- 883.257 usec)
100000 operations 128 bits set of 128 bits
Average for_each_set_bit took: 23442.950 usec (+- 4880.541 usec)
Average test_bit loop took: 32484.125 usec (+- 1691.931 usec)
100000 operations 1 bits set of 256 bits
Average for_each_set_bit took: 1183.000 usec (+- 32.073 usec)
Average test_bit loop took: 50114.600 usec (+- 198.880 usec)
100000 operations 2 bits set of 256 bits
Average for_each_set_bit took: 1550.000 usec (+- 124.550 usec)
Average test_bit loop took: 50334.200 usec (+- 128.425 usec)
100000 operations 4 bits set of 256 bits
Average for_each_set_bit took: 2164.333 usec (+- 246.359 usec)
Average test_bit loop took: 49959.867 usec (+- 188.035 usec)
100000 operations 8 bits set of 256 bits
Average for_each_set_bit took: 3211.200 usec (+- 454.829 usec)
Average test_bit loop took: 50140.850 usec (+- 176.046 usec)
100000 operations 16 bits set of 256 bits
Average for_each_set_bit took: 5181.640 usec (+- 882.726 usec)
Average test_bit loop took: 51003.160 usec (+- 419.601 usec)
100000 operations 32 bits set of 256 bits
Average for_each_set_bit took: 8369.333 usec (+- 1513.150 usec)
Average test_bit loop took: 52096.700 usec (+- 573.022 usec)
100000 operations 64 bits set of 256 bits
Average for_each_set_bit took: 13866.857 usec (+- 2649.393 usec)
Average test_bit loop took: 53989.600 usec (+- 938.808 usec)
100000 operations 128 bits set of 256 bits
Average for_each_set_bit took: 23588.350 usec (+- 4724.222 usec)
Average test_bit loop took: 57300.625 usec (+- 1625.962 usec)
100000 operations 256 bits set of 256 bits
Average for_each_set_bit took: 42752.200 usec (+- 9202.084 usec)
Average test_bit loop took: 64426.933 usec (+- 3402.326 usec)
100000 operations 1 bits set of 512 bits
Average for_each_set_bit took: 1632.000 usec (+- 229.954 usec)
Average test_bit loop took: 98090.000 usec (+- 1120.435 usec)
100000 operations 2 bits set of 512 bits
Average for_each_set_bit took: 1937.700 usec (+- 148.902 usec)
Average test_bit loop took: 100364.100 usec (+- 1433.219 usec)
100000 operations 4 bits set of 512 bits
Average for_each_set_bit took: 2528.000 usec (+- 243.654 usec)
Average test_bit loop took: 99932.067 usec (+- 955.868 usec)
100000 operations 8 bits set of 512 bits
Average for_each_set_bit took: 3734.100 usec (+- 512.359 usec)
Average test_bit loop took: 98944.750 usec (+- 812.070 usec)
100000 operations 16 bits set of 512 bits
Average for_each_set_bit took: 5551.400 usec (+- 846.605 usec)
Average test_bit loop took: 98691.600 usec (+- 654.753 usec)
100000 operations 32 bits set of 512 bits
Average for_each_set_bit took: 8594.500 usec (+- 1446.072 usec)
Average test_bit loop took: 99176.867 usec (+- 579.990 usec)
100000 operations 64 bits set of 512 bits
Average for_each_set_bit took: 13840.743 usec (+- 2527.055 usec)
Average test_bit loop took: 100758.743 usec (+- 833.865 usec)
100000 operations 128 bits set of 512 bits
Average for_each_set_bit took: 23185.925 usec (+- 4532.910 usec)
Average test_bit loop took: 103786.700 usec (+- 1475.276 usec)
100000 operations 256 bits set of 512 bits
Average for_each_set_bit took: 40322.400 usec (+- 8341.802 usec)
Average test_bit loop took: 109433.378 usec (+- 2742.615 usec)
100000 operations 512 bits set of 512 bits
Average for_each_set_bit took: 71804.540 usec (+- 15436.546 usec)
Average test_bit loop took: 120255.440 usec (+- 5252.777 usec)
100000 operations 1 bits set of 1024 bits
Average for_each_set_bit took: 1859.600 usec (+- 27.969 usec)
Average test_bit loop took: 187676.000 usec (+- 1337.770 usec)
100000 operations 2 bits set of 1024 bits
Average for_each_set_bit took: 2273.600 usec (+- 139.420 usec)
Average test_bit loop took: 188176.000 usec (+- 684.357 usec)
100000 operations 4 bits set of 1024 bits
Average for_each_set_bit took: 2940.400 usec (+- 268.213 usec)
Average test_bit loop took: 189172.600 usec (+- 593.295 usec)
100000 operations 8 bits set of 1024 bits
Average for_each_set_bit took: 4224.200 usec (+- 547.933 usec)
Average test_bit loop took: 190257.250 usec (+- 621.021 usec)
100000 operations 16 bits set of 1024 bits
Average for_each_set_bit took: 6090.560 usec (+- 877.975 usec)
Average test_bit loop took: 190143.880 usec (+- 503.753 usec)
100000 operations 32 bits set of 1024 bits
Average for_each_set_bit took: 9178.800 usec (+- 1475.136 usec)
Average test_bit loop took: 190757.100 usec (+- 494.757 usec)
100000 operations 64 bits set of 1024 bits
Average for_each_set_bit took: 14441.457 usec (+- 2545.497 usec)
Average test_bit loop took: 192299.486 usec (+- 795.251 usec)
100000 operations 128 bits set of 1024 bits
Average for_each_set_bit took: 23623.825 usec (+- 4481.182 usec)
Average test_bit loop took: 194885.550 usec (+- 1300.817 usec)
100000 operations 256 bits set of 1024 bits
Average for_each_set_bit took: 40194.956 usec (+- 8109.056 usec)
Average test_bit loop took: 200259.311 usec (+- 2566.085 usec)
100000 operations 512 bits set of 1024 bits
Average for_each_set_bit took: 70983.560 usec (+- 15074.982 usec)
Average test_bit loop took: 210527.460 usec (+- 4968.980 usec)
100000 operations 1024 bits set of 1024 bits
Average for_each_set_bit took: 136530.345 usec (+- 31584.400 usec)
Average test_bit loop took: 233329.691 usec (+- 10814.036 usec)
100000 operations 1 bits set of 2048 bits
Average for_each_set_bit took: 3077.600 usec (+- 76.376 usec)
Average test_bit loop took: 402154.400 usec (+- 518.571 usec)
100000 operations 2 bits set of 2048 bits
Average for_each_set_bit took: 3508.600 usec (+- 148.350 usec)
Average test_bit loop took: 403814.500 usec (+- 1133.027 usec)
100000 operations 4 bits set of 2048 bits
Average for_each_set_bit took: 4219.333 usec (+- 285.844 usec)
Average test_bit loop took: 404312.533 usec (+- 985.751 usec)
100000 operations 8 bits set of 2048 bits
Average for_each_set_bit took: 5670.550 usec (+- 615.238 usec)
Average test_bit loop took: 405321.800 usec (+- 1038.487 usec)
100000 operations 16 bits set of 2048 bits
Average for_each_set_bit took: 7785.080 usec (+- 992.522 usec)
Average test_bit loop took: 406746.160 usec (+- 1015.478 usec)
100000 operations 32 bits set of 2048 bits
Average for_each_set_bit took: 11163.800 usec (+- 1627.320 usec)
Average test_bit loop took: 406124.267 usec (+- 898.785 usec)
100000 operations 64 bits set of 2048 bits
Average for_each_set_bit took: 16964.629 usec (+- 2806.130 usec)
Average test_bit loop took: 406618.514 usec (+- 798.356 usec)
100000 operations 128 bits set of 2048 bits
Average for_each_set_bit took: 27219.625 usec (+- 4988.458 usec)
Average test_bit loop took: 410149.325 usec (+- 1705.641 usec)
100000 operations 256 bits set of 2048 bits
Average for_each_set_bit took: 45138.578 usec (+- 8831.021 usec)
Average test_bit loop took: 415462.467 usec (+- 2725.418 usec)
100000 operations 512 bits set of 2048 bits
Average for_each_set_bit took: 77450.540 usec (+- 15962.238 usec)
Average test_bit loop took: 426089.180 usec (+- 5171.788 usec)
100000 operations 1024 bits set of 2048 bits
Average for_each_set_bit took: 138023.636 usec (+- 29826.959 usec)
Average test_bit loop took: 446346.636 usec (+- 9904.417 usec)
100000 operations 2048 bits set of 2048 bits
Average for_each_set_bit took: 251072.600 usec (+- 55947.692 usec)
Average test_bit loop took: 484855.983 usec (+- 18970.431 usec)
#
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lore.kernel.org/lkml/20200729220034.1337168-1-irogers@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
When recording with cache-misses and an arm_spe_x event, I found that it
just fails silently, without showing any error info, if I put
cache-misses after the 'arm_spe_x' event.
[root@localhost 0620]# perf record -e cache-misses \
-e arm_spe_0/ts_enable=1,pct_enable=1,pa_enable=1,load_filter=1,jitter=1,store_filter=1,min_latency=0/ sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.067 MB perf.data ]
[root@localhost 0620]#
[root@localhost 0620]# perf record -e arm_spe_0/ts_enable=1,pct_enable=1,pa_enable=1,load_filter=1,jitter=1,store_filter=1,min_latency=0/ \
-e cache-misses sleep 1
[root@localhost 0620]#
The current code only works if the only event to be traced is an
'arm_spe_x' event, or if it is the last event to be specified. Otherwise
the last event's type is checked against all the arm_spe_pmus[i]->types,
none of them matches, and an out-of-bounds 'i' index is used in
arm_spe_recording_init().
We don't currently support multiple concurrent arm_spe_x events; that is
checked in arm_spe_recording_options(), which prints the relevant info.
So fix this issue by checking for and recording the first 'arm_spe_pmu'
found in the event list.
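As a rough illustration of that approach (a standalone sketch only, not
the actual tools/perf code; the struct layouts, the find_spe_pmu() helper
and the type numbers are invented for the example), the idea is to scan
the whole event list and remember the first event whose type matches one
of the detected SPE PMUs, instead of only inspecting the last event:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the real perf PMU and event structures. */
struct pmu { uint32_t type; const char *name; };
struct evsel { uint32_t attr_type; };

/*
 * Return the first SPE PMU matching any event in the list, or NULL.
 * Checking only the last event, as before, misses the match when a
 * non-SPE event such as cache-misses is specified last.
 */
static struct pmu *find_spe_pmu(const struct evsel *evlist, size_t nr_evsels,
				struct pmu *arm_spe_pmus, size_t nr_spes)
{
	for (size_t e = 0; e < nr_evsels; e++) {
		for (size_t i = 0; i < nr_spes; i++) {
			if (evlist[e].attr_type == arm_spe_pmus[i].type)
				return &arm_spe_pmus[i];
		}
	}
	return NULL;
}

int main(void)
{
	struct pmu spes[] = { { 8, "arm_spe_0" } };
	/* arm_spe_0 first, cache-misses last: the ordering that used to break. */
	const struct evsel evlist[] = { { 8 }, { 0 } };
	struct pmu *spe = find_spe_pmu(evlist, 2, spes, 1);

	printf("%s\n", spe ? spe->name : "no SPE event found");
	return 0;
}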
Fixes: ffd3d18c20b8 ("perf tools: Add ARM Statistical Profiling Extensions (SPE) support")
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Tested-by: Leo Yan <leo.yan@linaro.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kim Phillips <kim.phillips@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mike Leach <mike.leach@linaro.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Suzuki Poulouse <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lore.kernel.org/lkml/20200724071111.35593-2-liwei391@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Commit 5aa98879efe7 ("s390/cpum_sf: prohibit callchain data collection")
prohibits call graph sampling for hardware events on s390, because the
recorded information is out of context and does not match.
On s390 this commit now breaks test case 68, Zstd perf.data
compression/decompression.
Therefore omit call graph sampling on s390 in this test.
Output before:
[root@t35lp46 perf]# ./perf test -Fv 68
68: Zstd perf.data compression/decompression :
--- start ---
Collecting compressed record file:
Error:
cycles: PMU Hardware doesn't support sampling/overflow-interrupts.
Try 'perf stat'
---- end ----
Zstd perf.data compression/decompression: FAILED!
[root@t35lp46 perf]#
Output after:
[root@t35lp46 perf]# ./perf test -Fv 68
68: Zstd perf.data compression/decompression :
--- start ---
Collecting compressed record file:
500+0 records in
500+0 records out
256000 bytes (256 kB, 250 KiB) copied, 0.00615638 s, 41.6 MB/s
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.004 MB /tmp/perf.data.X3M,
compressed (original 0.002 MB, ratio is 3.609) ]
Checking compressed events stats:
# compressed : Zstd, level = 1, ratio = 4
COMPRESSED events: 1
2ELIFREPh---- end ----
Zstd perf.data compression/decompression: Ok
[root@t35lp46 perf]#
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Reviewed-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Link: http://lore.kernel.org/lkml/20200729135314.91281-1-tmricht@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Following the previous change that renamed egroup to metric, there's no
reason to call the list 'group_list' anymore, so rename it to
metric_list.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200719181320.785305-20-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Renaming struct egroup to metric, because it seems to make more sense,
plus renaming all the variables that hold an egroup to appropriate names.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200719181320.785305-19-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Adding a test for metric groups, plus a compute_metric_group() function
to get the metric values within the group.
Committer notes:
Fixed this:
tests/parse-metric.c:327:7: error: missing field 'val' initializer [-Werror,-Wmissing-field-initializers]
{ 0 },
^
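For context, that warning comes from partially brace-initializing a
struct while building with -Wmissing-field-initializers; one generic way
to silence it (shown below only as an illustration with made-up names,
not necessarily the exact change applied) is to initialize every field
explicitly, e.g. with designated initializers:

#include <stddef.h>

struct value {
	const char	*event;
	double		val;
};

/*
 * A bare { 0 } terminator leaves 'val' implicitly zeroed and triggers
 * -Wmissing-field-initializers; spelling out both fields does not.
 */
static const struct value vals[] = {
	{ .event = "instructions", .val = 300 },
	{ .event = "cycles",       .val = 200 },
	{ .event = NULL,           .val = 0 },	/* explicit terminator */
};

int main(void)
{
	double sum = 0;

	for (size_t i = 0; vals[i].event; i++)
		sum += vals[i].val;

	return sum > 0 ? 0 : 1;
}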
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200719181320.785305-18-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
So far the compute_single() function relies on the fact that there's
only a single metric defined within the evlist in all tests. The
following patch will add a test for a metric group, so we need to be
able to compute a metric by a given name.
Add a name argument to compute_single() and iterate the evlist and each
evsel's expression to find the given metric.
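Roughly, such a lookup could look like the standalone sketch below
(struct evsel here is a stand-in with only the two fields the example
needs, and the metric strings are purely illustrative):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the evsel fields relevant to metric lookup. */
struct evsel {
	const char *metric_name;
	const char *metric_expr;
};

/* Walk the event list and return the evsel carrying the named metric. */
static struct evsel *find_metric(struct evsel *evlist, size_t nr,
				 const char *name)
{
	for (size_t i = 0; i < nr; i++) {
		if (evlist[i].metric_expr && evlist[i].metric_name &&
		    !strcmp(evlist[i].metric_name, name))
			return &evlist[i];
	}
	return NULL;
}

int main(void)
{
	struct evsel evlist[] = {
		{ "IPC",   "instructions / cycles" },
		{ "Other", "a / b" },
	};
	struct evsel *e = find_metric(evlist, 2, "IPC");

	if (e)
		printf("%s = %s\n", e->metric_name, e->metric_expr);
	return e ? 0 : 1;
}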
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Kajol Jain <kjain@linux.ibm.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200719181320.785305-17-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Keeping a stack of nested metrics via 'struct expr_id' objects and
checking whether we are in recursion by looking for an already processed
metric.
The stack is implemented as a static array within struct egroup with 100
entries, which should be enough nesting depth for any metric we have or
plan to have at the moment.
Adding a test that simulates the recursion and checks that we can
detect it.
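A rough standalone sketch of the idea (the field layout and helper below
are invented for illustration and do not mirror the real struct
expr_id/expr_ids code): before expanding a referenced metric, check
whether its name is already on the stack of metrics being processed, and
refuse if it is:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define RECURSION_ID_MAX 1000	/* maximum nesting depth tracked */

/* Stack of metric names currently being expanded. */
struct expr_ids {
	const char *id[RECURSION_ID_MAX];
	int cnt;
};

/* Push a metric before expanding it; fail if it is already on the stack. */
static bool expr_ids_push(struct expr_ids *ids, const char *name)
{
	for (int i = 0; i < ids->cnt; i++) {
		if (!strcmp(ids->id[i], name)) {
			fprintf(stderr, "recursion detected for metric %s\n",
				name);
			return false;
		}
	}
	if (ids->cnt >= RECURSION_ID_MAX)
		return false;
	ids->id[ids->cnt++] = name;
	return true;
}

int main(void)
{
	struct expr_ids ids = { .cnt = 0 };

	/* "A" refers to "B", which (incorrectly) refers back to "A". */
	expr_ids_push(&ids, "A");
	expr_ids_push(&ids, "B");
	expr_ids_push(&ids, "A");	/* reported as recursion */
	return 0;
}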
Committer notes:
Bumped RECURSION_ID_MAX to 1000 as per Jiri's reply to Paul Clarke on the
patch series e-mail discussion.
Fixed these:
tests/parse-metric.c:308:7: error: missing field 'val' initializer [-Werror,-Wmissing-field-initializers]
{ 0 },
^
util/metricgroup.c:924:28: error: missing field 'parent' initializer [-Werror,-Wmissing-field-initializers]
struct expr_ids ids = { 0 };
^
util/metricgroup.c:924:26: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
struct expr_ids ids = { 0 };
^
{}
util/metricgroup.c:924:26: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
struct expr_ids ids = { 0 };
^
{}
util/metricgroup.c:924:28: error: missing field 'cnt' initializer [-Werror,-Wmissing-field-initializers]
struct expr_ids ids = { 0 };
^
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200719181320.785305-16-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
|
Adding a test that computes the DCache_L2 metric, which has other related metrics in it.
Committer notes:
Fixed up this:
tests/parse-metric.c:285:7: error: missing field 'val' initializer [-Werror,-Wmissing-field-initializers]
{ 0 },
^
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Reviewed-by: Kajol Jain <kjain@linux.ibm.com>
Acked-by: Ian Rogers <irogers@google.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Michael Petlan <mpetlan@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Clarke <pc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20200719181320.785305-15-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|