path: root/kernel
2025-01-12  Merge tag 'perf_urgent_for_v6.13_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  [Linus Torvalds]
Pull perf fix from Borislav Petkov:
 - Fix a #GP in the perf user callchain code caused by a race between uprobe freeing the task and the bpf profiler unwinding the task's user stack
* tag 'perf_urgent_for_v6.13_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  uprobes: Fix race in uprobe_free_utask
2025-01-11  Merge tag 'probes-fixes-v6.13-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace  [Linus Torvalds]
Pull probes fix from Masami Hiramatsu:
 "Fix to free trace_kprobe objects at a failure path in the __trace_kprobe_create() function. This fixes a memory leak."
* tag 'probes-fixes-v6.13-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  tracing/kprobes: Fix to free objects when failed to copy a symbol
2025-01-11  mm/slab: Move kvfree_rcu() into SLAB  [Uladzislau Rezki (Sony)]
Move the kvfree_rcu() functionality to the slab_common.c file. The reason for making kvfree_rcu() part of SLAB is a clear trend toward, and need for, closer integration. One recent example is the creation of a barrier function for SLAB caches. Another reason is to avoid having several implementations of RCU machinery for reclaiming objects after a grace period. As a future step, it can be integrated more easily with SLAB internals. Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Acked-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Tested-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2025-01-11  rcu/kvfree: Adjust a shrinker name  [Uladzislau Rezki (Sony)]
Rename the "rcu-kfree" shrinker to "slab-kvfree-rcu", since this functionality is about to move to the slab_common.c file. Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Acked-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Tested-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2025-01-11  rcu/kvfree: Adjust names passed into trace functions  [Uladzislau Rezki (Sony)]
Currently the trace functions are supplied with the "rcu_state.name" member, which lives inside the rcu_state structure. The problem is that the "rcu_state" structure variable is local and cannot be accessed from elsewhere. To address this, this preparation patch passes the "slab" string as the first argument instead. Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Acked-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Tested-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2025-01-11  rcu/kvfree: Move some functions under CONFIG_TINY_RCU  [Uladzislau Rezki (Sony)]
Currently, when Tiny RCU is enabled, the tree.c file is not compiled, so duplicate function names do not conflict with each other. Because the kvfree_rcu() functionality is moving to SLAB, we have to reorder some functions and place them together under the CONFIG_TINY_RCU macro definition, so that the function names will not conflict when a kernel is built for the CONFIG_TINY_RCU flavor. Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Acked-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Tested-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2025-01-11  rcu/kvfree: Initialize kvfree_rcu() separately  [Uladzislau Rezki (Sony)]
Introduce a separate initialization of the kvfree_rcu() functionality. For this purpose, kfree_rcu_batch_init() is renamed to kvfree_rcu_init(), and it is invoked from main.c right after rcu_init() is done. Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Acked-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Tested-by: Hyeonggon Yoo <hyeonggon.yoo@sk.com> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2025-01-10  Merge tag 'sched_ext-for-6.13-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext  [Linus Torvalds]
Pull sched_ext fixes from Tejun Heo:
 - Fix corner case bug where ops.dispatch() couldn't extend the execution of the current task if SCX_OPS_ENQ_LAST is set.
 - Fix ops.cpu_release() not being called when a SCX task is preempted by a higher priority sched class task.
 - Fix builtin idle mask being incorrectly left as busy after an idle CPU is picked and kicked.
 - scx_ops_bypass() was unnecessarily using rq_lock(), which comes with rq pinning related sanity checks that could trigger spuriously. Switch to raw_spin_rq_lock().
* tag 'sched_ext-for-6.13-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext:
  sched_ext: idle: Refresh idle masks during idle-to-idle transitions
  sched_ext: switch class when preempted by higher priority scheduler
  sched_ext: Replace rq_lock() to raw_spin_rq_lock() in scx_ops_bypass()
  sched_ext: keep running prev when prev->scx.slice != 0
2025-01-10  Merge tag 'cgroup-for-6.13-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup  [Linus Torvalds]
Pull cgroup fixes from Tejun Heo:
 "Cpuset fixes:
  - Fix isolated CPUs leaking into sched domains
  - Remove now unnecessary kernfs active break which can trigger a warning
  - Comment updates"
* tag 'cgroup-for-6.13-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
  cgroup/cpuset: remove kernfs active break
  cgroup/cpuset: Prevent leakage of isolated CPUs into sched domains
  cgroup/cpuset: Remove stale text
2025-01-10  Merge tag 'wq-for-6.13-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq  [Linus Torvalds]
Pull workqueue fix from Tejun Heo:
 - Add a WARN_ON_ONCE() on queue_delayed_work_on() on an offline CPU, as such work items won't get executed till the CPU comes back online
* tag 'wq-for-6.13-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: warn if delayed_work is queued to an offlined cpu.
2025-01-10  sched_ext: idle: Refresh idle masks during idle-to-idle transitions  [Andrea Righi]
With the consolidation of put_prev_task/set_next_task(), see commit 436f3eed5c69 ("sched: Combine the last put_prev_task() and the first set_next_task()"), we are now skipping the transition between these two functions when the previous and the next tasks are the same. As a result, the scx idle state of a CPU is updated only when transitioning to or from the idle thread. While this is generally correct, it can lead to uneven and inefficient core utilization in certain scenarios [1]. A typical scenario involves proactive wake-ups: scx_bpf_pick_idle_cpu() selects and marks an idle CPU as busy, followed by a wake-up via scx_bpf_kick_cpu(), without dispatching any tasks. In this case, the CPU continues running the idle thread, returns to idle, but remains marked as busy, preventing it from being selected again as an idle CPU (until a task eventually runs on it and releases the CPU). For example, running a workload that uses 20% of each CPU, combined with an scx scheduler using proactive wake-ups, results in the following core utilization: CPU 0: 25.7% CPU 1: 29.3% CPU 2: 26.5% CPU 3: 25.5% CPU 4: 0.0% CPU 5: 25.5% CPU 6: 0.0% CPU 7: 10.5% To address this, refresh the idle state also in pick_task_idle(), during idle-to-idle transitions, but only trigger ops.update_idle() on actual state changes to prevent unnecessary updates to the scx scheduler and maintain balanced state transitions. With this change in place, the core utilization in the previous example becomes the following: CPU 0: 18.8% CPU 1: 19.4% CPU 2: 18.0% CPU 3: 18.7% CPU 4: 19.3% CPU 5: 18.9% CPU 6: 18.7% CPU 7: 19.3% [1] https://github.com/sched-ext/scx/pull/1139 Fixes: 7c65ae81ea86 ("sched_ext: Don't call put_prev_task_scx() before picking the next task") Signed-off-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2025-01-10  workqueue: warn if delayed_work is queued to an offlined cpu.  [Imran Khan]
A delayed_work submitted to an offlined cpu will not get executed after the specified delay if the cpu remains offline; if the cpu never comes online, the work will never get executed. Checking for an online cpu in __queue_delayed_work() does not sound like a good idea, because doing this reliably requires the hotplug lock, and since work may be submitted from atomic contexts we would have to use cpus_read_trylock(). But if the trylock fails we would queue the work on any cpu, and this may not be optimal because our intended cpu might still be online. Putting a WARN_ON_ONCE() for an already offlined cpu will alert users of queue_delayed_work_on() if they are (wrongly) trying to queue delayed_work on an offlined cpu. Also mention the problem of using an offlined cpu with queue_delayed_work_on() in its description. Signed-off-by: Imran Khan <imran.f.khan@oracle.com> Signed-off-by: Tejun Heo <tj@kernel.org>
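A minimal sketch of the kind of check described above; the exact placement inside __queue_delayed_work() and the surrounding code are assumptions for illustration, not the upstream diff:

  /* Hypothetical illustration, not the merged patch. */
  static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
                                   struct delayed_work *dwork, unsigned long delay)
  {
          /*
           * A delayed_work queued on an offline CPU only runs once that CPU
           * comes back online, so warn callers about the likely mistake.
           */
          WARN_ON_ONCE(cpu != WORK_CPU_UNBOUND && !cpu_online(cpu));

          /* ... existing timer setup and queueing ... */
  }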
2025-01-10  sched_ext: Implement scx_bpf_now()  [Changwoo Min]
Returns a high-performance monotonically non-decreasing clock for the current CPU. The clock returned is in nanoseconds. It provides the following properties: 1) High performance: Many BPF schedulers call bpf_ktime_get_ns() frequently to account for execution time and track tasks' runtime properties. Unfortunately, in some hardware platforms, bpf_ktime_get_ns() -- which eventually reads a hardware timestamp counter -- is neither performant nor scalable. scx_bpf_now() aims to provide a high-performance clock by using the rq clock in the scheduler core whenever possible. 2) High enough resolution for the BPF scheduler use cases: In most BPF scheduler use cases, the required clock resolution is lower than the most accurate hardware clock (e.g., rdtsc in x86). scx_bpf_now() basically uses the rq clock in the scheduler core whenever it is valid. It considers that the rq clock is valid from the time the rq clock is updated (update_rq_clock) until the rq is unlocked (rq_unpin_lock). 3) Monotonically non-decreasing clock for the same CPU: scx_bpf_now() guarantees the clock never goes backward when comparing them in the same CPU. On the other hand, when comparing clocks in different CPUs, there is no such guarantee -- the clock can go backward. It provides a monotonically *non-decreasing* clock so that it would provide the same clock values in two different scx_bpf_now() calls in the same CPU during the same period of when the rq clock is valid. An rq clock becomes valid when it is updated using update_rq_clock() and invalidated when the rq is unlocked using rq_unpin_lock(). Let's suppose the following timeline in the scheduler core: T1. rq_lock(rq) T2. update_rq_clock(rq) T3. a sched_ext BPF operation T4. rq_unlock(rq) T5. a sched_ext BPF operation T6. rq_lock(rq) T7. update_rq_clock(rq) For [T2, T4), we consider that rq clock is valid (SCX_RQ_CLK_VALID is set), so scx_bpf_now() calls during [T2, T4) (including T3) will return the rq clock updated at T2. For duration [T4, T7), when a BPF scheduler can still call scx_bpf_now() (T5), we consider the rq clock is invalid (SCX_RQ_CLK_VALID is unset at T4). So when calling scx_bpf_now() at T5, we will return a fresh clock value by calling sched_clock_cpu() internally. Also, to prevent getting outdated rq clocks from a previous scx scheduler, invalidate all the rq clocks when unloading a BPF scheduler. One example of calling scx_bpf_now(), when the rq clock is invalid (like T5), is in scx_central [1]. The scx_central scheduler uses a BPF timer for preemptive scheduling. In every msec, the timer callback checks if the currently running tasks exceed their timeslice. At the beginning of the BPF timer callback (central_timerfn in scx_central.bpf.c), scx_central gets the current time. When the BPF timer callback runs, the rq clock could be invalid, the same as T5. In this case, scx_bpf_now() returns a fresh clock value rather than returning the old one (T2). [1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_central.bpf.c Signed-off-by: Changwoo Min <changwoo@igalia.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
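A minimal sketch of the clock-selection logic described above. SCX_RQ_CLK_VALID, update_rq_clock(), rq_unpin_lock() and sched_clock_cpu() come from the text; the field names rq->scx.flags and rq->scx.clock are assumptions used only for illustration:

  /* Hypothetical illustration of the described behaviour. */
  u64 scx_bpf_now(void)
  {
          struct rq *rq = this_rq();

          /*
           * Between update_rq_clock() and rq_unpin_lock() the rq clock is
           * valid (SCX_RQ_CLK_VALID); reuse it instead of re-reading the
           * hardware counter.
           */
          if (rq->scx.flags & SCX_RQ_CLK_VALID)
                  return rq->scx.clock;

          /* Outside that window, fall back to a fresh per-CPU clock read. */
          return sched_clock_cpu(cpu_of(rq));
  }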
2025-01-10  sched_ext: Relocate scx_enabled() related code  [Changwoo Min]
scx_enabled() will be used in scx_rq_clock_update/invalidate() in the following patch, so relocate the scx_enabled() related code to the proper location. Signed-off-by: Changwoo Min <changwoo@igalia.com> Acked-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2025-01-11  modpost: Allow extended modversions without basic MODVERSIONS  [Matthew Maurer]
If you know that your kernel modules will only ever be loaded by a newer kernel, you can disable BASIC_MODVERSIONS to save space. This also allows easy creation of test modules to see how tooling will respond to modules that only have the new format. Signed-off-by: Matthew Maurer <mmaurer@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-10  perf: map pages in advance  [Lorenzo Stoakes]
We are adjusting struct page to make it smaller, removing unneeded fields which correctly belong to struct folio. Two of those fields are page->index and page->mapping. Perf is currently making use of both of these. This is unnecessary. This patch eliminates this. Perf establishes its own internally controlled memory-mapped pages using vm_ops hooks. The first page in the mapping is the read/write user control page, and the rest of the mapping consists of read-only pages. The VMA is backed by kernel memory either from the buddy allocator or vmalloc depending on configuration. It is intended to be mapped read/write, but because it has a page_mkwrite() hook, vma_wants_writenotify() indicates that it should be mapped read-only. When a write fault occurs, the provided page_mkwrite() hook, perf_mmap_fault() (doing double duty handing faults as well) uses the vmf->pgoff field to determine if this is the first page, allowing for the desired read/write first page, read-only rest mapping. For this to work the implementation has to carefully work around faulting logic. When a page is write-faulted, the fault() hook is called first, then its page_mkwrite() hook is called (to allow for dirty tracking in file systems). On fault we set the folio's mapping in perf_mmap_fault(), this is because when do_page_mkwrite() is subsequently invoked, it treats a missing mapping as an indicator that the fault should be retried. We also set the folio's index so, given the folio is being treated as faux user memory, it correctly references its offset within the VMA. This explains why the mapping and index fields are used - but it's not necessary. We preallocate pages when perf_mmap() is called for the first time via rb_alloc(), and further allocate auxiliary pages via rb_aux_alloc() as needed if the mapping requires it. This allocation is done in the f_ops->mmap() hook provided in perf_mmap(), and so we can instead simply map all the memory right away here - there's no point in handling (read) page faults when we don't demand page nor need to be notified about them (perf does not). This patch therefore changes this logic to map everything when the mmap() hook is called, establishing a PFN map. It implements vm_ops->pfn_mkwrite() to provide the required read/write vs. read-only behaviour, which does not require the previously implemented workarounds. While it is not ideal to use a VM_PFNMAP here, doing anything else will result in the page_mkwrite() hook need to be provided, which requires the same page->mapping hack this patch seeks to undo. It will also result in the pages being treated as folios and placed on the rmap, which really does not make sense for these mappings. Semantically it makes sense to establish this as some kind of special mapping, as the pages are managed by perf and are not strictly user pages, but currently the only means by which we can do so functionally while maintaining the required R/W and R/O behaviour is a PFN map. There should be no change to actual functionality as a result of this change. Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20250103153151.124163-1-lorenzo.stoakes@oracle.com
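A minimal sketch of the read/write-vs-read-only split that the pfn_mkwrite() approach provides; the hook name and exact return values are illustrative assumptions, not necessarily the merged code:

  /* Hypothetical illustration: only the first (user control) page is writable. */
  static vm_fault_t perf_mmap_pfn_mkwrite(struct vm_fault *vmf)
  {
          /* Allow the write fault on page 0, refuse it everywhere else. */
          return vmf->pgoff == 0 ? 0 : VM_FAULT_SIGBUS;
  }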
2025-01-11  modpost: Produce extended MODVERSIONS information  [Matthew Maurer]
Generate both the existing modversions format and the new extended one when running modpost. Presence of this metadata in the final .ko is guarded by CONFIG_EXTENDED_MODVERSIONS. We no longer generate an error on long symbols in modpost if CONFIG_EXTENDED_MODVERSIONS is set, as they can now be appropriately encoded in the extended section. These symbols will be skipped in the previous encoding. An error will still be generated if CONFIG_EXTENDED_MODVERSIONS is not set. Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Matthew Maurer <mmaurer@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-11  modules: Support extended MODVERSIONS info  [Matthew Maurer]
Adds a new format for MODVERSIONS which stores each field in a separate ELF section. This initially adds support for variable length names, but could later be used to add additional fields to MODVERSIONS in a backwards compatible way if needed. Any new fields will be ignored by old user tooling, unlike the current format where user tooling cannot tolerate adjustments to the format (for example making the name field longer). Since PPC munges its version records to strip leading dots, we reproduce the munging for the new format. Other architectures do not appear to have architecture-specific usage of this information. Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Matthew Maurer <mmaurer@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-11  kbuild: Add gendwarfksyms as an alternative to genksyms  [Sami Tolvanen]
When MODVERSIONS is enabled, allow selecting gendwarfksyms as the implementation, but default to genksyms. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-11  tools: Add gendwarfksyms  [Sami Tolvanen]
Add a basic DWARF parser, which uses libdw to traverse the debugging information in an object file and looks for functions and variables. In follow-up patches, this will be expanded to produce symbol versions for CONFIG_MODVERSIONS from DWARF. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Petr Pavlu <petr.pavlu@suse.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-10  module: get symbol CRC back to unsigned  [Masahiro Yamada]
Commit 71810db27c1c ("modversions: treat symbol CRCs as 32 bit quantities") changed the CRC fields to s32 because the __kcrctab and __kcrctab_gpl sections contained relative references to the actual CRC values stored in the .rodata section when CONFIG_MODULE_REL_CRCS=y. Commit 7b4537199a4a ("kbuild: link symbol CRCs at final link, removing CONFIG_MODULE_REL_CRCS") removed this complexity. Now, the __kcrctab and __kcrctab_gpl sections directly contain the CRC values in all cases. The genksyms tool outputs unsigned 32-bit CRC values, so u32 is preferred over s32. No functional changes are intended. Regardless of this change, the CRC value is assigned to the u32 variable 'crcval' before the comparison, as seen in kernel/module/version.c: crcval = *crc; It was previously mandatory (but now optional) in order to avoid sign extension because the following line previously compared 'unsigned long' and 's32': if (versions[i].crc == crcval) return 1; versions[i].crc is still 'unsigned long' for backward compatibility. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Petr Pavlu <petr.pavlu@suse.com>
2025-01-10  kheaders: prevent `find` from seeing perl temp files  [HONG Yifan]
Symptom: The command
  find ... | xargs ... perl -i
occasionally triggers error messages like the following, with the build still succeeding:
  Can't open <redacted>/kernel/.tmp_dir/include/dt-bindings/clock/XXNX4nW9: No such file or directory.
Analysis: With strace, the root cause has been identified to be `perl -i` creating temporary files inside ${tmpdir}, which causes `find` to see the temporary files and emit their names. `find` is likely implemented with readdir. POSIX `readdir` says: "If a file is removed from or added to the directory after the most recent call to opendir() or rewinddir(), whether a subsequent call to readdir() returns an entry for that file is unspecified." So if the libc that `find` links against chooses to return that entry from readdir(), a possible sequence of events is the following:
  1. find emits foo.h
  2. xargs executes `perl -i foo.h`
  3. perl (pid=100) creates temporary file `XXXXXXXX`
  4. find sees file `XXXXXXXX` and emits it
  5. PID 100 exits, cleaning up the temporary file `XXXXXXXX`
  6. xargs executes `perl -i XXXXXXXX`
  7. perl (pid=200) tries to read the file, but it doesn't exist any more
... triggering the error message. One can reproduce the bug with the following command (assuming PWD contains the list of headers in kheaders.tar.xz):
  for i in $(seq 100); do find -type f -print0 | xargs -0 -P8 -n1 perl -pi -e 'BEGIN {undef $/;}; s/\/\*((?!SPDX).)*?\*\///smg;'; done
With a `find` linking against musl libc, the error message is emitted 6/100 times.
The fix: This change stores the results of `find` before feeding them into xargs. find and xargs will no longer be able to see temporary files that perl creates after this change. Signed-off-by: HONG Yifan <elsk@google.com> Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-10  kheaders: use 'tar' instead of 'cpio' for copying files  [Masahiro Yamada]
The 'cpio' command is used solely for copying header files to the temporary directory. However, there is no strong reason to use 'cpio' for this purpose. For example, scripts/package/install-extmod-build uses the 'tar' command to copy files. This commit replaces the use of 'cpio' with 'tar' because 'tar' is already used in this script to generate kheaders_data.tar.xz anyway. Performance-wise, there is no significant difference between 'cpio' and 'tar'.
[Before]
  $ rm -fr kheaders; mkdir kheaders
  $ time sh -c '
    for f in include arch/x86/include
    do
      find "$f" -name "*.h"
    done | cpio --quiet -pd kheaders
  '
  real  0m0.148s
  user  0m0.021s
  sys   0m0.140s
[After]
  $ rm -fr kheaders; mkdir kheaders
  $ time sh -c '
    for f in include arch/x86/include
    do
      find "$f" -name "*.h"
    done | tar -c -f - -T - | tar -xf - -C kheaders
  '
  real  0m0.098s
  user  0m0.024s
  sys   0m0.131s
Revert commit 69ef0920bdd3 ("Docs: Add cpio requirement to changes.rst") because 'cpio' is not used anywhere else during the kernel build. Please note that the built-in initramfs is created by the in-tree tool, usr/gen_init_cpio, so it does not rely on the external 'cpio' command at all. Remove 'cpio' from the package build dependencies as well. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-10  kheaders: rename the 'cpio_dir' variable to 'tmpdir'  [Masahiro Yamada]
The next commit will get rid of the use of 'cpio' command, as there is no strong reason to use it just for copying files. Before that, this commit renames the 'cpio_dir' variable to 'tmpdir'. No functional changes are intended. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-10  kheaders: avoid unnecessary process forks of grep  [Masahiro Yamada]
Exclude include/generated/{utsversion.h,autoconf.h} by using the -path option to reduce the cost of forking new processes. No functional changes are intended. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-10  kheaders: exclude include/generated/utsversion.h from kheaders_data.tar.xz  [Masahiro Yamada]
CONFIG_IKHEADERS has a reproducibility issue because the contents of kernel/kheaders_data.tar.xz can vary depending on how you build the kernel. If you build the kernel with CONFIG_IKHEADERS enabled from a pristine state, the tarball does not include include/generated/utsversion.h:
  $ make -s mrproper
  $ make -s defconfig
  $ scripts/config -e CONFIG_IKHEADERS
  $ make -s
  $ tar Jtf kernel/kheaders_data.tar.xz | grep utsversion
However, if you build the kernel with CONFIG_IKHEADERS disabled first and then enable it later, the tarball does include include/generated/utsversion.h:
  $ make -s mrproper
  $ make -s defconfig
  $ make -s
  $ scripts/config -e CONFIG_IKHEADERS
  $ make -s
  $ tar Jtf kernel/kheaders_data.tar.xz | grep utsversion
  ./include/generated/utsversion.h
It is not predictable whether a stale include/generated/utsversion.h remains when kheaders_data.tar.xz is generated. For better reproducibility, include/generated/utsversion.h should always be omitted. It is not necessary for the kheaders anyway. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2025-01-10  uprobes: Fix race in uprobe_free_utask  [Jiri Olsa]
Max Makarov reported a kernel panic [1] in the perf user callchain code. The reason is a race between uprobe_free_utask and bpf profiler code doing the perf user stack unwind; it is triggered within the uprobe_free_utask function:
  - after current->utask is freed and
  - before current->utask is set to NULL
  general protection fault, probably for non-canonical address 0x9e759c37ee555c76: 0000 [#1] SMP PTI
  RIP: 0010:is_uprobe_at_func_entry+0x28/0x80
  ...
   ? die_addr+0x36/0x90
   ? exc_general_protection+0x217/0x420
   ? asm_exc_general_protection+0x26/0x30
   ? is_uprobe_at_func_entry+0x28/0x80
   perf_callchain_user+0x20a/0x360
   get_perf_callchain+0x147/0x1d0
   bpf_get_stackid+0x60/0x90
   bpf_prog_9aac297fb833e2f5_do_perf_event+0x434/0x53b
   ? __smp_call_single_queue+0xad/0x120
   bpf_overflow_handler+0x75/0x110
  ...
   asm_sysvec_apic_timer_interrupt+0x1a/0x20
  RIP: 0010:__kmem_cache_free+0x1cb/0x350
  ...
   ? uprobe_free_utask+0x62/0x80
   ? acct_collect+0x4c/0x220
   uprobe_free_utask+0x62/0x80
   mm_release+0x12/0xb0
   do_exit+0x26b/0xaa0
   __x64_sys_exit+0x1b/0x20
   do_syscall_64+0x5a/0x80
It can be easily reproduced by running the following commands in separate terminals:
  # while :; do bpftrace -e 'uprobe:/bin/ls:_start { printf("hit\n"); }' -c ls; done
  # bpftrace -e 'profile:hz:100000 { @[ustack()] = count(); }'
Fix this by making sure the current->utask pointer is set to NULL before we start to release the utask object.
[1] https://github.com/grafana/pyroscope/issues/3673
Fixes: cfa7f3d2c526 ("perf,x86: avoid missing caller address in stack traces captured in uprobe")
Reported-by: Max Makarov <maxpain@linux.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20250109141440.2692173-1-jolsa@kernel.org
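A minimal sketch of the ordering the fix establishes; the real uprobe_free_utask() also tears down return instances and other state, so this only illustrates the publish-NULL-before-free ordering:

  /* Hypothetical, simplified illustration of the described ordering. */
  void uprobe_free_utask(struct task_struct *t)
  {
          struct uprobe_task *utask = t->utask;

          if (!utask)
                  return;

          /* Clear the pointer first, so a concurrent unwinder sees NULL ... */
          t->utask = NULL;
          /* ... and only then release the object it can no longer reach. */
          kfree(utask);
  }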
2025-01-09  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  [Jakub Kicinski]
Cross-merge networking fixes after downstream PR (net-6.13-rc7).
Conflicts:
  a42d71e322a8 ("net_sched: sch_cake: Add drop reasons")
  737d4d91d35b ("sched: sch_cake: add bounds checks to host bulk flow fairness counts")
Adjacent changes:
  drivers/net/ethernet/meta/fbnic/fbnic.h
    3a856ab34726 ("eth: fbnic: add IRQ reuse support")
    95978931d55f ("eth: fbnic: Revert "eth: fbnic: Add hardware monitoring support via HWMON interface"")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-01-10  tracing/kprobes: Simplify __trace_kprobe_create() by removing gotos  [Masami Hiramatsu (Google)]
Simplify __trace_kprobe_create() by removing gotos. Link: https://lore.kernel.org/all/173643301102.1514810.6149004416601259466.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-01-10  tracing: Use __free() for kprobe events to cleanup  [Masami Hiramatsu (Google)]
Use __free() in trace_kprobe.c to cleanup code. Link: https://lore.kernel.org/all/173643299989.1514810.2924926552980462072.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-01-10  tracing: Use __free() in trace_probe for cleanup  [Masami Hiramatsu (Google)]
Use __free() in trace_probe to cleanup some gotos. Link: https://lore.kernel.org/all/173643298860.1514810.7267350121047606213.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-01-10  kprobes: Remove remaining gotos  [Masami Hiramatsu (Google)]
Remove remaining gotos from kprobes.c to clean up the code. This does not use cleanup macros, but changes code flow for avoiding gotos. Link: https://lore.kernel.org/all/173371212474.480397.5684523564137819115.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  kprobes: Remove unneeded goto  [Masami Hiramatsu (Google)]
Remove unneeded gotos. Since the labels referred by these gotos have only one reference for each, we can replace those gotos with the referred code. Link: https://lore.kernel.org/all/173371211203.480397.13988907319659165160.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  kprobes: Use guard for rcu_read_lock  [Masami Hiramatsu (Google)]
Use guard(rcu) for rcu_read_lock so that it can remove unneeded gotos and make it more structured. Link: https://lore.kernel.org/all/173371209846.480397.3852648910271029695.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  kprobes: Use guard() for external locks  [Masami Hiramatsu (Google)]
Use guard() for text_mutex, cpu_read_lock, and jump_label_lock in the kprobes. Link: https://lore.kernel.org/all/173371208663.480397.7535769878667655223.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  tracing/eprobe: Adopt guard() and scoped_guard()  [Masami Hiramatsu (Google)]
Use guard() or scoped_guard() in eprobe events for critical sections rather than discrete lock/unlock pairs. Link: https://lore.kernel.org/all/173289890996.73724.17421347964110362029.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  tracing/uprobe: Adopt guard() and scoped_guard()  [Masami Hiramatsu (Google)]
Use guard() or scoped_guard() in uprobe events for critical sections rather than discrete lock/unlock pairs. Link: https://lore.kernel.org/all/173289889911.73724.12457932738419630525.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  tracing/kprobe: Adopt guard() and scoped_guard()  [Masami Hiramatsu (Google)]
Use guard() or scoped_guard() in kprobe events for critical sections rather than discrete lock/unlock pairs. Link: https://lore.kernel.org/all/173289888883.73724.6586200652276577583.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  kprobes: Adopt guard() and scoped_guard()  [Masami Hiramatsu (Google)]
Use guard() or scoped_guard() for critical sections rather than discrete lock/unlock pairs. Link: https://lore.kernel.org/all/173289887835.73724.608223217359025939.stgit@devnote2/ Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-01-10  kprobes: Reduce preempt disable scope in check_kprobe_access_safe()  [Thomas Weißschuh]
Commit a189d0350f387 ("kprobes: disable preempt for module_text_address() and kernel_text_address()") introduced a preempt_disable() region to protect against concurrent module unloading. However this region also includes the call to jump_label_text_reserved(), which takes a long time: up to 400us, iterating over approximately 6000 jump tables. The scope protected by preempt_disable() is larger than necessary. core_kernel_text() does not need to be protected, as it does not interact with module code at all. Only the scope from __module_text_address() to try_module_get() needs to be protected. By limiting the critical section to __module_text_address() and try_module_get(), the function responsible for the latency spike remains preemptible. This works fine even when !CONFIG_MODULES, as in that case try_module_get() will always return true and that block can be optimized away. Limit the critical section to __module_text_address() and try_module_get(). Use guard(preempt)() for easier error handling. While at it, also remove a spurious *probed_mod = NULL in an error path. On errors the output parameter is never inspected by the caller. Some error paths were clearing the parameters, some didn't. Align them for clarity. Link: https://lore.kernel.org/all/20241121-kprobes-preempt-v1-1-fd581ee7fcbb@linutronix.de/ Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
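A minimal sketch of the narrowed critical section, written as a hypothetical simplified helper; the real check_kprobe_access_safe() does more work around this, only core_kernel_text(), __module_text_address(), try_module_get() and guard(preempt)() come from the text:

  /* Hypothetical, simplified illustration of the narrowed critical section. */
  static int kprobe_lookup_module(unsigned long addr, struct module **probed_mod)
  {
          /* No module interaction here, so no need to disable preemption. */
          if (core_kernel_text(addr))
                  return 0;

          /* Only the module lookup plus refcount bump must be non-preemptible. */
          guard(preempt)();
          *probed_mod = __module_text_address(addr);
          if (!*probed_mod || !try_module_get(*probed_mod))
                  return -EINVAL;
          return 0;
  }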
2025-01-10  tracing/kprobes: Fix to free objects when failed to copy a symbol  [Masami Hiramatsu (Google)]
In __trace_kprobe_create(), if something fails, the code must goto the error block to free objects. But when strdup()ing a symbol fails, it returns directly without doing so. Fix it to goto the error block so the objects are freed correctly. Link: https://lore.kernel.org/all/173643297743.1514810.2408159540454241947.stgit@devnote2/ Fixes: 6212dd29683e ("tracing/kprobes: Use dyn_event framework for kprobe events") Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2025-01-09  sched/fair: Fix EEVDF entity placement bug causing scheduling lag  [Peter Zijlstra]
I noticed this in my traces today: turbostat-1222 [006] d..2. 311.935649: reweight_entity: (ffff888108f13e00-ffff88885ef38440-6) { weight: 1048576 avg_vruntime: 3184159639071 vruntime: 3184159640194 (-1123) deadline: 3184162621107 } -> { weight: 2 avg_vruntime: 3184177463330 vruntime: 3184748414495 (-570951165) deadline: 4747605329439 } turbostat-1222 [006] d..2. 311.935651: reweight_entity: (ffff888108f13e00-ffff88885ef38440-6) { weight: 2 avg_vruntime: 3184177463330 vruntime: 3184748414495 (-570951165) deadline: 4747605329439 } -> { weight: 1048576 avg_vruntime: 3184176414812 vruntime: 3184177464419 (-1049607) deadline: 3184180445332 } Which is a weight transition: 1048576 -> 2 -> 1048576. One would expect the lag to shoot out *AND* come back, notably: -1123*1048576/2 = -588775424 -588775424*2/1048576 = -1123 Except the trace shows it is all off. Worse, subsequent cycles shoot it out further and further. This made me have a very hard look at reweight_entity(), and specifically the ->on_rq case, which is more prominent with DELAY_DEQUEUE. And indeed, it is all sorts of broken. While the computation of the new lag is correct, the computation for the new vruntime, using the new lag is broken for it does not consider the logic set out in place_entity(). With the below patch, I now see things like: migration/12-55 [012] d..3. 309.006650: reweight_entity: (ffff8881e0e6f600-ffff88885f235f40-12) { weight: 977582 avg_vruntime: 4860513347366 vruntime: 4860513347908 (-542) deadline: 4860516552475 } -> { weight: 2 avg_vruntime: 4860528915984 vruntime: 4860793840706 (-264924722) deadline: 6427157349203 } migration/14-62 [014] d..3. 309.006698: reweight_entity: (ffff8881e0e6cc00-ffff88885f3b5f40-15) { weight: 2 avg_vruntime: 4874472992283 vruntime: 4939833828823 (-65360836540) deadline: 6316614641111 } -> { weight: 967149 avg_vruntime: 4874217684324 vruntime: 4874217688559 (-4235) deadline: 4874220535650 } Which isn't perfect yet, but much closer. Reported-by: Doug Smythies <dsmythies@telus.net> Reported-by: Ingo Molnar <mingo@kernel.org> Tested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Fixes: eab03c23c2a1 ("sched/eevdf: Fix vruntime adjustment on reweight") Link: https://lore.kernel.org/r/20250109105959.GA2981@noisy.programming.kicks-ass.net
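The arithmetic above follows from the expectation that reweighting preserves the weighted lag (this is a reading of the numbers in the message, not a quote from the patch). With lag taken as avg_vruntime - vruntime, the expected invariant is

  lag_new = lag_old * w_old / w_new

so -1123 * 1048576 / 2 = -588775424 on the way down and -588775424 * 2 / 1048576 = -1123 on the way back, which is exactly the round trip the broken ->on_rq reweight path fails to reproduce because it ignores the place_entity() logic.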
2025-01-09  btf: Switch module BTF attribute to sysfs_bin_attr_simple_read()  [Thomas Weißschuh]
The generic function from the sysfs core can replace the custom one. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20241228-sysfs-const-bin_attr-simple-v2-3-7c6f3f1767a3@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-09  btf: Switch vmlinux BTF attribute to sysfs_bin_attr_simple_read()  [Thomas Weißschuh]
The generic function from the sysfs core can replace the custom one. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20241228-sysfs-const-bin_attr-simple-v2-2-7c6f3f1767a3@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-09  sysfs: constify bin_attribute argument of sysfs_bin_attr_simple_read()  [Thomas Weißschuh]
Most users use this function through the BIN_ATTR_SIMPLE* macros, they can handle the switch transparently. Also adapt the two non-macro users in the same change. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Acked-by: Madhavan Srinivasan <maddy@linux.ibm.com> Reviewed-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Tested-by: Aditya Gupta <adityag@linux.ibm.com> Link: https://lore.kernel.org/r/20241228-sysfs-const-bin_attr-simple-v2-1-7c6f3f1767a3@weissschuh.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_selem_free()  [Hou Tao]
bpf_selem_free() has the following three callers: (1) bpf_local_storage_update It will be invoked through ->map_update_elem syscall or helpers for storage map. Migration has already been disabled in these running contexts. (2) bpf_sk_storage_clone It has already disabled migration before invoking bpf_selem_free(). (3) bpf_selem_free_list bpf_selem_free_list() has three callers: bpf_selem_unlink_storage(), bpf_local_storage_update() and bpf_local_storage_destroy(). The callers of bpf_selem_unlink_storage() includes: storage map ->map_delete_elem syscall, storage map delete helpers and bpf_local_storage_map_free(). These contexts have already disabled migration when invoking bpf_selem_unlink() which invokes bpf_selem_unlink_storage() and bpf_selem_free_list() correspondingly. bpf_local_storage_update() has been analyzed as the first caller above. bpf_local_storage_destroy() is invoked when freeing the local storage for the kernel object. Now cgroup, task, inode and sock storage have already disabled migration before invoking bpf_local_storage_destroy(). After the analyses above, it is safe to remove migrate_{disable|enable} from bpf_selem_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-17-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_local_storage_free()  [Hou Tao]
bpf_local_storage_free() has three callers: 1) bpf_local_storage_alloc() Its caller must have disabled migration. 2) bpf_local_storage_destroy() Its four callers (bpf_{cgrp|inode|task|sk}_storage_free()) have already invoked migrate_disable() before invoking bpf_local_storage_destroy(). 3) bpf_selem_unlink() Its callers include: cgrp/inode/task/sk storage ->map_delete_elem callbacks, bpf_{cgrp|inode|task|sk}_storage_delete() helpers and bpf_local_storage_map_free(). All of these callers have already disabled migration before invoking bpf_selem_unlink(). Therefore, it is OK to remove migrate_{disable|enable} pair from bpf_local_storage_free(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-16-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_local_storage_alloc()  [Hou Tao]
These two callers of bpf_local_storage_alloc() are the same as bpf_selem_alloc(): bpf_sk_storage_clone() and bpf_local_storage_update(). The running contexts of these two callers have already disabled migration, therefore, there is no need to add extra migrate_{disable|enable} pair in bpf_local_storage_alloc(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-15-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable|enable} from bpf_selem_alloc()  [Hou Tao]
bpf_selem_alloc() has two callers: (1) bpf_sk_storage_clone_elem(): bpf_sk_storage_clone() has already disabled migration before invoking bpf_sk_storage_clone_elem(). (2) bpf_local_storage_update(): its callers include the cgrp/task/inode/sock storage ->map_update_elem() callbacks and the bpf_{cgrp|task|inode|sk}_storage_get() helpers. These running contexts have already disabled migration. Therefore, there is no need to add an extra migrate_{disable|enable} pair in bpf_selem_alloc(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-14-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-01-08  bpf: Remove migrate_{disable,enable} in bpf_cpumask_release()  [Hou Tao]
When a BPF program invokes bpf_cpumask_release(), migration must already be disabled. When bpf_cpumask_release_dtor() invokes bpf_cpumask_release(), the caller bpf_obj_free_fields() has also disabled migration. Therefore, it is OK to remove the unnecessary migrate_{disable|enable} pair in bpf_cpumask_release(). Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20250108010728.207536-13-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>