tracing: Fix tracing_marker may trigger page fault during preempt_disable
[ Upstream commit 3d62ab32df ]

Both tracing_mark_write and tracing_mark_raw_write call
__copy_from_user_inatomic during preempt_disable. But in some cases
__copy_from_user_inatomic may trigger a page fault and subtly call
schedule(). If the task is then migrated to another CPU, the following
warning is triggered:

    if (RB_WARN_ON(cpu_buffer, !local_read(&cpu_buffer->committing)))

An example can illustrate this issue:

    process flow                                    CPU
    ---------------------------------------------------------------------
    tracing_mark_raw_write():                       cpu:0
       ...
       ring_buffer_lock_reserve():                  cpu:0
          ...
          cpu = raw_smp_processor_id()              cpu:0
          cpu_buffer = buffer->buffers[cpu]         cpu:0
          ...
       ...
       __copy_from_user_inatomic():                 cpu:0
          ...
          # page fault
          do_mem_abort():                           cpu:0
             ...
             # call schedule
             schedule()                             cpu:0
             ...
    # the task is scheduled to cpu1
       __buffer_unlock_commit():                    cpu:1
          ...
          ring_buffer_unlock_commit():              cpu:1
             ...
             cpu = raw_smp_processor_id()           cpu:1
             cpu_buffer = buffer->buffers[cpu]      cpu:1

As shown above, the process reads the CPU id twice and the two values
are not the same.

To fix this problem, use copy_from_user_nofault instead of
__copy_from_user_inatomic, as the former performs an access_ok check
before copying.

Link: https://lore.kernel.org/20250819105152.2766363-1-luogengkun@huaweicloud.com
Fixes: 656c7f0d2d ("tracing: Replace kmap with copy_from_user() in trace_marker writing")
Signed-off-by: Luo Gengkun <luogengkun@huaweicloud.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
commit 3f9b5dfbc4 (parent 526d747df4)
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6949,7 +6949,7 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 	entry = ring_buffer_event_data(event);
 	entry->ip = _THIS_IP_;
 
-	len = __copy_from_user_inatomic(&entry->buf, ubuf, cnt);
+	len = copy_from_user_nofault(&entry->buf, ubuf, cnt);
 	if (len) {
 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
 		cnt = FAULTED_SIZE;
@@ -7020,7 +7020,7 @@ tracing_mark_raw_write(struct file *filp, const char __user *ubuf,
 
 	entry = ring_buffer_event_data(event);
 
-	len = __copy_from_user_inatomic(&entry->id, ubuf, cnt);
+	len = copy_from_user_nofault(&entry->id, ubuf, cnt);
 	if (len) {
 		entry->id = -1;
 		memcpy(&entry->buf, FAULTED_STR, FAULTED_SIZE);
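As a usage note, the two write handlers patched above back the trace_marker and trace_marker_raw tracefs files. A minimal userspace sketch that exercises tracing_mark_write() is shown below; the path assumes tracefs is mounted at /sys/kernel/tracing (older setups may use /sys/kernel/debug/tracing), and the program is illustrative only.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Path assumes tracefs is mounted at /sys/kernel/tracing. */
	int fd = open("/sys/kernel/tracing/trace_marker", O_WRONLY);
	const char *msg = "hello from userspace\n";

	if (fd < 0) {
		perror("open trace_marker");
		return 1;
	}
	/* Each write() lands in the ring buffer via tracing_mark_write(). */
	if (write(fd, msg, strlen(msg)) < 0)
		perror("write trace_marker");
	close(fd);
	return 0;
}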