Commit f91072e

Jiri Olsa authored and Ingo Molnar committed
perf/core: Fix race in the perf_mmap_close() function
There's a possible race in perf_mmap_close() when checking the ring buffer's mmap_count refcount value. The problem is that the mmap_count check is not atomic, because we call atomic_dec() and atomic_read() separately:

  perf_mmap_close:
  ...
   atomic_dec(&rb->mmap_count);
   ...
   if (atomic_read(&rb->mmap_count))
      goto out_put;

   <ring buffer detach>
   free_uid

  out_put:
   ring_buffer_put(rb); /* could be last */

The race can happen when two (or more) events share the same ring buffer and both go through atomic_dec(), then both see 0 as the refcount value later in atomic_read(). Both will then go on and execute code that is meant to run just once.

The code that detaches the ring buffer is probably fine to be executed more than once, but the problem is the call to free_uid(), which later manifests in related crashes and refcount warnings, like:

  refcount_t: addition on 0; use-after-free.
  ...
  RIP: 0010:refcount_warn_saturate+0x6d/0xf
  ...
  Call Trace:
   prepare_creds+0x190/0x1e0
   copy_creds+0x35/0x172
   copy_process+0x471/0x1a80
   _do_fork+0x83/0x3a0
   __do_sys_wait4+0x83/0x90
   __do_sys_clone+0x85/0xa0
   do_syscall_64+0x5b/0x1e0
   entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fix this by using an atomic decrement-and-test instead of the separate calls.

Tested-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Wade Mealing <wmealing@redhat.com>
Fixes: 9bb5d40 ("perf: Fix mmap() accounting hole")
Link: https://lore.kernel.org/r/20200916115311.GE2301783@krava
1 parent 3dbde69 commit f91072e

1 file changed: kernel/events/core.c

Lines changed: 4 additions & 3 deletions
@@ -5895,11 +5895,11 @@ static void perf_pmu_output_stop(struct perf_event *event);
 static void perf_mmap_close(struct vm_area_struct *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
-
 	struct perf_buffer *rb = ring_buffer_get(event);
 	struct user_struct *mmap_user = rb->mmap_user;
 	int mmap_locked = rb->mmap_locked;
 	unsigned long size = perf_data_size(rb);
+	bool detach_rest = false;

 	if (event->pmu->event_unmapped)
 		event->pmu->event_unmapped(event, vma->vm_mm);
@@ -5930,7 +5930,8 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 		mutex_unlock(&event->mmap_mutex);
 	}

-	atomic_dec(&rb->mmap_count);
+	if (atomic_dec_and_test(&rb->mmap_count))
+		detach_rest = true;

 	if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
 		goto out_put;
@@ -5939,7 +5940,7 @@ static void perf_mmap_close(struct vm_area_struct *vma)
 	mutex_unlock(&event->mmap_mutex);

 	/* If there's still other mmap()s of this buffer, we're done. */
-	if (atomic_read(&rb->mmap_count))
+	if (!detach_rest)
 		goto out_put;

 	/*
