perf/core: Remove retry loop from perf_mmap()

AFAICT there is no actual benefit from the mutex drop on re-try. The
'worst' case scenario is that we instantly re-gain the mutex without
perf_mmap_close() getting it. So might as well make that the normal
case.

Reflow the code to make the ring buffer detach case naturally flow
into the no ring buffer case.
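
To illustrate the pattern change only (a standalone userspace sketch, not
the kernel code and not part of the patch: the names shared_buf, get_ref(),
alloc_buf(), map_retry() and map_once() are made up here; they merely stand
in for event->rb, atomic_inc_not_zero(&rb->mmap_count), ring buffer
allocation, and the old/new perf_mmap() flow):

/*
 * Toy sketch: "shared_buf" stands in for event->rb, get_ref() for
 * atomic_inc_not_zero(&rb->mmap_count), and clearing shared_buf for
 * ring_buffer_attach(event, NULL).
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct buffer {
	int refcount;	/* 0 means a closer already tore the buffer down */
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct buffer *shared_buf;

/* Take a reference only while the buffer is still live. */
static bool get_ref(struct buffer *buf)
{
	if (buf->refcount == 0)
		return false;
	buf->refcount++;
	return true;
}

static struct buffer *alloc_buf(void)
{
	struct buffer *buf = calloc(1, sizeof(*buf));

	if (buf)
		buf->refcount = 1;
	return buf;
}

/* Old shape: losing the race drops the mutex and retries from the top. */
static struct buffer *map_retry(void)
{
	struct buffer *buf;
again:
	pthread_mutex_lock(&lock);
	if (shared_buf) {
		if (!get_ref(shared_buf)) {
			shared_buf = NULL;	/* detach the dead buffer */
			pthread_mutex_unlock(&lock);
			goto again;		/* typically re-acquires the mutex at once */
		}
		buf = shared_buf;
		goto unlock;
	}
	buf = shared_buf = alloc_buf();
unlock:
	pthread_mutex_unlock(&lock);
	return buf;
}

/*
 * New shape: keep the mutex; a lost race just falls through to the
 * "no buffer" allocation path, so no retry loop is needed.
 */
static struct buffer *map_once(void)
{
	struct buffer *buf;

	pthread_mutex_lock(&lock);
	if (shared_buf) {
		if (get_ref(shared_buf)) {
			buf = shared_buf;	/* mapped the existing buffer again */
			goto unlock;
		}
		shared_buf = NULL;		/* detach, continue as if !shared_buf */
	}
	buf = shared_buf = alloc_buf();
unlock:
	pthread_mutex_unlock(&lock);
	return buf;
}

int main(void)
{
	printf("retry: %p  once: %p\n", (void *)map_retry(), (void *)map_once());
	return 0;
}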

[ mingo: Forward ported it ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lore.kernel.org/r/20241104135519.463607258@infradead.org

Author:    Peter Zijlstra
Date:      2024-11-04 14:39:26 +01:00
Committer: Ingo Molnar
Parent:    0c8a4e4139
Commit:    8eaec7bb72

--- a/kernel/events/core.c
+++ b/kernel/events/core.c

@@ -6719,28 +6719,33 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 			return -EINVAL;
 
 		WARN_ON_ONCE(event->ctx->parent_ctx);
-again:
 		mutex_lock(&event->mmap_mutex);
+
 		if (event->rb) {
 			if (data_page_nr(event->rb) != nr_pages) {
 				ret = -EINVAL;
 				goto unlock;
 			}
 
-			if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
+			if (atomic_inc_not_zero(&event->rb->mmap_count)) {
 				/*
-				 * Raced against perf_mmap_close(); remove the
-				 * event and try again.
+				 * Success -- managed to mmap() the same buffer
+				 * multiple times.
 				 */
-				ring_buffer_attach(event, NULL);
-				mutex_unlock(&event->mmap_mutex);
-				goto again;
+				ret = 0;
+				/* We need the rb to map pages. */
+				rb = event->rb;
+				goto unlock;
 			}
 
-			/* We need the rb to map pages. */
-			rb = event->rb;
-			goto unlock;
+			/*
+			 * Raced against perf_mmap_close()'s
+			 * atomic_dec_and_mutex_lock() remove the
+			 * event and continue as if !event->rb
+			 */
+			ring_buffer_attach(event, NULL);
 		}
+
 	} else {
 		/*
 		 * AUX area mapping: if rb->aux_nr_pages != 0, it's already