In the Linux kernel, the following vulnerability has been resolved:
xsk: Fix race condition in AF_XDP generic RX path
Move rx_lock from xsk_socket to xsk_buff_pool. This fixes synchronization for shared umem mode in the generic RX path, where multiple sockets share a single xsk_buff_pool.
The RX queue is exclusive to an xsk_socket, while the FILL queue can be shared between multiple sockets. This could result in a race condition where two CPU cores access the RX paths of two different sockets sharing the same umem, as sketched below.
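A minimal sketch of the shape of the change, assuming the field placement in include/net/xdp_sock.h and include/net/xsk_buff_pool.h (illustrative, not the verbatim patch):

```c
/* Before the fix: each socket carried its own lock, so two CPUs
 * receiving on different sockets bound to the same umem serialized
 * on two different locks while both consuming buffers from the one
 * shared FILL queue. */
struct xdp_sock {
	/* ... */
	spinlock_t rx_lock;	/* per-socket: does NOT cover the shared FQ */
};

/* After the fix: the lock lives in the pool shared by those sockets,
 * so the generic RX path of every socket on one umem serializes on
 * the same lock. */
struct xsk_buff_pool {
	/* ... */
	spinlock_t rx_lock;	/* covers the RX ring and the shared FILL queue */
};
```

Moving the lock makes its scope match the scope of the data it protects: the FILL queue lives in the pool, so the lock guarding it must live there too.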
Protect both queues by acquiring the spinlock in the shared xsk_buff_pool. Lock contention may be minimized in the future by some per-thread FQ buffering.
It's safe and necessary to move spin_lock_bh(rx_lock) after xsk_rcv_check() (see the sketch after this list):

* xs->pool and spinlock_init are synchronized by the xsk_bind() -> xsk_is_bound() memory barriers.
* xsk_rcv_check() may return true at the moment of xsk_release() or xsk_unbind_dev(); however, this will not cause any data races or race conditions. xsk_unbind_dev() removes the XDP socket from all maps and waits for completion of all outstanding RX operations. Packets in the RX path will either complete safely or be dropped.
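A condensed sketch of the resulting generic RX path, assuming the helper names used in net/xdp/xsk.c (xsk_rcv_check(), __xsk_rcv(), xsk_flush()); the real function carries more detail, this only illustrates the lock placement described above:

```c
/* Generic (copy-mode) RX entry point, simplified for illustration. */
int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
{
	u32 len = xdp->data_end - xdp->data;	/* single-buffer case */
	int err;

	/* Checked before taking the lock: once xsk_is_bound() observes
	 * the bind, xs->pool and its rx_lock are fully initialized. */
	err = xsk_rcv_check(xs, xdp, len);
	if (err)
		return err;

	/* Pool-level lock: serializes every socket sharing this umem,
	 * covering the per-socket RX ring and the shared FILL queue. */
	spin_lock_bh(&xs->pool->rx_lock);
	err = __xsk_rcv(xs, xdp, len);
	xsk_flush(xs);
	spin_unlock_bh(&xs->pool->rx_lock);

	return err;
}
```

Taking the lock only after xsk_rcv_check() keeps the unbound and mid-unbind paths lock-free, relying on the two bullet points above for safety.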
History

Wed, 09 Jul 2025 02:30:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | cvssV3_1 | cvssV3_1 |

Wed, 21 May 2025 03:00:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| References | | |
| Metrics | threat_severity | cvssV3_1 |

Tue, 20 May 2025 15:30:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Description | | (description added; full text above) |
| Title | | xsk: Fix race condition in AF_XDP generic RX path |
| References | | |

Status: PUBLISHED
Assigner: Linux
Published:
Updated: 2025-05-26T05:23:44.292Z
Reserved: 2025-04-16T04:51:23.968Z
Link: CVE-2025-37920


Status: Awaiting Analysis
Published: 2025-05-20T16:15:28.603
Modified: 2025-05-21T20:25:16.407
Link: CVE-2025-37920
