CVE-2021-47608

2024-06-19 15:15:55
web.nvd.nist.gov
linux kernel, vulnerability, unprivileged users, kernel pointers, bpf, atomic fetch

EPSS: 0

Percentile: 9.0%

In the Linux kernel, the following vulnerability has been resolved:

bpf: Fix kernel address leakage in atomic fetch

The change in commit 37086bfdc737 (“bpf: Propagate stack bounds to registers
in atomics w/ BPF_FETCH”) around check_mem_access() handling is buggy, since
it allows unprivileged users to leak kernel pointers. For example,
an atomic fetch/and with -1 on a stack destination which holds a spilled
pointer will migrate the spilled register type into a scalar, which can then
be exported out of the program (since scalar != pointer) by dumping it into
a map value.
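
As an illustration only (not taken from the advisory text), a minimal
instruction sequence along these lines, written with the kernel's BPF
instruction macros from include/linux/filter.h, would exercise the buggy
path; the register choices, stack offset, and the omitted map plumbing are
assumptions for the sketch:

  struct bpf_insn leak_sketch[] = {
          /* Spill a pointer (the frame pointer r10) into the stack slot fp-8. */
          BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_10, -8),
          /* r1 = -1, an AND mask that preserves every bit of the old value. */
          BPF_MOV64_IMM(BPF_REG_1, -1),
          /* Atomic fetch-and on fp-8: r1 receives the old value, i.e. the
           * spilled pointer, which the buggy check_mem_access() handling
           * re-types as a scalar instead of rejecting the program.
           */
          BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
          /* The "scalar" in r1 can now be stored into a map value and read
           * back from user space, exposing a kernel stack address
           * (map setup and store omitted).
           */
          BPF_MOV64_IMM(BPF_REG_0, 0),
          BPF_EXIT_INSN(),
  };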

The original implementation of XADD prevented this situation by using a
double call to check_mem_access(), one with BPF_READ and a subsequent one
with BPF_WRITE, in both cases passing -1 as a placeholder value instead of a
register, since XADD semantics do not include a value fetch. The BPF_READ
path also included a check in check_stack_read_fixed_off() which rejects the
program if the stack slot holds a pointer (per __is_pointer_value()) and
dst_regno < 0. The latter distinguishes a regular stack spill/fill from an
arithmetic operation, which is disallowed on non-scalars; see also
6e7e63cbb023 (“bpf: Forbid XADD on spilled pointers for unprivileged users”)
for more context on check_mem_access() and its handling of the placeholder
value -1.
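
The double call referred to above looks roughly like the following inside
kernel/bpf/verifier.c; the exact signature and surrounding code vary across
kernel versions, so treat this as an approximation rather than the literal
source:

  /* Pre-BPF_FETCH (XADD-style) handling: validate the destination both as
   * a read and as a write, passing -1 as the value register because the
   * instruction does not fetch the old value into any register.
   */
  err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                         BPF_SIZE(insn->code), BPF_READ, -1, true);
  if (err)
          return err;

  err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                         BPF_SIZE(insn->code), BPF_WRITE, -1, true);
  if (err)
          return err;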

One minimally intrusive option to fix the leak is for the BPF_FETCH case to
first perform the BPF_READ check via check_mem_access() with -1 as the
register, followed by the actual load with a non-negative load_reg in order
to propagate stack bounds to registers.
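
Conceptually, the fixed BPF_FETCH handling then becomes something like the
sketch below; this is only an outline of the approach described above, not
the actual upstream patch:

  /* First validate the read with the -1 placeholder, so a spilled pointer
   * in the destination slot is rejected for unprivileged programs just as
   * it was for XADD.
   */
  err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                         BPF_SIZE(insn->code), BPF_READ, -1, true);
  if (err)
          return err;

  if (load_reg >= 0) {
          /* Only then perform the real load, so stack bounds are still
           * propagated into load_reg for the BPF_FETCH case.
           */
          err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
                                 BPF_SIZE(insn->code), BPF_READ, load_reg,
                                 true);
          if (err)
                  return err;
  }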