Red Hat CVE advisory: RH:CVE-2024-26712
History: Apr 04, 2024 - 12:06 a.m.

CVE-2024-26712

2024-04-04 00:06:32
Source: redhat.com (access.redhat.com)
Tags: linux kernel, vulnerability fix, powerpc/kasan, page alignment, memory overwriting

AI Score: 6.9 Medium (Confidence: Low)
EPSS: 0.0004 Low (Percentile: 10.4%)

In the Linux kernel, the following vulnerability has been resolved:

powerpc/kasan: Fix addr error caused by page alignment

In kasan_init_region, when k_start is not page aligned, then at the beginning of the for loop k_cur = k_start & PAGE_MASK is less than k_start, and so va = block + k_cur - k_start is less than block. That address va is invalid: the memory address space from va up to block was not allocated by memblock_alloc, will not be reserved by memblock_reserve later, and can be handed out to other users. As a result, memory overwriting occurs.

For example:

    int __init __weak kasan_init_region(void *start, size_t size)
    {
        [...]
        /* if say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */
        block = memblock_alloc(k_end - k_start, PAGE_SIZE);
        [...]
        for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
            /* at the beginning of the for loop
             * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)
             * va(dcd96c00) is less than block(dcd97000), so va is invalid
             */
            void *va = block + k_cur - k_start;
            [...]
        }
        [...]
    }

Therefore, page alignment is performed on k_start before memblock_alloc() to ensure the validity of the va address.
