
Commit ca0e2fc

arm64: probes: Remove broken LDR (literal) uprobe support
jira LE-2177
cve CVE-2024-50099
Rebuild_History Non-Buildable kernel-5.14.0-503.19.1.el9_5
commit-author Mark Rutland <[email protected]>
commit acc450a

The simulate_ldr_literal() and simulate_ldrsw_literal() functions are unsafe to use for uprobes. Both functions were originally written for use with kprobes, and access memory with plain C accesses. When uprobes was added, these were reused unmodified even though they cannot safely access user memory.

There are three key problems:

1) The plain C accesses do not have corresponding extable entries, and thus if they encounter a fault the kernel will treat these as unintentional accesses to user memory, resulting in a BUG() which will kill the kernel thread, and likely lead to further issues (e.g. lockup or panic()).

2) The plain C accesses are subject to HW PAN and SW PAN, and so when either is in use, any attempt to simulate an access to user memory will fault. Thus neither simulate_ldr_literal() nor simulate_ldrsw_literal() can do anything useful when simulating a user instruction on any system with HW PAN or SW PAN.

3) The plain C accesses are privileged, as they run in kernel context, and in practice can access a small range of kernel virtual addresses. The instructions they simulate have a range of +/-1MiB, and since the simulated instruction must itself be a user instruction in the TTBR0 address range, these can address the final 1MiB of the TTBR1 address range by wrapping downwards from an address in the first 1MiB of the TTBR0 address range.

   In contemporary kernels the last 8MiB of the TTBR1 address range is reserved, and accesses to this will always fault, meaning this is no worse than (1).

   Historically, it was theoretically possible for the linear map or vmemmap to spill into the final 8MiB of the TTBR1 address range, but in practice this is extremely unlikely to occur as this would require either:

   * Having enough physical memory to fill the entire linear map all the way to the final 1MiB of the TTBR1 address range.

   * Getting unlucky with KASLR randomization of the linear map such that the populated region happens to overlap with the last 1MiB of the TTBR1 address range.

   ... and in either case, if we were to spill into the final page there would be larger problems, as the final page would alias with error pointers.

Practically speaking, (1) and (2) are the big issues. Given there have been no reports of problems since the broken code was introduced, it appears that no one is relying on probing these instructions with uprobes.

Avoid these issues by not allowing uprobes on LDR (literal) and LDRSW (literal), limiting the use of simulate_ldr_literal() and simulate_ldrsw_literal() to kprobes. Attempts to place uprobes on LDR (literal) and LDRSW (literal) will be rejected, as arm_probe_decode_insn() will return INSN_REJECTED. In future we can consider introducing working uprobes support for these instructions, but this will require more significant work.

Fixes: 9842cea ("arm64: Add uprobe support")
Cc: [email protected]
Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
(cherry picked from commit acc450a)
Signed-off-by: Jonathan Maple <[email protected]>
1 parent 779a13c commit ca0e2fc
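
The wrap-around described in point (3) of the commit message can be checked with ordinary unsigned arithmetic. Below is a small, standalone C sketch, not kernel code: the decode helper, the example PC value, and the encoded instruction are illustrative assumptions. It extracts the 19-bit signed literal offset of an LDR (literal) encoding and shows how a probe placed in the first 1MiB of the TTBR0 range, combined with the most negative offset, wraps into the final 1MiB of the 64-bit address space, i.e. the top of the TTBR1 range.

/*
 * Illustrative only: models the architectural address calculation for
 * LDR (literal), not the kernel's simulate_ldr_literal() implementation.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* LDR (literal): imm19 lives in bits [23:5] and is scaled by 4. */
static int64_t ldr_literal_offset(uint32_t insn)
{
	int64_t imm19 = (insn >> 5) & 0x7ffff;

	if (imm19 & (1 << 18))		/* sign-extend the 19-bit field */
		imm19 -= 1 << 19;
	return imm19 * 4;		/* range is +/- 1MiB */
}

int main(void)
{
	/* Hypothetical probed user instruction near the bottom of TTBR0. */
	uint64_t pc = 0x1000;
	/* 64-bit LDR (literal) with imm19 = -2^18, i.e. an offset of -1MiB. */
	uint32_t insn = 0x58000000u | (0x40000u << 5);

	uint64_t target = pc + (uint64_t)ldr_literal_offset(insn);

	/* Wraps to 0xfffffffffff01000: the last 1MiB of the TTBR1 range. */
	printf("pc=%#" PRIx64 " offset=%" PRId64 " target=%#" PRIx64 "\n",
	       pc, ldr_literal_offset(insn), target);
	return 0;
}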

File tree

1 file changed: +11 −5 lines changed

arch/arm64/kernel/probes/decode-insn.c
Lines changed: 11 additions & 5 deletions

@@ -99,10 +99,6 @@ arm_probe_decode_insn(probe_opcode_t insn, struct arch_probe_insn *api)
 		   aarch64_insn_is_blr(insn) ||
 		   aarch64_insn_is_ret(insn)) {
 		api->handler = simulate_br_blr_ret;
-	} else if (aarch64_insn_is_ldr_lit(insn)) {
-		api->handler = simulate_ldr_literal;
-	} else if (aarch64_insn_is_ldrsw_lit(insn)) {
-		api->handler = simulate_ldrsw_literal;
 	} else {
 		/*
 		 * Instruction cannot be stepped out-of-line and we don't
@@ -140,6 +136,17 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 	probe_opcode_t insn = le32_to_cpu(*addr);
 	probe_opcode_t *scan_end = NULL;
 	unsigned long size = 0, offset = 0;
+	struct arch_probe_insn *api = &asi->api;
+
+	if (aarch64_insn_is_ldr_lit(insn)) {
+		api->handler = simulate_ldr_literal;
+		decoded = INSN_GOOD_NO_SLOT;
+	} else if (aarch64_insn_is_ldrsw_lit(insn)) {
+		api->handler = simulate_ldrsw_literal;
+		decoded = INSN_GOOD_NO_SLOT;
+	} else {
+		decoded = arm_probe_decode_insn(insn, &asi->api);
+	}
 
 	/*
 	 * If there's a symbol defined in front of and near enough to
@@ -157,7 +164,6 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 		else
 			scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
 	}
-	decoded = arm_probe_decode_insn(insn, &asi->api);
 
 	if (decoded != INSN_REJECTED && scan_end)
 		if (is_probed_address_atomic(addr - 1, scan_end))
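
Taken together, the two hunks split the decode responsibilities: the shared arm_probe_decode_insn() path, which uprobes also use, now rejects LDR (literal) and LDRSW (literal), while the kprobe-only arm_kprobe_decode_insn() re-adds the simulation handlers before falling back to the shared decoder. The standalone C sketch below is a simplified model of that flow under stated assumptions: the struct, its flags, and the helper names other than the enum values are invented for illustration; it is not the kernel's code.

#include <stdbool.h>
#include <stdio.h>

enum probe_insn { INSN_REJECTED, INSN_GOOD_NO_SLOT, INSN_GOOD };

/* Invented stand-in for the instruction classification helpers. */
struct probe_ctx {
	bool is_steppable;		/* safe to single-step out of line */
	bool is_simulated_branch;	/* b.cond, br/blr/ret, cbz, ... */
	bool is_ldr_lit;		/* LDR (literal) */
	bool is_ldrsw_lit;		/* LDRSW (literal) */
};

/* Model of arm_probe_decode_insn(): shared by kprobes and uprobes. */
static enum probe_insn shared_decode(const struct probe_ctx *p)
{
	if (p->is_steppable)
		return INSN_GOOD;		/* step a copy out of line */
	if (p->is_simulated_branch)
		return INSN_GOOD_NO_SLOT;	/* simulated in software */
	/* LDR/LDRSW (literal), and anything else unhandled, is rejected. */
	return INSN_REJECTED;
}

/* Model of arm_kprobe_decode_insn(): kernel memory can be read with
 * plain loads, so kprobes keep simulating literal loads. */
static enum probe_insn kprobe_decode(const struct probe_ctx *p)
{
	if (p->is_ldr_lit || p->is_ldrsw_lit)
		return INSN_GOOD_NO_SLOT;	/* simulate_ldr{,sw}_literal() */
	return shared_decode(p);
}

int main(void)
{
	struct probe_ctx ldr_lit = { .is_ldr_lit = true };

	printf("uprobe on LDR (literal): %s\n",
	       shared_decode(&ldr_lit) == INSN_REJECTED ? "rejected" : "allowed");
	printf("kprobe on LDR (literal): %s\n",
	       kprobe_decode(&ldr_lit) == INSN_GOOD_NO_SLOT ? "simulated" : "other");
	return 0;
}

The split reflects the design choice in the commit message: plain C loads are fine when the target is kernel memory (kprobes), but a user-safe simulation would need exception-table-protected user accessors, which is left as future work.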
