
Commit 5d9ca30

Xiao Guangrong authored and avikivity committed
KVM: MMU: fix detecting misaligned accessed
Sometimes we only modify the last byte of a pte to update a status bit; for example, clear_bit() is used to clear the r/w bit in the Linux kernel, and the 'andb' instruction is used in that function. In this case kvm_mmu_pte_write() treats the write as a misaligned access, and the shadow page table is zapped.

Signed-off-by: Xiao Guangrong <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
1 parent 889e5cb commit 5d9ca30

File tree

1 file changed: +8, -0 lines


arch/x86/kvm/mmu.c

Lines changed: 8 additions & 0 deletions
@@ -3602,6 +3602,14 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 
 	offset = offset_in_page(gpa);
 	pte_size = sp->role.cr4_pae ? 8 : 4;
+
+	/*
+	 * Sometimes, the OS only writes the last one bytes to update status
+	 * bits, for example, in linux, andb instruction is used in clear_bit().
+	 */
+	if (!(offset & (pte_size - 1)) && bytes == 1)
+		return false;
+
 	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
 	misaligned |= bytes < 4;
 
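For context, the arithmetic in this hunk can be exercised outside the kernel. Below is a minimal standalone C sketch, not the kernel code itself: the function name check_write_misaligned, the fixed pte_size of 8, and the sample offsets are illustrative assumptions. It reproduces the check including the new early return, showing why a one-byte status-bit update at a pte-aligned offset (the pattern clear_bit() generates via andb) is no longer reported as misaligned, while a write that straddles two ptes still is.

#include <stdbool.h>
#include <stdio.h>

/* Standalone sketch of the misalignment check, assuming 8-byte ptes. */
static bool check_write_misaligned(unsigned offset, int bytes)
{
	unsigned pte_size = 8;	/* as if sp->role.cr4_pae were set */
	unsigned misaligned;

	/*
	 * New early return: a single-byte write starting on a pte boundary
	 * only touches status bits of one pte and is not misaligned.
	 */
	if (!(offset & (pte_size - 1)) && bytes == 1)
		return false;

	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
	misaligned |= bytes < 4;
	return misaligned;
}

int main(void)
{
	/* One-byte status-bit update at an 8-byte-aligned offset (clear_bit pattern). */
	printf("1-byte write at 0x18: %s\n",
	       check_write_misaligned(0x18, 1) ? "misaligned" : "not misaligned");

	/* An 8-byte write starting at 0x1c straddles two 8-byte ptes. */
	printf("8-byte write at 0x1c: %s\n",
	       check_write_misaligned(0x1c, 8) ? "misaligned" : "not misaligned");
	return 0;
}

Without the early return, the bytes < 4 term would flag the one-byte write as misaligned and the shadow page would be zapped; with it, such writes fall through to the normal pte-update path.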
