
Commit 4a96998

Guo Chao authored and Marcelo Tosatti committed
KVM: x86: Fix typos in x86.c
Signed-off-by: Guo Chao <[email protected]>
Signed-off-by: Marcelo Tosatti <[email protected]>
1 parent c5ec2e5 commit 4a96998

File tree

1 file changed, +7 -7 lines changed


arch/x86/kvm/x86.c

Lines changed: 7 additions & 7 deletions
@@ -1093,7 +1093,7 @@ void kvm_write_tsc(struct kvm_vcpu *vcpu, u64 data)
 	 * For each generation, we track the original measured
 	 * nanosecond time, offset, and write, so if TSCs are in
 	 * sync, we can match exact offset, and if not, we can match
-	 * exact software computaion in compute_guest_tsc()
+	 * exact software computation in compute_guest_tsc()
 	 *
 	 * These values are tracked in kvm->arch.cur_xxx variables.
 	 */
@@ -1500,7 +1500,7 @@ static int kvm_pv_enable_async_pf(struct kvm_vcpu *vcpu, u64 data)
 {
 	gpa_t gpa = data & ~0x3f;
 
-	/* Bits 2:5 are resrved, Should be zero */
+	/* Bits 2:5 are reserved, Should be zero */
 	if (data & 0x3c)
 		return 1;
 
@@ -1723,7 +1723,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 		 * Ignore all writes to this no longer documented MSR.
 		 * Writes are only relevant for old K7 processors,
 		 * all pre-dating SVM, but a recommended workaround from
-		 * AMD for these chips. It is possible to speicify the
+		 * AMD for these chips. It is possible to specify the
 		 * affected processor models on the command line, hence
 		 * the need to ignore the workaround.
 		 */
@@ -4491,7 +4491,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gva_t gva)
 
 	/*
 	 * if emulation was due to access to shadowed page table
-	 * and it failed try to unshadow page and re-entetr the
+	 * and it failed try to unshadow page and re-enter the
 	 * guest to let CPU execute the instruction.
 	 */
 	if (kvm_mmu_unprotect_page_virt(vcpu, gva))
@@ -5587,7 +5587,7 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	/*
 	 * We are here if userspace calls get_regs() in the middle of
 	 * instruction emulation. Registers state needs to be copied
-	 * back from emulation context to vcpu. Usrapace shouldn't do
+	 * back from emulation context to vcpu. Userspace shouldn't do
 	 * that usually, but some bad designed PV devices (vmware
 	 * backdoor interface) need this to work
 	 */
@@ -6116,7 +6116,7 @@ int kvm_arch_hardware_enable(void *garbage)
 	 * as we reset last_host_tsc on all VCPUs to stop this from being
 	 * called multiple times (one for each physical CPU bringup).
 	 *
-	 * Platforms with unnreliable TSCs don't have to deal with this, they
+	 * Platforms with unreliable TSCs don't have to deal with this, they
 	 * will be compensated by the logic in vcpu_load, which sets the TSC to
 	 * catchup mode. This will catchup all VCPUs to real time, but cannot
 	 * guarantee that they stay in perfect synchronization.
@@ -6391,7 +6391,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		map_flags = MAP_SHARED | MAP_ANONYMOUS;
 
 	/*To keep backward compatibility with older userspace,
-	 *x86 needs to hanlde !user_alloc case.
+	 *x86 needs to handle !user_alloc case.
 	 */
 	if (!user_alloc) {
 		if (npages && !old.rmap) {
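A side note for readers following the context lines of the second hunk: the reserved-bits check in kvm_pv_enable_async_pf() relies on 0x3c covering exactly bits 2:5, while ~0x3f strips all six low control/reserved bits to leave a 64-byte-aligned address. The fragment below is a minimal standalone sketch of that mask arithmetic only, not KVM code; the helper name and sample values are made up for illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical stand-in mirroring the check quoted in the diff above:
 * bits 2:5 of the value are reserved and must be zero (0x3c == 0b111100),
 * and the low six bits are masked off before the rest is used as an address.
 */
static int check_low_bits(uint64_t data)
{
	uint64_t addr = data & ~0x3fULL;	/* drop bits 0:5, leaving a 64-byte-aligned address */

	if (data & 0x3c)	/* any reserved bit in 2:5 set? */
		return 1;	/* reject, as the kernel code returns 1 */

	printf("accepted: addr=%#llx, low bits=%#llx\n",
	       (unsigned long long)addr, (unsigned long long)(data & 0x3fULL));
	return 0;
}

int main(void)
{
	check_low_bits(0x1000 | 0x1);	/* reserved bits clear -> accepted */
	check_low_bits(0x1000 | 0x4);	/* bit 2 set -> reserved, rejected */
	return 0;
}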

0 commit comments
