SNP Guest VSM: Start VP hypercall handling #634

Merged 5 commits on Feb 26, 2025
openhcl/virt_mshv_vtl/src/lib.rs (26 additions, 3 deletions)
@@ -420,6 +420,8 @@ pub struct UhCvmVpInner {
tlb_lock_info: VtlArray<TlbLockInfo, 2>,
/// Whether VTL 1 has been enabled on the vp
vtl1_enabled: Mutex<bool>,
/// Whether the VP has been started via the StartVp hypercall.

Contributor:

Would it make sense for vtl1_enabled to be a ReadWriteLock instead of a Mutex? It seems like we have a lot of paths that read it, but only one that needs to write to it and have exclusive access.

Contributor Author:

@jstarks I think in the past you've said that RwLocks don't scale to a large number of processors, so I interpreted that as "be careful when using them." Do you have general guidelines for when you would choose to use or not use an RwLock, beyond basing it on the ratio of expected readers to writers?

started: AtomicBool,
}

#[cfg_attr(guest_arch = "aarch64", expect(dead_code))]
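Regarding the ReadWriteLock suggestion in the review thread above, here is a hypothetical sketch (not the code in this PR, which keeps `vtl1_enabled` behind a Mutex) of what the read-mostly alternative could look like with std::sync::RwLock; the surrounding struct and method names are illustrative:

```rust
use std::sync::RwLock;

// Hypothetical stand-in for the per-VP state discussed above.
struct VpInnerSketch {
    vtl1_enabled: RwLock<bool>,
}

impl VpInnerSketch {
    // The many read-only paths can hold the shared read guard concurrently.
    fn is_vtl1_enabled(&self) -> bool {
        *self.vtl1_enabled.read().unwrap()
    }

    // Only the single enable path needs the exclusive write guard.
    fn enable_vtl1(&self) {
        *self.vtl1_enabled.write().unwrap() = true;
    }
}
```

The counterpoint raised in the reply is that an RwLock's reader-side bookkeeping can itself become a point of contention on machines with many processors, so a plain Mutex (or an atomic flag) may still be the safer default for a rarely written boolean.
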
@@ -495,7 +497,7 @@ struct VbsIsolatedVtl1State {
#[derive(Clone, Copy, Default, Inspect)]
struct HardwareCvmVtl1State {
/// Whether VTL 1 has been enabled on any vp
enabled_on_vp_count: u32,
enabled_on_any_vp: bool,
/// Whether guest memory should be zeroed before it resets.
zero_memory_on_reset: bool,
/// Whether a vp can be started or reset by a lower vtl.
@@ -646,11 +648,31 @@ struct UhVpInner {
vp_info: TargetVpInfo,
cpu_index: u32,
#[inspect(with = "|arr| inspect::iter_by_index(arr.iter().map(|v| v.lock().is_some()))")]
hv_start_enable_vtl_vp: VtlArray<Mutex<Option<Box<hvdef::hypercall::InitialVpContextX64>>>, 2>,
hv_start_enable_vtl_vp: VtlArray<Mutex<Option<Box<VpStartEnableVtl>>>, 2>,
sidecar_exit_reason: Mutex<Option<SidecarExitReason>>,
}

#[cfg_attr(not(guest_arch = "x86_64"), allow(dead_code))]
#[derive(Debug, Inspect)]
/// Which operation is setting the initial vp context
pub enum InitialVpContextOperation {
/// The VP is being started via the StartVp hypercall.
StartVp,
/// The VP is being started via the EnableVpVtl hypercall.
EnableVpVtl,
}

#[cfg_attr(not(guest_arch = "x86_64"), allow(dead_code))]
#[derive(Debug, Inspect)]
/// State for handling StartVp/EnableVpVtl hypercalls.
pub struct VpStartEnableVtl {
/// Which operation, startvp or enablevpvtl, is setting the initial vp
/// context
operation: InitialVpContextOperation,
#[inspect(skip)]
context: hvdef::hypercall::InitialVpContextX64,
}

#[derive(Debug, Inspect)]
struct TlbLockInfo {
/// The set of VPs that are waiting for this VP to release the TLB lock.
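As a reading aid only (none of this is in the diff), a self-contained sketch of how a StartVp handler might populate the per-VTL slot using the new types added in this hunk; `InitialVpContext`, `VpInnerSketch`, and `queue_start_vp` are illustrative names:

```rust
use std::sync::Mutex;

// Simplified stand-ins for the types added in this change; `InitialVpContext`
// takes the place of hvdef::hypercall::InitialVpContextX64.
#[derive(Debug)]
enum InitialVpContextOperation {
    StartVp,
    EnableVpVtl,
}

#[derive(Debug, Default, Clone, Copy)]
struct InitialVpContext;

#[derive(Debug)]
struct VpStartEnableVtl {
    operation: InitialVpContextOperation,
    context: InitialVpContext,
}

// One pending slot per VTL, mirroring `hv_start_enable_vtl_vp` above.
struct VpInnerSketch {
    hv_start_enable_vtl_vp: [Mutex<Option<Box<VpStartEnableVtl>>>; 2],
}

impl VpInnerSketch {
    // A StartVp handler records both the initial context and which hypercall
    // supplied it, so the target VP can tell the two cases apart later.
    fn queue_start_vp(&self, vtl: usize, context: InitialVpContext) {
        *self.hv_start_enable_vtl_vp[vtl].lock().unwrap() =
            Some(Box::new(VpStartEnableVtl {
                operation: InitialVpContextOperation::StartVp,
                context,
            }));
    }
}
```

The `operation` tag is what lets the target VP distinguish a StartVp request from an EnableVpVtl request when it consumes the queued context.
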
@@ -1859,9 +1881,10 @@ impl UhProtoPartition<'_> {
) -> Result<UhCvmPartitionState, Error> {
let vp_count = params.topology.vp_count() as usize;
let vps = (0..vp_count)
.map(|_vp_index| UhCvmVpInner {
.map(|vp_index| UhCvmVpInner {
tlb_lock_info: VtlArray::from_fn(|_| TlbLockInfo::new(vp_count)),
vtl1_enabled: Mutex::new(false),
started: AtomicBool::new(vp_index == 0),
})
.collect();
let tlb_locked_vps =
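Finally, a brief sketch (assumed usage, not shown in this diff) of the new `started` flag: VP 0, the bootstrap processor, is created already started, while every other VP presumably stays parked until a StartVp hypercall marks it runnable. The type and method names below are illustrative:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative only: mirrors `started: AtomicBool::new(vp_index == 0)` above.
struct CvmVpInnerSketch {
    started: AtomicBool,
}

impl CvmVpInnerSketch {
    fn new(vp_index: u32) -> Self {
        Self {
            // VP 0 comes up already started; all other VPs begin unstarted.
            started: AtomicBool::new(vp_index == 0),
        }
    }

    // A StartVp handler would flip the flag for the target VP...
    fn mark_started(&self) {
        self.started.store(true, Ordering::Release);
    }

    // ...and the run loop would check it before dispatching the VP.
    fn is_started(&self) -> bool {
        self.started.load(Ordering::Acquire)
    }
}
```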