x86: vmm: Disable IRQs when mucking with pcpu GPCs

vmx_clear_vmcs() is called from a few places, and interrupts could be
on.  We could have had a race where we start to clear, then get
interrupted by an IPI/IKM that mucks with the per-cpu GPC state.  Then
the interrupt returns.

I didn't see this one - we'd probably need at least one VM bouncing
around the cores to get this bug.

Signed-off-by: Barret Rhoden <brho@cs.berkeley.edu>
diff --git a/kern/arch/x86/vmm/intel/vmx.c b/kern/arch/x86/vmm/intel/vmx.c
index 8e1033d..ad61e41 100644
--- a/kern/arch/x86/vmm/intel/vmx.c
+++ b/kern/arch/x86/vmm/intel/vmx.c
@@ -1357,8 +1357,11 @@
  *
  */
 void vmx_clear_vmcs(void)
 {
-	struct guest_pcore *gpc = PERCPU_VAR(gpc_to_clear_to);
+	struct guest_pcore *gpc;
+	int8_t irq_state = 0;
+	disable_irqsave(&irq_state);
+	gpc = PERCPU_VAR(gpc_to_clear_to);
 	if (gpc) {
 		vmcs_clear(gpc->vmcs);
 		ept_sync_context(gpc_get_eptp(gpc));
@@ -1367,6 +1370,7 @@
 		gpc->vmcs_core_id = -1;
 		PERCPU_VAR(gpc_to_clear_to) = NULL;
 	}
+	enable_irqsave(&irq_state);
 }
 
 static void __clear_vmcs(uint32_t srcid, long a0, long a1, long a2)
@@ -1416,6 +1420,7 @@
 	/* We don't have to worry about races yet.  No one will try to load gpc
 	 * until we've returned and unlocked, and no one will clear an old VMCS to
 	 * this GPC, since it was cleared before we finished loading (above). */
+	assert(!irq_is_enabled());
 	gpc->vmcs_core_id = core_id();
 	PERCPU_VAR(gpc_to_clear_to) = gpc;
 }
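The essence of the fix is the save/disable/restore pairing around the read-modify-write of the per-cpu pointer, so an IPI/IKM handler can't run between loading gpc_to_clear_to and clearing it. A minimal standalone sketch of that pattern, using toy stand-ins for disable_irqsave()/enable_irqsave() and for the PERCPU_VAR(gpc_to_clear_to) variable (the real ones are Akaros kernel internals, and the VMCS/EPT calls are elided):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of the CPU's IRQ state: nonzero means IRQs are enabled. */
static int irqs_enabled = 1;

/* Sketch of disable_irqsave()/enable_irqsave(): save whether IRQs were
 * on, disable them, and later re-enable only if they were on before. */
static void disable_irqsave(int8_t *state)
{
	*state = (int8_t)irqs_enabled;
	irqs_enabled = 0;
}

static void enable_irqsave(int8_t *state)
{
	if (*state)
		irqs_enabled = 1;
}

struct guest_pcore {
	int vmcs_core_id;
};

/* Stand-in for PERCPU_VAR(gpc_to_clear_to) on the current core. */
static struct guest_pcore *gpc_to_clear_to;

/* The pattern from the patch: the load of the per-cpu pointer, the
 * clearing work, and the store of NULL all happen with IRQs off, so an
 * interrupt handler cannot observe or modify half-cleared state. */
void vmx_clear_vmcs(void)
{
	struct guest_pcore *gpc;
	int8_t irq_state = 0;

	disable_irqsave(&irq_state);
	gpc = gpc_to_clear_to;
	if (gpc) {
		/* vmcs_clear(), ept_sync_context(), etc. go here. */
		gpc->vmcs_core_id = -1;
		gpc_to_clear_to = NULL;
	}
	enable_irqsave(&irq_state);
}
```

Note the restore is conditional on the saved state: if the caller already had IRQs off, enable_irqsave() leaves them off, which is why the patch can also assert(!irq_is_enabled()) in the path that publishes a new gpc_to_clear_to.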