==============================
Running nested guests with KVM
==============================

Nested virtualization is the ability to run a guest inside another
guest (the inner guest can be KVM-based or a different hypervisor).
The straightforward example is a KVM guest that in turn runs on a KVM
guest (the rest of this document is built on this example)::

               .----------------.  .----------------.
               |                |  |                |
               |       L2       |  |       L2       |
               | (Nested Guest) |  | (Nested Guest) |
               |                |  |                |
               |----------------'--'----------------|
               |                                    |
               |       L1 (Guest Hypervisor)        |
               |           KVM (/dev/kvm)           |
               |                                    |
      .------------------------------------------------------.
      |                 L0 (Host Hypervisor)                 |
      |                    KVM (/dev/kvm)                    |
      |------------------------------------------------------|
      |      Hardware (with virtualization extensions)       |
      '------------------------------------------------------'

Terminology:

- L0 – level-0; the bare metal host, running KVM.

- L1 – level-1 guest; a VM running on L0; also called the "guest
  hypervisor", as it itself is capable of running KVM.

- L2 – level-2 guest; a VM running on L1; this is the "nested guest".

.. note:: The above diagram is modelled after the x86 architecture;
          s390x, ppc64 and other architectures are likely to have
          a different design for nesting.

          For example, s390x always has an LPAR (LogicalPARtition)
          hypervisor running on bare metal, adding another layer and
          resulting in at least four levels in a nested setup -- L0
          (bare metal, running the LPAR hypervisor), L1 (host
          hypervisor), L2 (guest hypervisor), L3 (nested guest).

          This document will stick with the three-level terminology
          (L0, L1, and L2) for all architectures, and will largely
          focus on x86.


Use Cases
---------

There are several scenarios where nested KVM can be useful, to name a
few:

- As a developer, you want to test your software on different operating
  systems (OSes). Instead of renting multiple VMs from a cloud
  provider, nested KVM lets you rent one large enough "guest
  hypervisor" (level-1 guest). This in turn allows you to create
  multiple nested guests (level-2 guests), running different OSes, on
  which you can develop and test your software.

- Live migration of "guest hypervisors" and their nested guests, for
  load balancing, disaster recovery, etc.

- VM image creation tools (e.g. ``virt-install``) often run their own
  VM, and users expect these to work inside a VM.

- Some OSes use virtualization internally for security (e.g. to let
  applications run safely in isolation).


Enabling "nested" (x86)
-----------------------

From Linux kernel v4.19 onwards, the ``nested`` KVM parameter is enabled
by default for Intel and AMD. (Though your Linux distribution might
override this default.)

If you are running a Linux kernel older than v4.19, enable nesting by
setting the ``nested`` KVM module parameter to ``Y`` or ``1``. To
persist this setting across reboots, add it to a config file, as shown
below:

1. On the bare metal host (L0), list the kernel modules and ensure that
   the KVM modules are loaded::

     $ lsmod | grep -i kvm
     kvm_intel             133627  0
     kvm                   435079  1 kvm_intel

2. Show information for the ``kvm_intel`` module::

     $ modinfo kvm_intel | grep -i nested
     parm:           nested:bool

3. For the nested KVM configuration to persist across reboots, place the
   below in ``/etc/modprobe.d/kvm_intel.conf`` (create the file if it
   doesn't exist)::

     $ cat /etc/modprobe.d/kvm_intel.conf
     options kvm-intel nested=y

4. Unload and re-load the KVM Intel module::

     $ sudo rmmod kvm-intel
     $ sudo modprobe kvm-intel

5. Verify that the ``nested`` parameter for KVM is enabled::

     $ cat /sys/module/kvm_intel/parameters/nested
     Y

For AMD hosts, the process is the same as above, except that the module
name is ``kvm-amd``.
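
On an AMD host, the persistent configuration would look analogous; a
sketch (the file name here is illustrative)::

    $ cat /etc/modprobe.d/kvm_amd.conf
    options kvm-amd nested=1

The setting can then be verified via
``/sys/module/kvm_amd/parameters/nested``.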


Additional nested-related kernel parameters (x86)
-------------------------------------------------

If your hardware is sufficiently advanced (an Intel Haswell processor or
later, which has newer hardware virt extensions), the following
additional features will also be enabled by default: "Shadow VMCS
(Virtual Machine Control Structure)" and APIC Virtualization on your
bare metal host (L0). Parameters for Intel hosts::

    $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
    Y

    $ cat /sys/module/kvm_intel/parameters/enable_apicv
    Y

    $ cat /sys/module/kvm_intel/parameters/ept
    Y

.. note:: If you suspect your L2 (i.e. nested guest) is running slower,
          ensure the above are enabled (particularly
          ``enable_shadow_vmcs`` and ``ept``).
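
To check all of these in one pass, a short shell loop helps; a minimal
sketch, assuming an Intel host (swap ``kvm_intel`` for ``kvm_amd`` on
AMD; it degrades gracefully when a parameter file is absent):

```shell
# Collect the nesting-related kvm_intel parameters in one pass.
report=""
for p in nested enable_shadow_vmcs enable_apicv ept; do
    f=/sys/module/kvm_intel/parameters/$p
    if [ -r "$f" ]; then
        v=$(cat "$f")          # typically Y/N or 1/0
    else
        v="(not available)"    # module not loaded, or parameter absent
    fi
    report="$report$p: $v
"
done
printf '%s' "$report"
```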


Starting a nested guest (x86)
-----------------------------

Once your bare metal host (L0) is configured for nesting, you should be
able to start an L1 guest with::

    $ qemu-kvm -cpu host [...]

The above will pass through the host CPU's capabilities as-is to the
guest; or, for better live migration compatibility, use a named CPU
model supported by QEMU, e.g.::

    $ qemu-kvm -cpu Haswell-noTSX-IBRS,vmx=on

Either way, the guest hypervisor will subsequently be capable of
running a nested guest with accelerated KVM.
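
If the L1 guest is managed with libvirt rather than a raw QEMU command
line, the equivalent of ``-cpu host`` is host passthrough in the domain
XML::

    <cpu mode='host-passthrough'/>

and a named CPU model corresponds to ``<cpu mode='custom'>`` with an
explicit ``<model>`` element.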


Enabling "nested" (s390x)
-------------------------

1. On the host hypervisor (L0), enable the ``nested`` parameter on
   s390x::

     $ rmmod kvm
     $ modprobe kvm nested=1

.. note:: On s390x, the kernel parameter ``hpage`` is mutually exclusive
          with the ``nested`` parameter -- i.e. to be able to enable
          ``nested``, the ``hpage`` parameter *must* be disabled.

2. The guest hypervisor (L1) must be provided with the ``sie`` CPU
   feature -- with QEMU, this can be done by using "host passthrough"
   (via the command-line ``-cpu host``).

3. Now the KVM module can be loaded in the L1 (guest hypervisor)::

     $ modprobe kvm
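
As on x86, the ``nested`` setting can be made persistent across reboots
with a modprobe config file; a sketch (the file name is illustrative)::

    $ cat /etc/modprobe.d/kvm.conf
    options kvm nested=1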


Live migration with nested KVM
------------------------------

Migrating an L1 guest, with a *live* nested guest in it, to another
bare metal host works as of Linux kernel 5.3 and QEMU 4.2.0 for
Intel x86 systems, and even on older versions for s390x.

On AMD systems, once an L1 guest has started an L2 guest, the L1 guest
should no longer be migrated or saved (refer to QEMU documentation on
"savevm"/"loadvm") until the L2 guest shuts down. Attempting to migrate
or save-and-load an L1 guest while an L2 guest is running will result in
undefined behavior. You might see a ``kernel BUG!`` entry in ``dmesg``, a
kernel 'oops', or an outright kernel panic. Such a migrated or loaded L1
guest can no longer be considered stable or secure, and must be restarted.
Migrating an L1 guest merely configured to support nesting, while not
actually running L2 guests, is expected to function normally even on AMD
systems; the restriction above applies only once an L2 guest is running.

Migrating an L2 guest is always expected to succeed, so all the following
scenarios should work even on AMD systems:

- Migrating a nested guest (L2) to another L1 guest on the *same* bare
  metal host.

- Migrating a nested guest (L2) to another L1 guest on a *different*
  bare metal host.

- Migrating a nested guest (L2) to a bare metal host.
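
For instance, with libvirt, migrating a running L2 from its L1 to
another hypervisor could look like the below (the domain and host names
are hypothetical)::

    # run inside the source L1 (guest hypervisor)
    $ virsh migrate --live l2-guest qemu+ssh://destination.example.com/system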

Reporting bugs from nested setups
---------------------------------

Debugging "nested" problems can involve sifting through log files across
L0, L1 and L2; this can result in tedious back-and-forth between the bug
reporter and the bug fixer.

- Mention that you are in a "nested" setup. If you are running any kind
  of "nesting" at all, say so. Unfortunately, this needs to be called
  out because when reporting bugs, people tend to forget to even
  *mention* that they're using nested virtualization.

- Ensure you are actually running KVM on KVM. Sometimes people do not
  have KVM enabled for their guest hypervisor (L1), which means they are
  running with pure emulation, or what QEMU calls "TCG", while thinking
  they're running nested KVM -- thus confusing "nested virt" (which
  could also mean QEMU on KVM) with "nested KVM" (KVM on KVM).
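
One quick way to verify this from inside the L1 guest is to check for
the ``/dev/kvm`` device node and for the CPU virtualization flags; a
minimal sketch:

```shell
# Run inside the guest hypervisor (L1). /dev/kvm exists only when the
# KVM module loaded successfully; vmx/svm in /proc/cpuinfo shows that
# virtualization extensions were exposed to this guest at all.
if [ -c /dev/kvm ] && grep -qw -E 'vmx|svm' /proc/cpuinfo; then
    status="KVM acceleration available in this guest"
else
    status="no KVM here; QEMU would fall back to TCG"
fi
echo "$status"
```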

Information to collect (generic)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following is not an exhaustive list, but a very good starting point:

- Kernel, libvirt, and QEMU version from L0

- Kernel, libvirt, and QEMU version from L1

- QEMU command-line of L1 -- when using libvirt, you'll find it here:
  ``/var/log/libvirt/qemu/instance.log``

- QEMU command-line of L2 -- as above, when using libvirt, get the
  complete libvirt-generated QEMU command-line

- ``cat /proc/cpuinfo`` from L0

- ``cat /proc/cpuinfo`` from L1

- ``lscpu`` from L0

- ``lscpu`` from L1

- Full ``dmesg`` output from L0

- Full ``dmesg`` output from L1
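
The generic items above can be gathered in one go with a small helper
script, run once on L0 and once on L1; this is only a sketch (the report
file name and the QEMU binary name are assumptions; adjust for your
distribution):

```shell
#!/bin/sh
# Collect generic nested-KVM debug info into a single report file.
out="nested-debug-$(uname -n).txt"
{
    echo "== kernel version ==";  uname -a
    echo "== qemu version ==";    qemu-system-x86_64 --version 2>/dev/null \
                                      || echo "qemu not found"
    echo "== libvirt version =="; libvirtd --version 2>/dev/null \
                                      || echo "libvirtd not found"
    echo "== cpuinfo ==";         cat /proc/cpuinfo
    echo "== lscpu ==";           lscpu 2>/dev/null || echo "lscpu not found"
    echo "== dmesg ==";           dmesg 2>/dev/null || echo "dmesg needs root"
} > "$out"
echo "wrote $out"
```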

x86-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Both the below commands, ``x86info`` and ``dmidecode``, should be
available on most Linux distributions with the same name:

- Output of ``x86info -a`` from L0

- Output of ``x86info -a`` from L1

- Output of ``dmidecode`` from L0

- Output of ``dmidecode`` from L1

s390x-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Along with the generic details mentioned earlier, the below is also
recommended:

- ``/proc/sysinfo`` from L1; this will also include the info from L0