Glossary of Virtualization Terms

For those of you not (yet) familiar with VGA passthrough, here are some common terms used in this how-to:

  • host: This is the operating system installed on the PC, in our case Linux. This operating system “hosts” the VM (see below).
  • VM: VM is the abbreviation of Virtual Machine that runs on a host (see above). A virtual machine is an emulation of a real computer that runs an operating system and application software. In this tutorial we use full virtualization to install an unmodified Windows 10 operating system inside a VM. This operating system running in a VM is often referred to as “guest OS”, the VM simply as “guest”.
  • Gaming VM: This colloquialism refers to a virtual machine capable of running GPU-intensive games or applications. It is often synonymous with a “Windows gaming VM” for running Microsoft Windows as a guest (VM) on a Linux host. In practice, the gaming VM employs a hypervisor (KVM or Xen) combined with GPU passthrough to achieve bare-metal-like graphics performance.
  • hardware-assisted virtualization: Modern CPU architectures include features to support / assist virtualization in hardware, that is, in the CPU. KVM, Xen, VMware and other virtualization platforms are able to utilize this hardware to improve performance. In our tutorial, hardware-assisted virtualization features (VT-x and VT-d on Intel platforms, or AMD-V – also called SVM – and AMD-Vi on AMD platforms) are a basic requirement. Before purchasing a CPU and/or motherboard, make sure these features are supported! See also VFIO below.
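To quickly verify on an existing Linux system whether the CPU advertises these features, you can check the flags in /proc/cpuinfo. A minimal sketch (the vmx/svm flag names are real; note that VT-d/AMD-Vi must additionally be enabled in the BIOS/UEFI):

```shell
# Check for the CPU virtualization flags: vmx = Intel VT-x, svm = AMD-V.
if grep -q -E 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    virt_status="supported"
else
    virt_status="not detected"   # check BIOS/UEFI settings or CPU specifications
fi
echo "hardware-assisted virtualization: $virt_status"
```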
  • hypervisor or virtualization platform: A hypervisor is (usually) a software platform that creates and runs virtual machines (VM). The hypervisor runs on a computer called the host. A distinction is made between type 1 and type 2 hypervisors.
  • type 1 hypervisor: also called bare-metal hypervisor, this hypervisor runs directly on the hardware and controls both the hardware and the guest operating system(s). Type 1 hypervisors include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, the Citrix XenServer, Microsoft Hyper-V and VMware ESX/ESXi.
  • type 2 hypervisor: this hypervisor runs on a conventional operating system, just like any other computer program. Type 2 hypervisors include VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU.
  • KVM: Stands for Kernel-based Virtual Machine. KVM effectively converts the host operating system into a type 1 hypervisor. However, since the host OS still functions as a regular Linux distribution, KVM may also be categorized as a type 2 hypervisor.
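A quick sanity check, assuming a typical Linux host: if /dev/kvm exists, the kvm module and its CPU-specific counterpart (kvm_intel or kvm_amd) are loaded and usable:

```shell
# Check whether KVM is available on this host.
if [ -e /dev/kvm ]; then
    kvm_status="ready"        # kvm plus kvm_intel or kvm_amd are loaded
else
    kvm_status="unavailable"  # try: sudo modprobe kvm_intel  (or kvm_amd)
fi
echo "KVM: $kvm_status"
```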
  • PCI passthrough (or passthru): This is a technique which allows the hypervisor to pass through a PCI device to the VM. The guest OS – in our case Windows 10 – then uses its own hardware driver to access the device directly.
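To see which PCI devices are candidates for passthrough, and to find the [vendor:device] IDs you will need later, lspci is the usual tool. A sketch (the exact output depends on your hardware):

```shell
# List VGA/3D-class PCI devices with their [vendor:device] IDs; these IDs
# identify the card when binding it to the vfio-pci driver for passthrough.
gpu_report=$(lspci -nn 2>/dev/null | grep -i -E 'vga|3d controller')
gpu_report="${gpu_report:-no VGA-class PCI device found (is pciutils installed?)}"
echo "$gpu_report"
```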
  • VGA passthrough or GPU passthrough: This is a specialized form of PCI passthrough for graphics cards / GPU. These devices are more complex and so is the interaction with them. There are two forms of VGA passthrough:
    • VGA passthrough to GPU as primary adapter: The VM boots and uses the passed-through graphics card as its primary or only graphics adapter. This tutorial uses primary passthrough in combination with a special UEFI BIOS from the OVMF project. This method is also referred to as “VGA passthrough via OVMF”, “legacy-free VGA passthrough”, or “UEFI VGA passthrough”.
    • VGA passthrough to GPU as secondary adapter: Also referred to as legacy VGA passthrough. The VM first boots using the emulated Cirrus adapter under SeaBIOS as the primary graphics adapter (you see the BIOS screen and the Windows boot logo in a window on the Linux desktop). After installation of the graphics driver under Windows and a subsequent reboot of the VM, the VM will start with SeaBIOS and then switch to the secondary graphics adapter at some stage of its boot process.
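As an illustration only, a primary (OVMF) passthrough setup might be started with a QEMU command along these lines. All paths, the PCI address 01:00.0, and the resource sizes are placeholders for your own system; the OVMF_CODE.fd firmware image comes from your distribution’s ovmf package:

```shell
# Sketch of a QEMU invocation for UEFI (OVMF) VGA passthrough.
# 01:00.0, file paths, CPU/memory sizes: substitute your own values.
qemu-system-x86_64 \
    -enable-kvm \
    -machine q35 \
    -cpu host \
    -smp 4 -m 8G \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
    -device vfio-pci,host=01:00.0,multifunction=on \
    -vga none -nographic \
    -drive file=windows10.img,format=raw
```

With -vga none, no emulated adapter is created; the passed-through GPU is the VM’s only graphics device and its output appears on the monitor attached to that card.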
  • IOMMU: The input–output memory management unit (IOMMU) is a memory management unit (MMU) that connects a direct-memory-access–capable (DMA-capable) I/O bus to the main memory. The IOMMU maps device-visible virtual addresses (also called device addresses or I/O addresses in this context) to physical addresses. Intel designates IOMMU technology as Virtualization Technology for Directed I/O, abbreviated VT-d. The virtualization solution described in this tutorial relies on IOMMU. See also VFIO below.
  • IOMMU group: IOMMU provides isolation of the devices, but not always at the granularity of a single device. Devices are collected in IOMMU groups – each group providing isolation of its devices from other IOMMU groups. Some CPU / motherboard chipsets provide good isolation, establishing an IOMMU group for each distinct device, whereas others provide little isolation, grouping many different devices into a single IOMMU group. In the latter case, the ACS (Access Control Services) kernel patch may come to the rescue – see below.
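To inspect how your chipset groups devices, you can walk /sys/kernel/iommu_groups. A sketch (an empty listing usually means the IOMMU is disabled in the BIOS/UEFI, or the intel_iommu=on / amd_iommu=on kernel option is missing):

```shell
# Print each IOMMU group together with the PCI devices it contains.
groups_found=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue            # glob matched nothing
    groups_found=1
    group=${dev%/devices/*}              # strip trailing /devices/<addr>
    group=${group##*/}                   # keep only the group number
    echo "IOMMU group $group: ${dev##*/}"
done
[ "$groups_found" -eq 1 ] || echo "no IOMMU groups found (IOMMU disabled?)"
```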
  • Access Control Services (ACS): PCI Express allows direct peer-to-peer transactions between devices on the same interconnect fabric. ACS is used to control which devices are allowed to communicate directly with each other, thus avoiding the improper routing of packets; devices that cannot be isolated from one another in this way end up in the same IOMMU group. For PCI/VGA passthrough to work, packets from PCIe devices must go through the IOMMU for mapping between I/O virtual addresses and physical memory addresses.
  • QEMU: Short for “Quick Emulator”, QEMU provides a hypervisor to virtualize hardware. Without KVM, it runs as a single process and emulates a complete computer system for the VM. Together with KVM, it supports hardware-assisted virtualization, which can greatly improve performance.
  • hugepages: A special application of “memory pages” or “virtual pages”. Instead of managing thousands, perhaps millions, of tiny 4-kilobyte pages, hugepages allow the definition of larger pages (e.g. 2 MB, or 1 GB “gigantic” pages on x86-64), reducing the page table size and thus increasing lookup speed. Hugepages are not a requirement for VGA passthrough, but they can help improve the performance of virtual machines.
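To see whether hugepages are configured on your host, /proc/meminfo reports the current state. A sketch (the reservation command, shown as a comment, is only an example and requires root):

```shell
# Show the current hugepage configuration.
hp_info=$(grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo \
          2>/dev/null || echo "hugepages not reported on this system")
echo "$hp_info"
# Example reservation (root only): 2048 pages x 2 MB = 4 GB
# echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
```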
  • VFIO: The abbreviation for Virtual Function I/O. The VFIO driver is an IOMMU/device-agnostic framework for exposing direct device access to userspace in a secure, IOMMU-protected environment. In other words, it allows safe, non-privileged userspace drivers to access the hardware. We use VFIO to provide fast, efficient access to host devices.
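As an illustration, the two common ways to attach a GPU to the vfio-pci driver at boot look like this. The PCI IDs shown (10de:1b81 and 10de:10f0) are placeholders; replace them with the [vendor:device] pairs that lspci -nn reports for your own card and its HDMI audio function:

```shell
# Sketch: binding a GPU to vfio-pci at boot. IDs below are placeholders.

# 1) Kernel command line (e.g. in /etc/default/grub, then run update-grub):
#    intel_iommu=on vfio-pci.ids=10de:1b81,10de:10f0

# 2) Modprobe configuration, e.g. /etc/modprobe.d/vfio.conf:
#    options vfio-pci ids=10de:1b81,10de:10f0
#    softdep nvidia pre: vfio-pci   # ensure vfio-pci claims the card first
```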
