Last updated: January 6, 2022
Introduction
I’ve already written a detailed tutorial on Windows 10 KVM VGA passthrough based on QEMU version 2.11. Years have passed, and recent distributions like Ubuntu 20.04, Linux Mint 20, or Manjaro come with QEMU 4.0, 4.2, or 5.1.
A lot has happened since version 2.11. QEMU 4.0 includes numerous changes and improvements such as trim support in the virtio-blk driver, pcie-root-port with PCIe 4.0 support (with Q35-4.0 machine type), as well as improved audio.
The downside is that with these improvements came changes in the QEMU syntax. Most tutorials use the “Virtual Machine Manager” for configuration within a convenient GUI. Recent versions of virt-manager (that’s the name of the package) include an XML editor. Unfortunately Virtual Machine Manager, a front-end to “libvirt”, doesn’t have much documentation.
The tutorial below is in part inspired by Bryan’s excellent GPU passthrough tutorial. It is by no means a rewrite, however, as there are major differences. I suggest having a look at both and deciding what’s best for you.
Another great source is Virtual machines with PCI passthrough on Ubuntu 20.04 by Mathias.
Disclaimer
All information and data provided in this tutorial is for informational purposes only. I make no representations as to accuracy, completeness, recentness, suitability, or validity of any information in this tutorial and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its use. All information is provided on an as-is basis.
You are aware that by following this tutorial you may risk the loss of data, or may render your computer inoperable. Backup your computer drives! Make sure that important documents/data are accessible elsewhere in case your computer becomes inoperable.
Documentation
Here is an overview of the documentation for the tools we are going to use:
- https://libvirt.org/ – libvirt is the toolkit used by Virtual Machine Manager. This is where you look for help when you need to edit the XML configuration (and YES, you will have to!). Here are some specific links:
- XML Format – objects such as domains, networks, storage, etc. are configured using XML documents which are described here
- KVM/QEMU hypervisor driver – example qemu/kvm domain configurations, QEMU command passthrough, and converting between QEMU arguments and domain XML in both directions (you may want that).
- libvirt wiki – community contributed documentation. A good resource for solutions to specific tasks and problems. Take for example networking.
- Releases – a list of libvirt releases, along with an overview of the changes.
- virt tools blog planet – a place to check if you want to dig deeper into virtualization and what’s cooking. This blog is maintained by the developers. Under “Subscriptions” is a list of individual blogs by developers that provides further updates and information.
- Discord VFIO channel invite link – very up-to-date forum/channel on VFIO/passthrough. Requires some basic understanding of VFIO and virtualization.
- https://www.qemu.org/ – the main QEMU homepage contains a list of releases and documents the changes.
- QEMU – this is the latest documentation on QEMU.
- QEMU System Emulation User’s Guide – That’s the documentation you need in order to start your virtual machine from the command line or via script. Look here if you want to use a script!!!
- QEMU Tools Guide – documentation on Qemu tools like the qemu-img disk image utility to create virtual disk drives.
- QEMU Guest Agent Protocol Reference – if you want to experiment with the QEMU guest agent.
- The VFIO and GPU Passthrough Beginner’s Resource – a list of resources for VGA passthrough.
Hardware Configuration
The following tutorial is based on the hardware listed below. It will likely work with other AMD Ryzen processors, with other AMD families of processors, and, with a minor modification, it should work with Intel processors too.
Here is the hardware configuration:
- AMD Ryzen 9 3900X CPU
- Gigabyte X570 Aorus Pro motherboard, upgraded to latest BIOS F12e
- 64 GB RAM
- Samsung SSD 970 EVO Plus 1TB NVMe drive for guests, set up as LVM drive
- Samsung SSD 970 EVO Plus 500 GB NVMe drive for the host
- Gigabyte Nvidia Geforce GTX 970 GPU for the guest
- PNY Nvidia Quadro 2000 card for host, updated to support UEFI using this BIOS (see also here)
- A bunch of HDD drives using LVM – around 11 TB internal storage
- Asus Xonar Essence STX PCIe sound card.
Software
For the host:
- Pop!_OS Linux based on Ubuntu 19.10 – note that I have updated the tutorial to work with Ubuntu 20.04 based distributions as well
- QEMU 4.0.0 (or 4.2 or 5.0)
- libvirt 5.4.0 (or newer)
- Virtual Machine Manager (virt-manager) 2.2.1 or newer
- Linux kernel 5.3.0 and newer up to 5.14 as of this update
The guest OS:
- Microsoft Windows 10 Pro 64-bit, release 1909 or later – you need a valid license to install and use this software!
VM Resources
Before we start to set up a virtual machine, we need to plan the resources we want to allocate to our Windows VM. My specific use case is photo processing using Adobe Lightroom, Photoshop and other tools, as well as video editing. This is why I decided to give the Windows VM all the resources I can while leaving enough RAM to the host to avoid memory swap. Here is a breakdown of my VM resources:
- Windows uses an LVM volume on a 1 TB NVMe drive (the other option would be to install Windows directly on the drive and pass it through to the VM)
- 48 GB RAM (out of a total of 64GB) backed by hugepages
- 12 cores / 24 threads = 24 vCPUs – yes, I’m giving everything to the VM.
Most of you will NOT have the same requirements. For this tutorial I am going to assign the following resources to the Windows VM:
- QCOW2 file or whatever you prefer as storage to install the Windows VM
- 16 GB RAM backed by hugepages
- 6 cores / 12 threads = 12 vCPUs.
Of course, you can and should adjust these resources to your requirements.
Tutorial
Before you Start
I suggest you read my post Upgrading my PC to an AMD Ryzen 9 3900X System. It explains some of the hardware/software choices and offers a migration plan and checklist that can be helpful.
Most importantly, it describes some of the pitfalls. My biggest mistake was setting out with a Gnome desktop that I really don’t like. That and some other shortcomings, bugs, and doubtful choices eventually made me dump Pop!_OS.
Lesson learned: Use the distribution and desktop you like and are comfortable with. If you are relatively new to Linux and used to Microsoft Windows, then Linux Mint is an easy entry point.
Right now I use Manjaro Linux with XFCE desktop. It is a rolling “bleeding-edge” distro and as such it comes with the risk that updates break things (which happened to me a few times).
Note: This tutorial was originally written for Pop!_OS 19.10 with specific steps for that distribution. I added instructions for Ubuntu 19.10 and 20.04 based distributions (like Linux Mint) to make the tutorial more useful.
Setting up the Host for VGA Passthrough
We need to add or modify some settings for the host before we can start with the actual VM installation.
I strongly recommend installing an SSH server on the host before trying to pass through a graphics card. There is a chance that you end up with a blank display. With an SSH server configured and enabled, you can reverse the settings by accessing your host from another computer on the network. Make sure that SSH access works, that you can log in, and that you can run a root shell (sudo -i).
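If you don’t have SSH set up yet, a minimal setup on an Ubuntu-based host looks like this (the user name and IP address are placeholders for your own):
sudo apt install openssh-server
sudo systemctl enable --now ssh
# from another machine on your LAN:
ssh myusername@192.168.1.100
sudo -i    # confirm that you can get a root shell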
Software Packages and virtio Drivers
Install the required packages on your Linux host:
sudo apt install qemu-kvm qemu-utils libvirt-daemon-system libvirt-clients virt-manager ovmf
Download the Windows 10 ISO (you’ll need a valid license to install):
https://www.microsoft.com/en-us/software-download/windows10ISO
Download the virtio driver ISO to be used with the Windows installation from https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md. Below are the direct links to the ISO images:
Latest VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
Stable VIRTIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
I chose the latest driver ISO.
Make your user a member of the kvm and libvirt groups (this may not be strictly necessary):
sudo usermod -a -G kvm myusername
sudo usermod -a -G libvirt myusername
BIOS Settings
Reboot the PC and enter the BIOS – usually via DEL, F2, F12 or whatever your motherboard manual or BIOS screen tells you. You must NOT skip this step. The screenshots below show the BIOS setup procedure for the Gigabyte X570 Aorus Pro motherboard. YMMV.
Go to the “Advanced Mode” screen. Select the “Tweaker” tab and enter “Advanced CPU Settings”:

Enable “SVM Mode”:




Select the “Settings” tab and go to “Miscellaneous”:




Enable “IOMMU”. (At this point you may want to enter the “Trusted Computing” sub-menu and disable that nonsense.):
Select the “Settings” tab and go to “AMD CBS”. Enable “ACS” and “AER”:
Select the “System Info.” tab and check your “BIOS Version” – it should be F11 or newer. Older versions, especially those prior to F10, are broken or no good for VFIO passthrough. If you do have an older BIOS version, read the instructions on how to flash it (by using a FAT16 or FAT32 USB drive with the BIOS file on it – beware of naming restrictions):




Assuming you have two graphics cards and wish to use the primary GPU in PCI slot 1 for Windows, and the second GPU in slot 2 (or another slot) for your Linux host, change the initial display output (note: often motherboards do not support this feature, or require you to enable legacy mode).
In most cases you want to use your high-performance graphics card in PCIe slot 1. Most motherboards support x16 speed on the first slot, and only x8 or less on the second. When both slots are populated, as in our case, the PCIe bus speed may drop by half to x8 and x4 respectively, depending on the board and chipset. However, even the fastest graphics cards should have enough bandwidth in a PCIe 3.0 x8 slot.
Another reason for placing the more powerful GPU in slot 1 is ventilation/cooling. You will have to make sure that there is a good airflow around the GPU to avoid overheating and throttling.
Important: If you use different vendors for your GPUs, and have not yet installed the graphics driver for your (new) host GPU, do the following:
- Don’t change the Initial Display Output setting (yet).
- Boot into Linux and disable the proprietary graphics driver / select the open source driver.
- Reboot and change the setting below.
- Once Linux boots, it should discover the new graphics card and use the appropriate open source driver. You can then select a proprietary driver via the driver installer or package manager of the distribution.
Select the “Settings” tab and go to “IO Ports”. Now select the Initial Display Output to use when the PC boots. Unless your setup is different, select “PCIe 2 Slot” to use the 2nd GPU for Linux:
When done with the adjustments, save & exit and reboot.
While Intel CPUs require a kernel option to activate IOMMU (see below), AMD CPUs should work without kernel options. For available kernel options, search https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html for “amd”, “intel”, or “iommu”. Optionally we can reserve hugepages here. Choose depending on your Linux distribution:
- Pop!_OS 19.10 and later uses the systemd bootloader. There is a tool called “kernelstub” that allows us to add/modify kernel parameters for systemd. To add a new entry, enter in a terminal window:
sudo kernelstub -a "hugepages=8192"
- The vast majority of distributions like Ubuntu, Linux Mint, Manjaro, Fedora etc. use the GRUB2 bootloader. Edit the
/etc/default/grub
file as follows:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash hugepages=8192"
Here is a short explanation of these parameters:
intel_iommu=on enables IOMMU on Intel CPUs. It’s not required for AMD CPUs.
amd_iommu=pt is optional. It tells the kernel to bypass DMA translation to the memory, which may improve performance. For Intel CPU use intel_iommu=pt.
hugepages=8192 tells the kernel to set aside 8192 static hugepages. Important: setting static hugepages is optional! My recommendation: don’t define static hugepages (unless they give you a measurable benefit). I will nevertheless describe how to use them.
On this system, each hugepage is 2 Megabytes in size, so 8192 hugepages correspond to 16 Gigabytes of RAM. Once static hugepages are reserved at boot time, that memory can no longer be claimed by the host. Adjust these numbers to fit the memory you want to assign to your VM. Tip: Use multiples of 1024!
Note: Different platforms can have different hugepage sizes. On this system you can define 2 Megabyte or 1 Gigabyte hugepages, or a mix of both. You can see whether your CPU supports 1 GB hugepages by looking for the pdpe1gb CPU flag:
lscpu | grep pdpe1gb
Note about hugepages: There are three types of hugepages – transparent (THP), static (SHP), and dynamic hugepages. Transparent hugepages are used automatically by QEMU/libvirt. To use dynamic hugepages, see the link to Bryan’s tutorial at the beginning.
A comparison of static hugepages versus transparent hugepages (the ones used by default) can be found here. As so often, the Arch Linux wiki offers valuable information on hugepages.
After you updated the /etc/default/grub file, execute as root:
update-grub
Now reboot again!
Let’s see if it worked. Open a terminal and enter:
dmesg | grep -i -e amd-vi -e dmar
or try this:
journalctl -b | grep -i -e amd-vi -e dmar
user@mypc:~$ journalctl -b | grep -i -e amd-vi -e dmar
[    3.697093] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    3.702890] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    3.702890] pci 0000:00:00.2: AMD-Vi: Extended features (0x58f77ef22294ade):
[    3.702892] AMD-Vi: Interrupt remapping enabled
[    3.702893] AMD-Vi: Virtual APIC enabled
[    3.702893] AMD-Vi: X2APIC enabled
[    3.702983] AMD-Vi: Lazy IO/TLB flushing enabled
AMD-Vi and IOMMU are now enabled and supported. On Intel machines you’ll see DMAR messages instead.
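If you reserved static hugepages, you can verify the reservation as well – with hugepages=8192 you should see a HugePages_Total of 8192 (and a Hugepagesize of 2048 kB):
grep Huge /proc/meminfo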
Bind Passthrough GPU to VFIO Driver
Note: If you are passing through a newer Nvidia series 1000, 2000, or 3000 GPU, chances are you need to pass an edited VBIOS file (ROM) to the virtual machine. The process is described in “Passing Through a Nvidia RTX 2070 Super GPU“.
In this tutorial I use two separate GPUs: one for the host and a second one for the guest.
Hardware tip: The Gigabyte X570 Aorus Pro motherboard lets you select the “Initial Display Output”, i.e. the GPU used by the host. Another nice feature is “PCIeX16 Bifurcation” to determine how the bandwidth of the PCIeX16 slot is divided. (Some motherboards require the host GPU to be in slot 1.)
Since I’m using three PCIe devices (two GPUs and a sound card), I’ve divided the PCIe bandwidth into x8/x4/x4. PCIe slot 1 with x8 bandwidth holds the passthrough GPU, and PCIe slot 2 with x4 bandwidth holds the host GPU.
We need to determine the PCI bus IDs for our graphics cards. In a terminal window, enter:
lspci | grep VGA
user@mypc:~$ lspci | grep VGA
0b:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
0c:00.0 VGA compatible controller: NVIDIA Corporation GF106GL [Quadro 2000] (rev a1)
The GPU to pass through (the GeForce GTX 970) is on bus 0b:00.0. Most GPUs have additional devices onboard, such as an audio device, USB, etc. We must pass all of these devices through to the guest. To determine the devices associated with our passthrough GPU, use the following command:
lspci -nn | grep 0b:00.
user@mypc:~$ lspci -nn | grep 0b:00.
0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
0b:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
In this example, the GPU has only one additional device, an audio device on 0b:00.1.
All devices within the same IOMMU group must be passed to the VM! You find more information on that – as well as exceptions – in my IOMMU Groups – What You Need to Consider post.
Let’s have a look at our IOMMU groups and how PCI devices are split into these groups:
for a in /sys/kernel/iommu_groups/*; do find $a -type l; done | sort --version-sort
...
/sys/kernel/iommu_groups/26/devices/0000:06:04.0
/sys/kernel/iommu_groups/27/devices/0000:07:00.0
/sys/kernel/iommu_groups/28/devices/0000:0b:00.0
/sys/kernel/iommu_groups/28/devices/0000:0b:00.1
/sys/kernel/iommu_groups/29/devices/0000:0c:00.0
/sys/kernel/iommu_groups/29/devices/0000:0c:00.1
...
The graphics card and its two devices (VGA and audio) are within the same IOMMU group 28, and the group contains no additional devices. Perfect!
Tip: Copy the PCI bus IDs for your graphics card – 0000:0b:00.0 and 0000:0b:00.1 in the example above – into a .txt file since we need them in the next step!
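If you prefer a listing that also shows the device names, the following helper script is a sketch that prints every PCI device together with its IOMMU group (it only needs lspci; pipe the output through sort -V if you want it ordered):
#!/bin/bash
# Print each PCI device with its IOMMU group number and lspci description.
shopt -s nullglob
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    echo "IOMMU group ${group}: $(lspci -nns "${dev##*/}")"
done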
We want to make sure that the passthrough GPU binds to the VFIO driver when the PC boots. Below I’m describing two methods for binding the vfio-pci driver to the graphics card. While the first method works for old and new kernels, the second method doesn’t work with the new Ubuntu releases. For a more comprehensive overview, see my article on “Blacklisting Graphics Driver“. Expand the option that pertains to your Linux / kernel version:
Pop!_OS 20.04 / Ubuntu 20.04 / kernel 5.4+
Somebody must have had the feeling that VFIO passthrough was working too well and decided to break it. With Ubuntu 20.04 and kernel 5.4, the vfio-pci driver no longer comes as a module but is integrated into the kernel. If in doubt, check it:
grep -i vfio /boot/config-`uname -r`
or for all kernels on the system, simply:
grep -i vfio /boot/config*
/boot/config-5.4.0-26-generic:CONFIG_KVM_VFIO=y
/boot/config-5.4.0-26-generic:CONFIG_VFIO_IOMMU_TYPE1=y
/boot/config-5.4.0-26-generic:CONFIG_VFIO_VIRQFD=y
/boot/config-5.4.0-26-generic:CONFIG_VFIO=y
A “y” at the end means that the feature is part of the kernel, “m” denotes that we are dealing with a module.
For some inexplicable reason the driver-override script explained below doesn’t work anymore for newer kernels in Ubuntu releases. The solution is to bind the graphics card to the vfio-pci driver using the bootloader (systemd or grub).
We use the same lspci command as before, but this time we write down the PCI vendor ID and model ID given in square brackets, like [10de:13c2]. “10de” is the vendor ID for Nvidia; “13c2” denotes a specific model, in this case a GTX 970 GPU.
lspci -nn | grep 0b:00.
user@mypc:~$ lspci -nn | grep 0b:00.
0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
0b:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
Note that some modern GPUs have more than two devices (graphics and audio), so make sure to get all of them.
With this information in hand, we can update the bootloader. Windows 10 release 1803 or newer also requires the kvm.ignore_msrs=1 option, so we include it here (the following is one line):
sudo kernelstub -a "hugepages=8192 vfio_pci.ids=10de:13c2,10de:0fbb kvm.ignore_msrs=1"
Note: If your system uses grub2, edit the /etc/default/grub file and add the following to the “GRUB_CMDLINE_LINUX_DEFAULT” line (again one line):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash hugepages=8192 vfio_pci.ids=10de:13c2,10de:0fbb kvm.ignore_msrs=1"
Then run update-grub
and reboot the PC!
It’s time to install the script that will bind the passthrough GPU to the vfio-pci dummy driver:
sudo nano /etc/initramfs-tools/scripts/init-top/vfio-override.sh
and copy/paste the following script into the file:
#!/bin/sh
# Bind the passthrough GPU and its associated functions to the vfio-pci driver at boot.
DEVS="0000:0b:00.0 0000:0b:00.1"
for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done
modprobe -i vfio-pci
Important: Don’t forget to replace the 0000:0b:00.0 and 0000:0b:00.1 PCI bus IDs with the ones you determined for your passthrough GPU. The leading “0000” denotes the PCI domain; on some systems that may be different (e.g. 0001).
Make the vfio-override.sh file executable:
sudo chmod u+x /etc/initramfs-tools/scripts/init-top/vfio-override.sh
To load vfio and other required modules at boot, edit the /etc/initramfs-tools/modules file:
sudo nano /etc/initramfs-tools/modules
Copy the following to the end of the modules file (it’s important to keep the order):
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
vhost-net
Save and close the file.
Windows 10 releases 1803 and newer require the following option:
echo 'options kvm ignore_msrs=1' | sudo tee -a /etc/modprobe.d/kvm.conf
For the above changes to take effect, enter:
sudo update-initramfs -u
followed by:
sudo kernelstub
to update the boot entries in the EFI folder (this is one of the bugs of Pop!_OS).
Now reboot!
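After the reboot, verify that the passthrough GPU and its audio function are now bound to the vfio-pci driver – the output should contain a line “Kernel driver in use: vfio-pci” for each device (replace the bus IDs with yours):
lspci -nnk -s 0b:00.0
lspci -nnk -s 0b:00.1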
For more explanations and solutions for common issues, see my article mentioned above as well as “Explaining CSM, efifb=off, and Setting the Boot GPU Manually“.
Create Windows 10 VM Configuration using the Virtual Machine Manager GUI
Open the Virtual Machine Manager GUI.
Connect to QEMU/KVM. Then create a new Virtual Machine by clicking the top left icon (screen with a triangular start button and a star, or a big “+” on newer versions):




Select “Local install media…” and click [Forward]:




Browse to the location of your Windows 10 ISO and select. Tick “Automatically detect from the installation media…”. Then click [Forward]:




Specify the amount of memory to assign to the VM in megabytes. 8 GB should be the minimum for a Windows 10 VM. If you reserved 16 GB of hugepage memory earlier in this tutorial (that is, 8192 hugepages), change the “Memory” value to 16384:




The following screen allows you to choose from different options.
- If you want to pass through a disk (more specifically, a disk controller) to Windows, uncheck “Enable storage for this virtual machine“. At a later step you will be able to configure your passthrough storage device. This option comes in handy when you already have Windows installed on a disk, and your motherboard/CPU combination allows you to pass through the disk controller / storage device. The disk must be connected to a controller that has its own IOMMU group.
In my case, the Gigabyte X570 Aorus Pro / Ryzen 3900X combo has two NVMe slots, each with a controller in its own IOMMU group.
Note: IOMMU groupings in general are often determined by the motherboard BIOS. Sometimes a new BIOS can bring improvements, but don’t bet on it. There are examples where BIOS upgrades broke passthrough entirely.
- In all other cases, tick “Enable storage for this virtual machine“. You now have the following choices:
- “Create a disk image for the virtual machine” which will be located at the default location /var/lib/libvirt/images/. This freaks me out every time I think of it.
- “Select or create custom storage” to choose a more sane location. Use this option to select or create a storage pool and/or storage volume (for example Qcow2, RAW, or LVM).
Unless you pass through a disk/storage controller, select as shown below and click [Manage…]:




If you chose “Select or create custom storage”, you will be presented with the following screen:




You can select an existing storage pool (the “default” pool is the /var/lib/libvirt/images location mentioned above), or create a new pool at the location of your choice. To create a new storage pool, click the [+] button at the bottom left. In the following window, type a name for the new pool. Then select the “Type” from the drop-down menu. There is a long list of choices, but home users will most likely choose one of the following:
- dir: Filesystem Directory – select a directory in your Linux file system, for example /home/user/vm-storage. This choice of storage pool predetermines the options for your storage volume – either Qcow2 or RAW. Both create a file in your file system that holds the entire VM image. This is the easiest way to get started with virtualization and has more benefits than drawbacks. Performance is pretty good too.
- disk: Physical Disk Device – if you have a disk to spare for your VMs, you could choose this option. BUT: this should not be confused with passing through a disk or storage controller, where Windows will be able to use its own driver to directly access the storage device.
- fs: Pre-Formatted Block Device – like the first option, but a block device for storing Qcow2 and RAW images.
- logical: LVM Volume Group – this is my favorite, but it’s not everyone’s cup of tea. In terms of performance, it’s considered second only to passing through the disk controller. Volume groups and individual LV volumes can span multiple disks, have snapshot capability, and much more. LVM has a learning curve, and unfortunately there is no longer a GUI to manage logical volumes (the Gnome tool system-config-lvm was perfect for the job, but it’s discontinued 😥 ). Unless you are familiar with LVM, I cannot recommend it, despite its many benefits.
- zfs: ZFS Pool – integrates the file system with LVM capabilities, provides redundancy and includes protection against data corruption. In a way, ZFS replaces LVM and improves on it. However, it’s not for the faint-hearted. If you are familiar with ZFS, by all means go for it.
For most of us the choice will be option 1 – dir: Filesystem Directory. Make sure to select the Target Path of YOUR choice:




Once you have the storage pool set up, you need to configure the storage volume for your Windows 10 VM. 50 GB of storage is probably the minimum; if you plan to install software and games, consider 300 GB or more:




After you specified the name and capacity of your VM storage volume, click [Finish].
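If you prefer the command line, the same volume can also be created with qemu-img – a sketch, where the path, file name, and size are placeholders (and the directory should be part of a storage pool known to virt-manager):
qemu-img create -f qcow2 /home/user/vm-storage/win10.qcow2 300G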
In the Window below, tick the “Customize configuration before install” box. This step is crucial!
Specify the name of the VM. After that I selected a preconfigured bridge under “Network selection“. Bridging is the preferred network setup for wired connections:




Click [Finish] when done.
In the following “Overview” window, make sure that “Chipset: Q35” is selected.
Under “Firmware“, select the 64 bit UEFI firmware “UEFI x86_64: /usr/share/OVMF/OVMF_CODE.fd” as shown below. Click [Apply]:




Select the “CPUs” configurator in the left column to configure the number of CPUs the VM can have. I’ve selected 12 vCPUs out of a total of 24. Make sure both “Current allocation” and “Maximum allocation” have the same number of CPUs.
Untick “Copy host CPU configuration”. In the “Model:” field below, type “host-passthrough” or select from drop-down menu. There can be a substantial performance difference between “copy host CPU configuration” and “host-passthrough” – you want the latter.
However, for modern AMD CPUs you may try one of the EPYC options from the drop-down menu. For more on CPU configuration options, see “CPU model configuration for QEMU/KVM on x86 hosts” by Daniel P. Berrangé. I’m currently using EPYC-IBPB or host-passthrough with my Ryzen 3900X CPU.
Note for AMD Ryzen and EPYC users: There are predefined EPYC and EPYC-IBPB options in the drop-down menu that work well with AMD EPYC and Ryzen CPUs. You may want to try them out and see if they improve performance. What I’ve seen so far is that they can influence memory and CPU cache performance, but more recent kernels and QEMU versions may tip the balance towards host-passthrough.
Select “Topology” and specify “Sockets: 1”, “Cores: 6”, and “Threads: 2”. This gives our Windows VM 12 virtual CPUs (vCPUs). Each vCPU represents one thread. The AMD Ryzen 3900X has 12 cores and 24 threads in total, so I am assigning half of the CPU resources to the VM. Make sure you select the right numbers for your CPU:




Select “SATA Disk 1” and open “Advanced options”. Select “Disk bus: VirtIO“. Under “Performance options”, select “Cache mode: none“ and “IO mode: native” for best performance in most cases (later on you may want to experiment with the “threads” option too). If the drive is an SSD or NVMe drive, select “Discard mode: unmap”; otherwise leave the default setting:
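For reference, the resulting <disk> section in the domain XML should then look roughly like the sketch below (the source path and image type depend on the storage you created – shown here for a qcow2 file; virt-manager adds the PCI address line on its own):
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" cache="none" io="native" discard="unmap"/>
  <source file="/home/user/vm-storage/win10.qcow2"/>
  <target dev="vda" bus="virtio"/>
</disk>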




After clicking [Apply], the left column reads VirtIO Disk 1 to reflect the change:




Select “NIC:…” and choose “Device model: virtio” for improved network speed:




Select the “Sound ich9” device and make sure HDA (ICH9) is specified. Newer Windows 10 releases do not work with ICH6:




Click [Add Hardware] and select “Storage” at the top of the list. Choose “Select or create custom storage“, select the path to the virtio-win-…iso file, and select “Device type: CDROM device“:




Now comes the graphics card. Click [Add Hardware] and select “PCI Host Device“. Select the first entry of the VGA device you wish to pass through, in our case 0000:0B:00.0 (the GTX 970) and click [Finish]:




Repeat the last step for all devices associated with this GPU (that is, all devices in the same IOMMU group that must be passed through). In my case it’s only one more device – the audio device of the GPU – 0000:0B:00.1. Click [Finish] when done.
Important: If you use a newer Nvidia graphics card (like the GeForce 1000, 2000, or 3000 series), you most likely have to pass through a modified video BIOS. I have described this process in my separate tutorial “Passing Through a Nvidia RTX 2070 Super GPU“.




After you have configured your passthrough GPU as PCI host devices (modern GPUs often consist of four devices – graphics, audio, USB, and UCSI), you may need to add additional PCI devices to pass through, for example the disk controller of your Windows drive (see above, “Create a new virtual machine – Step 4 of 5”), a USB controller, or a sound card.
Note regarding the Gigabyte X570 Aorus Pro motherboard: I tried to pass through the USB host controller at 0e:00.3 (IOMMU group 33 in my configuration), but that didn’t work with BIOS release F11 or F12e.
Good news:
The second USB host controller in IOMMU group 22 works!!! You need to pass through the following PCI host devices: 0000:08:00.0, 0000:08:00.1 and 0000:08:00.3.
For a nice little bash script that lists the USB bus and IOMMU group associations, see the post by Level1Techs forum member “two2”.
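A minimal version of such a script – a sketch along the lines of the well-known Arch wiki snippet – walks all PCI USB controllers and prints each USB bus number together with the controller’s PCI address and IOMMU group:
#!/bin/bash
# Map USB buses to their PCI USB controllers and IOMMU groups.
for usb_ctrl in /sys/bus/pci/devices/*/usb*; do
    pci_path=${usb_ctrl%/*}
    iommu_group=$(readlink "${pci_path}/iommu_group")
    echo "Bus $(cat "${usb_ctrl}/busnum") --> ${pci_path##*/} (IOMMU group ${iommu_group##*/})"
done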
Better news:
BIOS release F31j works fine. I can pass through either the chipset USB controller at 8:00.0, 8:00.1 and 8:00.3, or the CPU USB controller at 0e:00.3. Kudos to Gigabyte/AMD for fixing this bug!
For the GPU passthrough device you’ll get a screen like below, with the ROM BAR option ticked. Leave as is:




It’s time to configure the “Boot Options“. If you are going to install Windows onto a new storage device, select the boot order as shown in the screenshot below. If you have Windows installed on a drive and pass through that drive / controller to the VM, change the order so that the PCI device is the first in the list. In any case, make sure to tick “Enable boot menu“:




Unless you are passing through a USB host controller via PCI passthrough, you need to pass through your keyboard and mouse using the USB host device option.
Click [Add Hardware] and select “USB Host Device“. Select your mouse to pass through to the VM and click [Finish]. Repeat this step for your keyboard:




This is how far the GUI support goes. There are still a number of steps to perform before you can start the Windows VM!
Note: Different from the tutorial, I use a multi-device wireless mouse and keyboard that connect to two different USB receivers. I also pass through one of the two USB controllers as a PCI device. This allows me to switch the mouse and keyboard between host and guest at the press of a button. Unless the host freezes, I’m always in control.
Additional XML Configurations
The configuration capabilities of Virtual Machine Manager are limited. Luckily they gave us an integrated XML editor. In order to use it, “Enable XML editing” under “Edit->Preferences”.
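If you prefer working outside the GUI, the same XML can be viewed and edited with virsh – a quick sketch, assuming the VM is named “win10”:
virsh dumpxml win10     # print the current domain XML
virsh edit win10        # edit it in $EDITOR; libvirt validates the XML on save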
In the configuration window, select “Overview“, then click XML:




With its newer drivers, Nvidia has removed the VM check and allows you to pass through GeForce graphics cards without tricks. Ironically, while Nvidia removed its VM check, AMD graphics drivers may now require the following:
<vendor_id state="on" value="0123456789ab"/>
as well as:
<kvm>
<hidden state="on"/>
</kvm>
(Note: Professional Nvidia cards from the Quadro 2000 upwards did not require the “vendor_id” and “hidden state” entries to fool the Nvidia driver – they are specified by Nvidia to run in virtual environments. Recent Nvidia drivers allow you to pass through any modern Nvidia card.)
For better performance, enable the Hyper-V Enlightenments:
<relaxed state="on"/>
<vapic state="on"/>
<spinlocks state="on" retries="8191"/>
<vpindex state="on"/>
<runtime state="on"/>
<synic state="on"/>
<stimer state="on">
<direct state="on"/>
</stimer>
<reset state="on"/>
<frequencies state="on"/>
<reenlightenment state="on"/>
<tlbflush state="on"/>
<ipi state="on"/>
<evmcs state="off"/>
These options go into the <features> section, as shown below:
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state="on"/>
    <vapic state="on"/>
    <spinlocks state="on" retries="8191"/>
    <vendor_id state="on" value="0123456789ab"/>
    <vpindex state="on"/>
    <runtime state="on"/>
    <synic state="on"/>
    <stimer state="on">
      <direct state="on"/>
    </stimer>
    <reset state="on"/>
    <frequencies state="on"/>
    <reenlightenment state="on"/>
    <tlbflush state="on"/>
    <ipi state="on"/>
    <evmcs state="off"/>
  </hyperv>
  <kvm>
    <hidden state="on"/>
  </kvm>
  <vmport state="off"/>
</features>
Click [Apply].
Some versions of virt-manager enable memory ballooning, which is bad. To disable it, scroll down to near the end of the XML configuration and look for the “memballoon” entry. Change that entry as follows:
<memballoon model="none"/>
Click [Apply].
In order to use our predefined hugepages memory that we set up before, insert:
<memoryBacking>
<hugepages/>
</memoryBacking>
as shown below:
<memory unit="KiB">16777216</memory> <currentMemory unit="KiB">16777216</currentMemory> <memoryBacking> <hugepages/> </memoryBacking>
Verify that “memory” and “currentMemory” have the same values and are multiples of 1024 (16777216/1024=16384).
Note: If you didn’t configure hugepages earlier in the tutorial, skip the above step.
Click [Apply].
Let’s look once more at the CPU topology options. The Ryzen 3900X is a 12-core/24-thread CPU. In my own system I give the Windows VM all CPU cores/threads. That seems to work well with running Adobe Lightroom and Photoshop under Windows. However, if you run multiple VMs simultaneously, or if you use the VM for gaming, there are better strategies.
As mentioned before, in this tutorial we assign 1 socket (the system has only one CPU) and 6 cores / 12 threads to the VM. In addition, we specify the “topoext” option to let the guest know about the CPU architecture, and “cache passthrough” to pass the actual CPU cache information to the virtual machine:
<cpu mode="host-passthrough" check="none">
<topology sockets="1" cores="6" threads="2"/>
<cache mode='passthrough'/>
<feature policy="require" name="topoext"/>
</cpu>
Click [Apply].
Note: topoext is needed to detect multithreading in AMD CPUs.
Note 2: For the AMD Ryzen Zen architecture you can get good performance using “EPYC” as the CPU model (keep the <topology> and <feature> lines from the example above):
<cpu mode="custom" match="exact" check="none">
  <model fallback="allow">EPYC-IBPB</model>
  <topology sockets="1" cores="6" threads="2"/>
  <feature policy="require" name="topoext"/>
</cpu>
There are additional performance tweaks such as iothreads, CPU pinning, etc., but for now I’d like to focus on getting the VM to work. That requires some tricks described in the next chapter.
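Just to give you an idea, CPU pinning in the domain XML looks roughly like the sketch below. The cpuset numbers are hypothetical – map each vCPU to a real core/thread sibling pair, which you can list with lscpu -e or lstopo, and extend the pattern to all 12 vCPUs:
<vcpu placement="static">12</vcpu>
<cputune>
  <!-- pin vCPUs to host threads; pair hyperthread siblings (hypothetical numbering) -->
  <vcpupin vcpu="0" cpuset="6"/>
  <vcpupin vcpu="1" cpuset="18"/>
  <vcpupin vcpu="2" cpuset="7"/>
  <vcpupin vcpu="3" cpuset="19"/>
  <emulatorpin cpuset="0,12"/>
</cputune>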
Another section in the XML configuration you should pay attention to is the <clock> section:
<clock offset="localtime">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
<timer name="hpet" present="no"/>
<timer name="hypervclock" present="yes"/>
<timer name="tsc" present="yes" mode="native"/>
</clock>
For the above configuration the Linux host must use TSC. You can check that on the command line using:
cat /sys/devices/system/clocksource/*/current_clocksource
tsc
Note: Modern PCs use TSC by default. If not, you can use the “clocksource=tsc” option in grub. Check that tsc is an available option using cat /sys/devices/system/clocksource/*/available_clocksource.
Windows (VM) Installation
Update: A recent update to the edk2-ovmf package (OVMF UEFI BIOS) causes a black screen when trying to create the VM. Versions up until edk2-ovmf 202105 are good, newer ones up to 202111-4 are broken. Version edk2-ovmf 202111-5 fixed the problem! One way to work around this bug is to keep all the preconfigured Spice etc. devices and use the emulated graphics display of Virtual Machine Manager for installation. Once the graphics driver is installed in Windows, you can boot using the passthrough GPU.
When you are done configuring your Windows VM in Virtual Machine Manager, click the “Begin installation” button and then the “terminal” icon. This should present you with a console where – for the moment – the action takes place.
As the VM boots, it will briefly show a Tianocore UEFI BIOS screen, followed by a brief notice to “Press any key to boot from CD or DVD…”. If you are not fast enough, this message will time out and you’ll be presented with a very confusing “UEFI interactive shell”. (See here for some screen shots.)
Don’t panic: Click the window, then type “exit” and press [Enter]. You are presented with a menu offering several options. Use the cursor keys to navigate to the “Boot Manager” option and press [Enter]. Select the first UEFI QEMU DVD-ROM entry, which should be your Windows installation ISO. Press [Enter] again.
When the “Press any key…” message appears again, hit [Enter]. Windows should boot now.
Once you have clicked into the console, you can release the mouse cursor and keyboard by simultaneously pressing the [Left-Ctrl] and [Left-Alt] keys!
At the “Where do you want to install Windows?” screen, click “Load driver” in the bottom left. In the new window, click “Browse” and select the “virtio-win…” CD drive. Open the CD drive and go to vioscsi (the VirtIO SCSI driver) or viostor (the VirtIO block driver), depending on which disk bus you configured earlier. Inside the folder, select “W10” and then “amd64”.
Note: I had to untick “Hide incompatible drivers…” to be able to select the viostor driver. Also, when Windows rebooted, I killed the VM and changed the boot order.
Get some popcorn, watch a movie, or do other stuff while Windows installs. Don’t forget to answer the zillion questions that a Linux distro would never have asked you.
When Windows finished booting into the desktop, the first thing to do is install the network driver. Right-click the Windows icon, select “Device Manager” and “Network adapters”. Install the network driver by browsing to the virtio-win CD ROM, then “NetKVM”, “W10” and finally “amd64”.
After you ensured Internet access, open the browser and go to the Nvidia or AMD website to download the appropriate driver. You need to reboot once more.
Now the Windows VM will boot using the passthrough GPU and its connected display. It might boot into a dual screen setup with the real screen and the console window.
Once passthrough works to your satisfaction, the Display Spice server, Channel spice, Tablet, Video QXL, and USB Redirector (1 and 2) can be removed.
Here is how my Windows 10 configuration screen looks after the cleanup:




The Unifying Receiver shown in the screenshot is for my wireless multi-device keyboard and mouse.
Bugs and Regressions
Unfortunately QEMU 3.1 and 4.0 introduced some regressions or bugs. For more information, see Windows 10 client issues. (I’m now using QEMU 6.1.0.)
Let’s tackle them one by one:
Qemu 4.0.0 hangs the host and Windows 10 client
QEMU 4.0.0 hangs the host and Windows 10 client, for example when passing through a Nvidia card. For an under-the-hood explanation see here.
Solution for virt-manager: Add
<ioapic driver="kvm"/>
to the configuration as shown below:
<vmport state="off"/> <ioapic driver="kvm"/> </features>
Note: When using a QEMU script, add the following option to the qemu-system-x86_64 command:
kernel_irqchip=on
This workaround disables the IRQ splitting that was introduced as a default in QEMU 4.0.0. It should not affect performance.
Note: The issue has been resolved in kernel 5.6, where this option is no longer required (please check to make sure).
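For completeness, kernel_irqchip is a property of the -machine option. A minimal illustration (not a full passthrough command line, just enough to show where the property goes) would be:
qemu-system-x86_64 -machine q35,accel=kvm,kernel_irqchip=on -m 4G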
vhost_region_add_section: Overlapping but not coherent sections
This bug appears in some QEMU releases prior to QEMU 5.0 and can lead to network disconnection and performance drop. You may or may not notice this issue, but check your win10.log file under:
/var/log/libvirt/qemu
Here is what I found in my win10.log file:
2020-03-20T09:58:01.415434Z qemu-system-x86_64: vhost_region_add_section: Overlapping but not coherent sections at 108000
2020-03-20T09:58:01.415435Z qemu-system-x86_64: vhost_region_add_section: Overlapping but not coherent sections at 109000
2020-03-20T09:58:01.415436Z qemu-system-x86_64: vhost_region_add_section: Overlapping but not coherent sections at 10a000
...
As a workaround to the problem, we need to disable vhost. Add the following to the <interface> section of your XML configuration:
<driver name="qemu"/>
like this:
<interface type="bridge">
  <mac address="52:54:00:e1:49:c3"/>
  <source bridge="br0"/>
  <model type="virtio"/>
  <driver name="qemu"/>
  <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
Note 1: Disabling vhost does impede network performance, but is far better than any of the other choices. In most cases you won’t notice a difference (we are talking around 2.5 GBit versus 10 Gbit, but let’s face it – can you use the full 10 Gbit bandwidth?). Another workaround is to turn off the hypervisor extension “stimer” (see above). Feel free to experiment.
Note 2: QEMU version 5.0 fixes this issue! (Ubuntu 20.04 and derivatives and most other distributions deliver QEMU 5.0 or later now.)
No sound – pulseaudio fails
Note: If you have Virtual Machine Manager v3.0 or newer and a recent version of libvirt, you can configure pulseaudio using native XML syntax. See the Arch wiki under “Passing VM audio to host via PulseAudio“. You may still need to follow the steps a little further down here where it reads “We want to run the VMs under our own user name…”.
QEMU 4.0 brings improved audio support, but not entirely without hiccups. First, virt-manager 2.2.1 doesn’t yet support the new syntax, so we need to configure the audio support manually using QEMU command-line arguments.
Inside the Virtual Machine Manager GUI, at the very top of the XML configuration, change:
<domain type="kvm">
to:
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
This is how it looks then:
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  <name>win10</name>
  <uuid>d23aeb98-etc-pp-and-moreofit</uuid>
  <metadata>
After the above declaration, we can insert the QEMU option into the XML configuration. But first we need to identify our pulseaudio sound server. Enter in a terminal window:
pax11publish -d
Server: {a14b_lotsofit_884c}unix:/run/user/1000/pulse/native
Cookie: 3417_even_more_numbers_letters_isnt_it_fun_1accb9...
Note the location of the sound server /run/user/1000/pulse/native – we need it in the following step.
Inside Virtual Machine Manager, place the following lines at the bottom of the XML configuration, just above the last line </domain>:
<qemu:commandline>
<qemu:arg value="-audiodev"/>
<qemu:arg value="pa,id=pa1,server=/run/user/1000/pulse/native"/>
</qemu:commandline>
If the “Sound ich9” device isn’t listed in your list of configured devices yet, use the following instead:
<qemu:commandline>
<qemu:arg value="-device"/>
<qemu:arg value="ich9-intel-hda,bus=pcie.0,addr=0x1b"/>
<qemu:arg value="-device"/>
<qemu:arg value="hda-micro,audiodev=hda"/>
<qemu:arg value="-audiodev"/>
<qemu:arg value="pa,id=hda,server=/run/user/1000/pulse/native"/>
</qemu:commandline>
We want to run the VMs under our own user name. Edit the following file with your editor of choice:
sudo nano /etc/libvirt/qemu.conf
and search for the “user =” entry. Specify your user name and remove the hashtag:
user = "myusername"
Save and close the file. Now restart libvirtd:
sudo systemctl restart libvirtd
Unfortunately, that may not be enough. First, test the new setting and start the Windows VM. If you still get a “sound disabled” icon in Windows and can’t find the HDA sound device in the Windows Sound Troubleshooter, shut down the VM.
Check the following log file:
cat /var/log/libvirt/qemu/win10.log
with “win10” being the name of your Windows VM as specified in Virtual Machine Manager. At or towards the end of the log you should see the following:
pulseaudio: pa_context_connect() failed
pulseaudio: Reason: Connection refused
pulseaudio: Failed to initialize PA context
audio: Could not init `pa' audio driver
audio: warning: Using timer based audio emulation
Some suggest granting the root user access to the sound server. Assuming you run virt-manager / libvirt as root, that might work.
Before you go any further on that, have a look at syslog:
cat /var/log/syslog | grep DENIED
Mar 21 00:16:11 mypc kernel: [45172.799423] audit: type=1400 audit(1584742571.201:57): apparmor="DENIED" operation="open" profile="libvirt-d23aeb98-3f95-4ed5-a74b-c88b74e89b15" name="/etc/pulse/client.conf.d/" pid=14959 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
Mar 21 00:16:11 mypc kernel: [45172.799431] audit: type=1400 audit(1584742571.201:58): apparmor="DENIED" operation="open" profile="libvirt-d23aeb98-3f95-4ed5-a74b-c88b74e89b15" name="/dev/shm/" pid=14959 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
Mar 21 00:16:11 mypc kernel: [45172.799554] audit: type=1400 audit(1584742571.201:59): apparmor="DENIED" operation="connect" profile="libvirt-d23aeb98-3f95-4ed5-a74b-c88b74e89b15" name="/run/user/1000/pulse/native" pid=14959 comm="qemu-system-x86" requested_mask="wr" denied_mask="wr" fsuid=1000 ouid=1000
As you can see, AppArmor is the culprit. To test this theory, we only need to disable AppArmor for libvirt/QEMU and restart the libvirt daemon. Use your favorite editor and once again edit the following file:
sudo nano /etc/libvirt/qemu.conf
Look for “security_driver” and change to read as follows (without hashtag!):
security_driver = "none"
Save the file and restart libvirtd:
sudo systemctl restart libvirtd
Now start your Windows VM and see if it worked. By now you should have sound output. If not, check again the log files.
Assuming sound is working, we need to modify the default AppArmor configuration for new and existing VMs. Edit:
sudo nano /etc/apparmor.d/abstractions/libvirt-qemu
and insert the following access rules below the line reading
“/var/lib/dbus/machine-id r,“:
/etc/pulse/client.conf.d/** r,
/dev/shm/ r,
owner /run/user/1000/pulse/native rw,
/etc/machine-id r,
Note: The “owner” statement is optional – if you keep having trouble with sound, remove “owner” and see if it helps.
Save and quit.
Re-enable AppArmor:
sudo nano /etc/libvirt/qemu.conf
and add the hashtag:
#security_driver = "none"
Save and quit. Then restart libvirtd:
sudo systemctl restart libvirtd
and start the Windows VM. This should work.
Performance Tuning
The Windows VM you just created should already perform very well. But there are definitely ways to further improve performance. Instead of repeating what I or others already wrote, here are some links to further information:
- The Performance Tuning section in my previous tutorial, in particular MSI Message Signaled Interrupts.
- Improving the performance of a Windows Guest on KVM/QEMU by Leduccc is a comprehensive guide on VM performance tuning.
- “Part 4: Improving VM Performance” of Bryan’s excellent passthrough tutorial, in particular the sections on “CPU pinning” and “performance governor”.
- Check out my latest configuration here.
- For concise, up-to-date information on kvm VFIO passthrough, check out the VFIO channel on Discord.
- Mathias Hueber’s Comprehensive guide to performance optimizations for gaming on virtual machines with KVM/QEMU and PCI passthrough – though slightly outdated now – is a great reference point for VM tuning. If you have an AMD Ryzen CPU, check out the CPU pinning and CCX alignment section.
In case Virtual Machine Manager didn’t give you the choice to select a bridge as the network interface, you may want to configure one. Note that a bridge only works with wired connections, not Wi-Fi.
To create a network bridge to be used by virt-manager, follow these steps:
- Install the Network Manager connection editor (if not already present): sudo apt install network-manager-gnome
- In a terminal window, type
nm-connection-editor
- Setup the network as described here.
- Within Virtual Machine Manager, select the bridged network connection.
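Alternatively, the bridge can be created from the command line with nmcli – a sketch where br0 is the new bridge and enp5s0 is a placeholder for your wired interface (see nmcli device):
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname enp5s0 master br0
nmcli connection modify br0 bridge.stp no
nmcli connection up br0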
Latest Software
With regard to VFIO / GPU passthrough, up-to-date virtualisation packages – especially the QEMU package – sometimes improve performance, increase compatibility, or introduce new features. Obviously newer Ubuntu releases offer more up-to-date packages. Below is an overview of the out-of-the-box QEMU versions:
- Ubuntu 19.10 and derivatives provide QEMU 4.0.
- Ubuntu 20.04 LTS and its derivatives (e.g. Pop!_OS 20.04, Linux Mint 20, etc.) come with QEMU 4.2.
- Ubuntu 20.10 and Pop!_OS 20.10 provide QEMU 5.0
The downside is that new QEMU releases can also introduce bugs and stability issues. There is a wise saying: “If it ain’t broke, don’t fix it.” Unless you have a good reason to upgrade, don’t!
You can get newer (almost the latest) versions of the QEMU, libvirt, libvirt-python, virt-manager, liburing, and wine packages by adding Jacob Zimmermann’s virtualisation repository to your sources:
sudo add-apt-repository ppa:jacob/virtualisation
sudo apt update
sudo apt upgrade
Note: Updating your system with unsupported packages from an untrusted PPA always bears a security risk.
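After the upgrade you can check which versions you ended up with:
qemu-system-x86_64 --version
virsh version
virt-manager --version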
Benchmarks
Have a look at my first benchmarks here.
Or see my latest Passmark 10 benchmark.
More benchmarks to follow.
Credits
I’ve used literally hundreds of online sources to research this tutorial, in addition to my own notes. Among the many sources, however, there are some that stick out.
First and foremost is Bryan Steiner’s comprehensive “gpu-passthrough-tutorial“. He describes a number of new concepts in his refreshing and very well written tutorial. It also includes a chapter about performance tuning that you definitely should look into.
Another great source is Mathias Hueber’s “Beginner friendly guide to windows virtual machines with GPU passthrough on Ubuntu 18.04; or how to play competitive games in a virtual machine“. It contains a wealth of information as well as an optimization and troubleshooting section.
For a more comprehensive list of resources, see the References section in my “Running Windows 10 on Linux using KVM with VGA Passthrough” tutorial.
Thanks to David G. for pointing out the need for the ich9 device that’s required for audio.
If this article has been helpful, click the “Like” button below. Don’t forget to share this page with your friends.
greetings,
I love your tutorials, they are comprehensive, understandable and are working!
Thanks a lot for your effort!
This tutorial has embedded pop-ups, e.g. “Pop!_OS 20.04 / Ubuntu 20.04 / kernel 5.4+ – press to expand” – unfortunately they are not working. I’ve tried FF on Ubuntu 20.2, and Edge + FF on Windows. Am I doing something wrong, or are they really broken?
Regards,
Björn
Dear Björn,
Many thanks for your comment. I have contacted the plugin author to help resolve this issue. I’m hopeful it can be fixed, so I’m currently not touching the website. If I can’t get it working by tomorrow, I will look for another plugin or solution.
In the meantime, my apologies for your not being able to open and read the content. This will be temporary. As to the solution for “Pop!_OS 20.04 / Ubuntu 20.04 / kernel 5.4+ – press to expand”, just read https://www.heiko-sieger.info/blacklisting-graphics-driver/. You will find the answers there.
You should bind the vfio-pci driver in the grub file at boot time (this is the easiest), or use the driverctl utility.
Heiko
Hello Björn,
The problem is fixed now – “press to expand” works again. Thanks to the amazingly fast help from collapse-o-matic !!! I’ve never seen such a fast and to the point response in my life. It took perhaps 2 minutes to get a solution.
All should be good now.
I should have double-checked your commands instead of blindly following them, but you might want to change “kernelstub -o” to “kernelstub -a”. “-o” overwrote the options necessary to boot my system. Fortunately it was easily fixable within minutes from the recovery boot.
Thanks for your post! I have updated the instructions.
I removed Pop!_OS long ago as I ran into more problems than I care for. kernelstub was one of the problems. Perhaps I’m just more familiar with grub.
The kernelstub -o option replaces the entry, the -a adds to the existing entries. Both may work, it depends on your configuration.
I’m glad you were able to solve the boot problem. Sorry for the inconvenience.
Hey Mr. Sieger,
I absolutely loved this tutorial; it was so well organized and everything was explained very well. However, I am having issues with my setup. I’m using an AMD iGPU for the host (working) and an Nvidia 1650 Super for the guest (kind of working). I’m on Linux Mint MATE 21 and the guest OS is Linux Mint Cinnamon 21. It boots properly to where I can select the download and will even display the Linux Mint logo, but after that the screen shuts down. I’m at a loss, so any help would be super appreciated. Keep up the fantastic work!
After looking deeper I found I actually missed a step – makes sense, I was confused why we didn’t revisit the ROM file, lol. However, after following the steps under “Create/Modify the XML VM configuration file” in “Passing Through a Nvidia RTX 2070 Super GPU”, I now get an error that it can’t find the file in my Downloads folder. Is that causing the issue?
Hello Blake,
Glad you found the tutorial(s) useful. Now to the issue of the missing ROM file:
Check that the file name and path you specified in the XML file match the actual location of the VBIOS file.
If everything is correct, check the file permissions. I also suggest using a folder other than the Downloads folder to avoid accidental deletion.
Should you be running kernel 6, there might be an issue with the grub method of binding the graphics card. See my latest post.
I checked the file permissions and everything is set to read and write, I’m running kernel 5.15, and I moved it out of Downloads and double-checked the name in the XML – and still nothing.
I loved this post, the best around, detailed and explanatory.
Is there any chance of writing a similar one, for X670E & 7xx0X3D systems?
I upgraded mine and struggle to make it work.