Time flies. 12 years have passed since I built my first PC with GPU passthrough. Back in the old days there was little documentation on how to do it. I found a tutorial written for Fedora, plus some messages here and there. VGA passthru, as it was often called, was very restrictive. You had to have the right hardware to make it work, including a graphics card that was supported (like the Nvidia Quadro series).
My first Microsoft Windows 7 VM with PCI passthrough ran on a Xen hypervisor. Dom0 (the “host”) was Linux Mint, an Ubuntu derivative. Once I got it up and running, I wrote a detailed tutorial on the Linux Mint forum under the title “HOW-TO make dual-boot obsolete using XEN VGA passthrough”. To my great surprise, it had several hundred thousand readers over the years. Seems like dual-boot was a pain in the neck not only for me.
Fast forward: If you search the Internet for “gpu passthrough”, “pci passthrough”, “vfio”, “virtio” or “Windows gaming vm”, you are bound to find information and tutorials on how to run a Microsoft Windows virtual machine on Linux with near bare metal performance. That includes video tutorials on YouTube – for a good one, see the series by BlandManStudios.
VFIO Hardware Support
Kernel-based virtual machines (KVM) have become popular. No wonder, then, that on the CPU side most modern AMD and Intel CPUs support an IOMMU, which is a prerequisite for passthrough.
IOMMU support on motherboards is still a bit tricky. Manufacturers often don’t specify that feature. Sometimes you’ll find IOMMU (or SVM or VT-d, as manufacturers call it) mentioned in the BIOS manual, but that doesn’t guarantee that it works properly. The best way is to search the Internet or the vfio Reddit forum for reports from other users regarding a specific motherboard.
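One quick way to check whether the IOMMU actually works is to list the IOMMU groups the kernel has created and verify that the GPU sits in its own group. A minimal sketch (a variation of a widely used snippet; the optional argument is an alternate sysfs root, handy for testing):

```shell
#!/bin/bash
# List each IOMMU group and the PCI devices it contains.
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    shopt -s nullglob
    local group dev name
    for group in "$base"/*/; do
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group"devices/*; do
            # lspci -nns prints the device description plus vendor:device IDs;
            # fall back to the raw PCI address if lspci is unavailable.
            name="$(lspci -nns "$(basename "$dev")" 2>/dev/null || true)"
            echo "    ${name:-$(basename "$dev")}"
        done
    done
}

list_iommu_groups "$@"
```

If the script prints nothing, the IOMMU is disabled in the BIOS or the kernel was booted without `intel_iommu=on` / `amd_iommu=on`.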
On the graphics card front, Nvidia now supports GPU passthrough for all their mainstream graphics cards. Even before Nvidia’s change in policy, the driver restrictions were easily circumvented by hiding the KVM hypervisor from the guest. The AMD RX 6800/6800 XT graphics cards have reportedly overcome the nasty FLR reset bug, making them usable for virtual machines. However, the newer Radeon RX 7000 series are reportedly no good for passthrough. In short, unless proven otherwise, one should steer clear of AMD GPUs for the guest system.
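With libvirt, hiding the hypervisor is typically a matter of a few lines in the domain XML. A sketch (the `vendor_id` value is an arbitrary placeholder of up to 12 characters):

```xml
<features>
  <hyperv>
    <vendor_id state='on' value='randomid1234'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```

On modern Nvidia drivers this is no longer required, but it remains useful for older driver versions.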
GPU passthrough is also possible with laptops, for example when using Nvidia Optimus technology.
Advancements in Virtualization
Virtualization technology has not only experienced improved support from the hardware vendors, but also from the open source community. Many new features and tweaks have been introduced that make PCI passthrough, as well as GPU passthrough – which is a special case of the former – easier and better. Here is a list of things that I’ve noticed have improved greatly over the years:
- Better audio support in the VM – no more crackling sound
- Windows guest drivers with better support for storage devices, network, and more
- Easy passthrough of USB devices such as keyboard and mouse
- Virtual Machine Manager (virt-manager) supports most of the features to configure and start a GPU passthrough VM
- Better support for single-GPU passthrough
- Support for virtiofs to share host folders with the guest
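As an illustration of the last point, a virtiofs share is declared in the libvirt domain XML roughly as follows (the directory path and the share tag `hostshare` are placeholders; virtiofs also requires shared memory backing):

```xml
<!-- virtiofs needs shared memory backing, in the domain's main section: -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- the shared folder itself, under <devices>: -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/home/user/shared'/>
  <target dir='hostshare'/>
</filesystem>
```

A Windows guest additionally needs the WinFsp package and the virtiofs driver and service from the virtio-win ISO to mount the share as a drive letter.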
The list is by no means complete. Some users will find Looking Glass by Geoffrey McRae a.k.a. gnif very helpful. Looking Glass lets you relay the video frames from the guest VM to the host, so you won’t need a second screen or a cable connecting the guest GPU to the monitor. Geoffrey has also created some kernel patches for the reset bug that haunts so many AMD graphics cards.
More than a decade ago I got fed up with dual-boot. I needed the Microsoft Windows installation to edit my photos using Adobe Lightroom and Photoshop. VirtualBox did not cut it; it was, and still is, far from bare metal performance.
Linux is my main OS with most of the applications I use – Internet, email, e-book reader, document editor, office applications, productivity tools, multimedia, backup utilities, even some games. The Windows VM is used for photo and video editing, and once in a while for gaming.
Both the Linux host (Linux Mint or Manjaro) and the Microsoft Windows passthrough VM have worked amazingly well. In fact, when I upgraded my PC hardware – a switch from Intel to AMD – I simply used my existing Windows VM and let it reconfigure itself for the new hardware, without reinstalling from scratch. This migration would not have been possible if I had installed Windows directly onto bare metal.
Using LVM for Storage
I could pass through my NVMe controller to Windows and perhaps get slightly faster disk performance. But I’ve opted for NTFS partitions on LVM (on the host side), which offers enormous flexibility. LVM stands for Logical Volume Manager. It allows you to create storage pools, with or without RAID, and assign disk space spanning multiple drives to a single logical volume (LV) or partition.
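Setting up such a pool might look like the following sketch (device names match the example further below; the 120G size is made up, and these commands are destructive, so don’t run them verbatim):

```shell
# Pool an NVMe partition and an SSD partition into one volume group,
# then carve out an LV for the Windows system drive.
# WARNING: pvcreate/vgcreate wipe the given partitions.
pvcreate /dev/nvme1n1p1 /dev/sdb1               # register physical volumes
vgcreate vmvg /dev/nvme1n1p1 /dev/sdb1          # pool them into volume group "vmvg"
lvcreate -L 120G -n win10 vmvg /dev/nvme1n1p1   # pin the new LV to the NVMe drive
```

The trailing device argument to lvcreate restricts allocation to that physical volume, which is how system LVs can be kept on the fast drive.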
Here is an example of what I mean by “flexibility”. A couple of days ago I ran out of disk space on the Windows C: drive. My volume group vmvg spans two drives: a fast NVMe drive and an SSD. My Windows system partitions are all on the NVMe drive, and I want them to stay there. Since the NVMe drive didn’t have enough free storage space, I had to move another VM to make room for a larger C: drive:
pvmove -n /dev/vmvg/opensuse /dev/nvme1n1p1 /dev/sdb1
The above command moves the /dev/vmvg/opensuse logical volume (LV) from the nvme1n1p1 NVMe drive to the sdb1 SSD. The next command enlarges the win10 system LV:
lvextend -L +30G /dev/vmvg/win10 /dev/nvme1n1p1
The -L +30G option extends the volume by 30 GByte, /dev/vmvg/win10 is the logical volume, and the optional /dev/nvme1n1p1 makes sure the new extents are allocated on the NVMe drive. Note that after enlarging the LV, the partition and NTFS filesystem inside it still need to be extended, for example with Windows Disk Management in the guest.
I use bash scripts to back up or restore my Windows VM. It takes about 15 minutes to back up or restore a 270 GByte partition to or from an HDD. All file handling is done on the host. I use scripts to mount and unmount my Windows NTFS data volumes in Linux and to back them up remotely to a server using rsync.
I’ve been using LVM for as long as I’ve been running a Windows VM with PCI passthrough. The only downside of LVM is that there is no stand-alone GUI to help with LVM administration (except perhaps blivet-gui). Virtual Machine Manager can create and use LVM volumes, though.
Another storage option is ZFS, if you have enough money to spend and need lots of disk space. ZFS is more demanding to set up properly, but ultimately offers the highest data integrity and safeguards. The Ars Technica website has some good introductions to ZFS.
There is no way I would go back to running Microsoft Windows on bare metal. In practice, the Windows VM is about as fast as a bare metal installation. I can easily edit 4k or even 8k videos in a virtual machine – that is insane.
Data partitions (like drive E: or F:) can be backed up using the snapshot feature of LVM while the VM is running. You can even automate the backup using a Linux cron job. These tools are prepackaged with Linux, or can be easily installed from the repositories.
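A snapshot-based backup could be sketched like this (the LV name data_e, the snapshot size, and the backup path are hypothetical; only the volume group vmvg comes from the example above):

```shell
#!/bin/bash
# Sketch: back up a Windows data LV while the VM keeps running.
set -euo pipefail
lvcreate -s -L 5G -n data_e_snap /dev/vmvg/data_e   # point-in-time snapshot
dd if=/dev/vmvg/data_e_snap of=/mnt/backup/data_e.img bs=64M status=progress
lvremove -f /dev/vmvg/data_e_snap                   # discard the snapshot
```

The snapshot size only needs to hold the writes that occur during the backup. Wrapped in a script, a crontab entry can then run it unattended on a schedule.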
In case of hardware failure or upgrade, the Windows VM can most likely be migrated to the new hardware, without the need for a new installation (we all know what a pain in the neck this is with MS Windows).
The only downside of running Microsoft Windows in a VM is the legal requirement to purchase the proper Windows license – usually an expensive retail license. The OEM license that comes with preinstalled Windows PCs is usually only good for a bare metal installation on that particular hardware.