Qemu/kvm provides a plethora of ways to configure your storage devices. Yet no other type of device shows such variance in performance, with disk I/O throughput ranging from stellar to abysmal on the very same hardware.
In this post I’d like to show some configuration options that can help improve VM disk performance. For an in-depth presentation on the latest developments and features, with hands-on examples, see Storage Performance Tuning for FAST! Virtual Machines.
Storage Options
Qemu supports a variety of storage formats for virtual machines, some of which perform better than others. In the comparison below I don’t consider features such as “live migration”, etc. that may or may not be important to you.
File-based storage
This is probably the easiest to implement, and the one I chose for my virtualization tutorial. Qemu offers an array of image file formats, two of which are worth a closer look:
- raw – Raw disk image format (default). This format has the advantage of being simple and easily exportable to other emulators.
- qcow2 – QEMU image format, the most versatile format. qcow2 can provide smaller images (thin provisioning), encryption, compression, and support of multiple VM snapshots.
Of the above two, you need to weigh performance against storage space and flexibility. “raw” is the more performant option, whereas “qcow2” has the ability to use up no more space than the data inside actually occupies. However, with the right settings, qcow2 can come very close to the performance of raw images.
For best performance using a raw image file, use the following command to create the file and preallocate the disk space:
qemu-img create -f raw -o preallocation=full vmdisk.img 100G
Change the file name and size (in GByte) to match your needs.
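To verify that the space was actually preallocated, you can inspect the image with qemu-img; for a fully preallocated raw image, the reported disk size should roughly match the virtual size:

qemu-img info vmdisk.img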
For best performance using a qcow2 image file, increase the cluster size when creating the qcow2 file:
qemu-img create -f qcow2 -o cluster_size=2M vmdisk.qcow2 100G
Change the file name and size (in GByte) to match your needs.
You may see some small performance gain when preallocating (full) disk space with qcow2 images, but this defeats thin provisioning.
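If you want to keep thin provisioning while still reducing allocation overhead at run time, preallocation=metadata can be a middle ground; a sketch, to be benchmarked against your own workload:

qemu-img create -f qcow2 -o cluster_size=2M,preallocation=metadata vmdisk.qcow2 100G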
Whether you use raw or qcow2 image files, you will want to use the virtio driver, and it is recommended to combine it with an “iothread”. Inside the VM start script, modify the qemu-system-x86_64 command to include:
-object iothread,id=io1 \
-device virtio-blk-pci,drive=disk0,iothread=io1 \
-drive if=none,id=disk0,cache=none,format=raw,aio=threads,file=/path/to/vmdisk.img \
Notice the aio=threads option. This is the preferred option when storing the VM image file on an ext4 file system. With other file systems, aio=native should perform better. You can experiment with that.
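For example, to try aio=native, only the -drive line changes (same file and device IDs as above). Note that aio=native requires a cache mode that bypasses the host page cache, such as the cache=none setting already used here:

-drive if=none,id=disk0,cache=none,format=raw,aio=native,file=/path/to/vmdisk.img \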
Disk-based storage
Qemu/kvm virtual machines can use disk-based storage much the same way as Linux or Windows uses disks or partitions. Simply specify your partition in the qemu-system-x86_64 -drive command, instead of the image file name.
A downside of using disk-based storage is the lack of flexibility and the inability to take snapshots (for backup, for example). The solution to this is LVM (the Logical Volume Manager), which is what I use.
In the examples below I use the following syntax for partitions:
file=/dev/sdb1
/dev/sdb1 is an unformatted “raw” disk partition.
If you use LVM logical volumes, the above changes to:
file=/dev/volume_group/logical_volume
where volume_group could be “VMs” and logical_volume could be “myWindows10VM”.
LVM logical volumes need to be configured before you can use them, but this goes beyond the scope of this post.
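As a rough sketch, assuming a spare disk /dev/sdb and the names used above (volume group “VMs”, logical volume “myWindows10VM”), the steps look like this:

pvcreate /dev/sdb
vgcreate VMs /dev/sdb
lvcreate -L 100G -n myWindows10VM VMs

The resulting device can then be passed to qemu as file=/dev/VMs/myWindows10VM.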
There are a number of parameters to configure. In most cases cache=none provides the best result.
With regard to aio=native versus aio=threads, this setting depends on the number of VMs you are running concurrently on the system. For one VM, throughput increases when using aio=threads on SSD-based storage. An in-depth presentation and benchmarks can be found here.
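To benchmark the two settings on your own system, a simple fio run inside the guest gives a rough comparison (assuming fio is installed in the guest; it creates a 1 GB test file in the current directory, so run it once with aio=threads and once with aio=native configured on the host):

fio --name=randwrite --filename=fio.test --size=1G --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --runtime=60 --time_based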
Here is the qemu-system-x86_64 configuration for a storage partition /dev/sdb1 using the virtio-blk-pci driver in conjunction with iothread:
-object iothread,id=io1 \
-device virtio-blk-pci,drive=disk0,iothread=io1 \
-drive if=none,id=disk0,cache=none,format=raw,aio=threads,file=/dev/sdb1 \
The example below shows the use of the virtio-scsi-pci driver. In this case I defined a PCIe root port (pcie-root-port, the generic successor to the ioh3420 device) to which I attached the SCSI controller and disk:
-device pcie-root-port,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-object iothread,id=io1 \
-device virtio-scsi-pci,id=scsi0,iothread=io1,num_queues=4,bus=root.1 \
-drive id=disk0,file=/dev/sdb1,if=none,format=raw,aio=threads,cache=none \
-device scsi-hd,drive=disk0 \
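One advantage of virtio-scsi is that it can pass discard (TRIM/UNMAP) requests from the guest down to the underlying storage. If you want that, for example on an SSD or a thinly provisioned LVM volume, a possible variation of the drive line above adds discard=unmap:

-drive id=disk0,file=/dev/sdb1,if=none,format=raw,aio=threads,cache=none,discard=unmap \
-device scsi-hd,drive=disk0 \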
I hope the above examples help improve your storage I/O performance. For more information and configuration examples, follow the links in the text.