Some first performance tests of various kvm backing stores

There are quite a few options for the backing store of a KVM
virtual machine, and I’ve started benchmarking just a few of the
possible combinations. For starters, the host was a stock headless
Ubuntu precise system installed from the mini network installer,
running the generic kernel with no X and only the virtualization
host package set selected through tasksel. I ran all VMs with
1024M of RAM, VNC, and user-mode networking – which is fine, as I
was only running kernbench.
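For reference, each guest was launched with a command along these
lines (a sketch based on the settings above; the VNC display and
image name are placeholders, and only the disk argument changed
between runs):

    kvm -m 1024 -vnc :1 -net nic -net user \
        -drive file=raw.img,if=virtio,cache=none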

The specific test I ran was the Phoronix pts/build-linux-kernel
test, which runs three compiles and reports the average time,
standard error, and standard deviation.
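If you want to reproduce it, the test can be run inside the guest
with something like the following (assuming phoronix-test-suite is
available from the archive; the suite downloads the kernel source
it builds):

    # run inside the guest
    sudo apt-get install phoronix-test-suite
    phoronix-test-suite benchmark pts/build-linux-kernel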

The only thing I varied was the backing store. I used a raw
partition, an LVM partition, and raw, qcow2, and qed image files
(all unallocated, that is, sparse). I tried the qcow2 and qed
images on both ext4 and xfs filesystems. In all cases the VM
itself was partitioned with the default ext4 filesystem.
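Creating the image files and the LVM volume looked roughly like
this (the 20G size is a placeholder; qemu-img creates all three
formats sparse, and the volume group and logical volume names
match the /dev/schroot/kvm entries in the table below):

    qemu-img create -f raw   raw.img   20G
    qemu-img create -f qcow2 qcow2.img 20G
    qemu-img create -f qed   qed.img   20G
    lvcreate -L 20G -n kvm schroot    # appears as /dev/schroot/kvm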

I ran the test (with three iterations) twice with each setup.

drive option | Average time (seconds) | Standard error | Standard deviation
-drive file=raw.img,if=virtio,cache=none (raw.img on ext4) | 1004.07 | 1.24 | 0.21%
-drive file=raw.img,if=virtio,cache=none (raw.img on ext4) | 1003.54 | 2.48 | 0.43%
-hda /dev/sda7 | 1003.77 | 0.93 | 0.16%
-hda /dev/sda7 | 1006.18 | 1.21 | 0.21%
-drive file=/dev/sda7,if=virtio,cache=none | 1003.77 | 0.93 | 0.16%
-drive file=/dev/sda7,if=virtio,cache=none | 1005.66 | 2.08 | 0.36%
-drive file=/dev/schroot/kvm,if=virtio,cache=none | 1002.84 | 1.39 | 0.24%
-drive file=/dev/schroot/kvm,if=virtio,cache=none | 1008.85 | 1.51 | 0.26%
-drive file=/dev/schroot/kvm,if=virtio,cache=writethrough | 1002.21 | 2.05 | 0.36%
-drive file=/dev/schroot/kvm,if=virtio,cache=writethrough | 1004.07 | 1.31 | 0.23%
-drive file=/dev/qcow2.img,if=virtio,cache=none (qcow2.img on ext4) | 1007.84 | 1.29 | 0.22%
-drive file=/dev/qcow2.img,if=virtio,cache=none (qcow2.img on ext4) | 1011.28 | 1.17 | 0.20%
-drive file=/dev/qed.img,if=virtio,cache=none (qed.img on ext4) | 1018.18 | 0.86 | 0.15%
-drive file=/dev/qed.img,if=virtio,cache=none (qed.img on ext4) | 1012.69 | 1.14 | 0.20%
-drive file=/dev/qcow2.img,if=virtio,cache=none (qcow2.img on xfs) | 1006.70 | 1.53 | 0.26%
-drive file=/dev/qcow2.img,if=virtio,cache=none (qcow2.img on xfs) | 1009.96 | 1.38 | 0.24%
-drive file=/dev/qed.img,if=virtio,cache=none (qed.img on xfs) | 1005.70 | 0.80 | 0.14%
-drive file=/dev/qed.img,if=virtio,cache=none (qed.img on xfs) | 1009.56 | 1.14 | 0.20%
-drive file=/dev/qcow2.img,if=virtio,cache=writethrough (qcow2.img on xfs) | 1002.05 | 1.91 | 0.33%
-drive file=/dev/qcow2.img,if=virtio,cache=writethrough (qcow2.img on xfs) | 1003.56 | 1.85 | 0.32%

I had a few snafus with reinstallations, so hopefully next time I
can do this faster, but overall this took me about a week on one
laptop with an Intel Core 2 Duo T6670 @ 2.20GHz. I’m hoping to have
two of these machines in a few weeks so I can run more tests
simultaneously.

I’ll leave analysis out of this post for now and just dump these
numbers. I know, I’d rather report mean +/- 95% CI for each set of
six compiles, but the numbers above were nicely provided as-is (as
a .png) by the test suite, and I was short on time…
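(Concretely, for each setup that would be the usual t-based
interval over the six compile times: mean ± 2.571 · s/√6, where
2.571 is the two-sided 95% t quantile with five degrees of
freedom.)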


4 Responses to Some first performance tests of various kvm backing stores

  1. Could you please add cache=unsafe to the test?

  2. Soren says:

    I don’t think these numbers are very interesting. A kernel compile isn’t particularly I/O bound, and I/O is what we’re attempting to gauge. The fact that all the numbers are very, very close indicates that you’re not really measuring differences in I/O performance.

    I ran similar tests a couple of months ago where I unpacked a kernel, sync(1)ed, then read every file again (“find /usr/src -print0 | xargs -0 cat > /dev/null” or something like that). I’m sure that has a bunch of its own problems, but the results were reproducible, it’s almost entirely I/O bound, and the differences in performance were quite unambiguous.

    I took the Ubuntu cloud image, fired it up, and installed the compressed kernel source package. Then I shut the VM down again. That gave me my raw base image.

    I whipped up a Makefile that would take this raw base image and either make a copy, convert it to qcow2 or qed, or shove it onto an LVM volume, IIRC. Then I’d start the timer, fire up the VM, passing it an OVF-style ISO that would instruct cloud-init to perform the above-mentioned test, and then shut down. Once the machine was shut down, I’d stop the timer and record the time spent. I ran it on a desktop-ish workstation (not a laptop). I think it took 12 hours to go through all the permutations ({raw,qcow2,qed,lvm}-{none,writethrough,writeback}-{ide,scsi,virtio}). The differences were *huge*. I’m not sure if I still have the data or the script, though :(

    I hope this helps.

    • s3hh says:

      Thanks for the info.

      I actually didn’t start off intending to measure only disk performance. I’m also interested in things like the impact of KSM and ballooning, etc. In fact, when I started I was planning to re-install the host for each test. I picked kernbench just because it’s one of the four things I always ran to test the impact of the LSM stacker.

      But point taken. And it’s a great idea, thanks.

      So I think what I’ll do for my next round (which just might happen this week after all) is to set up an image with an upstart job that starts on runlevel 2 and does as you suggest – or actually, one image tuned for testing reads and one for testing writes – and then shuts down. I’ll time that for the permutations you list, plus cache=unsafe.
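      A minimal sketch of such an upstart job (the file name is hypothetical, and the test command is just the read test from your comment; the write-test image would run a write-heavy command instead):

          # /etc/init/io-bench.conf (hypothetical name)
          description "run the I/O test at boot, then power off"
          start on runlevel [2]
          task
          script
              # placeholder read test; the write-test image would differ here
              find /usr/src -print0 | xargs -0 cat > /dev/null
              poweroff
          end script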
