I intend to do kvm performance comparisons of various tuning knobs in 3 sets: one to test virtual disk performance, one to test network performance, and one to test general cpu performance.
For disk performance, I’m going to very closely follow Soren’s suggestion. I’ll use two upstart jobs: one that does heavy read activity (find / -type f and tar zvft of a kernel tree) and then shuts down, and one that does heavy write activity. I’ll start the kvm VM and time how long it takes to complete. I intend to do all permutations of the following:
host fs: xfs, ext4, ext3, ext2, lvm, lvm snapshot
drive: qcow2, qed, raw, lvm
preallocated (for qcow2 and qed): yes and no
interface: virtio, ide
cache: unsafe, none, writeback, writethrough, directsync
aio: threads, native
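To make the matrix concrete, here’s a rough sketch of how these knobs map onto qemu-kvm -drive options. The binary name and image path are assumptions, the lvm case would use a block device path instead of an image file (so it’s omitted), and preallocation is a qemu-img create-time option rather than a -drive one:

```shell
# Sketch only: enumerate the file-backed disk permutations as qemu-kvm
# command lines. /images/test.$fmt and the qemu-kvm binary name are
# illustrative assumptions; the lvm drive case (a raw block device) is
# left out, as is the host-fs dimension, which is chosen at image
# creation time, not on the command line.
gen_disk_matrix() {
  for fmt in qcow2 qed raw; do
    for iface in virtio ide; do
      for cache in unsafe none writeback writethrough directsync; do
        for aio in threads native; do
          echo "qemu-kvm -drive file=/images/test.$fmt,format=$fmt,if=$iface,cache=$cache,aio=$aio"
        done
      done
    done
  done
}
gen_disk_matrix
```

One wrinkle worth noting: aio=native generally wants a cache mode that uses O_DIRECT (none or directsync), so some of these 60 file-format combinations may need to be pruned in practice.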
For cpu performance, I’ll have an upstart job run kernbench with permutations of the following:
guest memory: 256M, 512M, 1024M, 2048M, 4096M (*1), 8192M (*1)
overcommit: none, ksm, balloon, both
guest swap: none, =memsize, =3xmemsize
smp: 1, 2, 4 (*2), 8 (*2)
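A sketch of the kernbench matrix, with the starred combinations already dropped. The flag spellings (-m, -smp, -balloon virtio) are assumptions for a qemu-kvm of this vintage; ksm is enabled host-side (via /sys/kernel/mm/ksm/run) and guest swap is configured inside the guest, so neither shows up on the command line:

```shell
# Sketch only: enumerate the cpu-test permutations as qemu-kvm command
# lines. 4096M/8192M are skipped per (*1) and -smp 4/8 per (*2).
# ksm and guest swap are host-side / guest-internal settings, so they
# appear only as comments here.
kern_matrix() {
  for mem in 256 512 1024 2048; do
    for smp in 1 2; do
      for oc in none ksm balloon both; do
        extra=""
        case $oc in
          balloon|both) extra=" -balloon virtio" ;;
        esac
        echo "qemu-kvm -m $mem -smp $smp$extra  # overcommit=$oc"
      done
    done
  done
}
kern_matrix
```

That’s 32 qemu invocations before multiplying by the three guest-swap settings.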
Finally, for network performance I’ll probably have an upstart job repeatedly wget kernel tarballs from apache on the host, trying the following options:
nic: tap, user
type: virtio, rtl8139, ne2k_pci, e1000, pcnet
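For reference, a sketch of how the nic/type pairs map onto old-style -net options (the syntax and binary name are assumptions; a real tap setup would also need ifname= and a bridge helper script):

```shell
# Sketch only: enumerate the network permutations as qemu-kvm command
# lines. "-net tap" would normally carry ifname=/script= options, which
# are host-specific and omitted here.
net_matrix() {
  for nic in tap user; do
    for model in virtio rtl8139 ne2k_pci e1000 pcnet; do
      echo "qemu-kvm -net nic,model=$model -net $nic"
    done
  done
}
net_matrix
```

The measurement itself would then just be something like time wget -O /dev/null against the host’s apache inside each guest.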
Feedback welcome!
*1 – I won’t do these as my target machine only has 4G ram
*2 – I won’t do these as my target machine only has 4 virtual cpus
I’m looking forward to seeing the results. Can you also add (my favourite) jfs to the set of tested host fs?
Thanks!
Sure – jfs was actually on my list originally, I’ll put it back in 🙂
I’m really interested in these performance measurements. But it’s a lot of work and could take weeks. So, good luck with this difficult undertaking.
Can’t wait, this sounds great!