KVM performance runs under way

I've finally gotten the KVM performance tests rolling, and I'm hoping to have the first set of results some time next week. I installed a new Precise server image on a laptop with a 100G rootfs (ext4) and a 100G partition for the guest images. I installed a Precise server guest in a 10G base image on the rootfs, and for the first set of tests formatted the other partition as ext4. For each test, the base image gets copied to the experimental partition; in later rounds I intend to reformat that partition as xfs, jfs, etc.
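The per-test refresh amounts to replacing the experimental image with a pristine copy of the base before each run. A minimal sketch (the helper name and paths are placeholders of mine; the host script later in this post actually does this step with `qemu-img convert`, so the destination format can vary per run):

```shell
# Hypothetical helper: start every run from an identical disk image.
refresh_image() {
    base=$1 dest=$2
    rm -f "$dest"       # drop the image dirtied by the previous run
    cp "$base" "$dest"  # fresh copy of the pristine base image
}
```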

The guest has the following upstart script.

{{{
# iotest - perform 5 read and 5 write tests
# shut down after each test so the caller can time it
description "Run 5 read and 5 write tests, one per reboot"
author "Serge Hallyn "

start on runlevel [2345]
stop on runlevel [!2345]
console output

script
dosetup()
{
echo 0 > /etc/nreadtests
echo 0 > /etc/nwritetests
cd /root
which wget > /dev/null 2>&1 || {
sudo apt-get update
sudo apt-get -y install wget
}
wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.3.2.tar.bz2
shutdown -h now
exit 0
}
doreadtest()
{
cd /root
find / -type f > /dev/null 2>&1 || true
tar jvft linux-3.3.2.tar.bz2 > /dev/null
find / -type f > /dev/null 2>&1 || true
tar jvft linux-3.3.2.tar.bz2 > /dev/null
}
dowritetest()
{
rm -f /root/bigfile
dd if=/dev/zero of=/root/bigfile bs=1G count=2
sync || true
rm -f /root/bigfile
dd if=/dev/zero of=/root/bigfile bs=1G count=2
sync || true
rm -f /root/bigfile
dd if=/dev/zero of=/root/bigfile bs=1G count=2
sync || true
cd /root
tar jxf linux-3.3.2.tar.bz2
sync || true
rm -rf linux-3.3.2
sync || true
}

[ ! -f /etc/nreadtests ] && dosetup
nreads=`cat /etc/nreadtests`
nwrites=`cat /etc/nwritetests`
echo "nreads is $nreads"
echo "nwrites is $nwrites"
if [ $nreads -lt 5 ]; then
doreadtest
nreads=$((nreads+1))
echo $nreads > /etc/nreadtests
shutdown -h now
exit 0
fi
if [ $nwrites -lt 5 ]; then
dowritetest
nwrites=$((nwrites+1))
echo $nwrites > /etc/nwritetests
shutdown -h now
exit 0
fi
echo "iotest: at end (should not reach here until done with all runs)"
end script
}}}

On the first boot it downloads a kernel tarball, then shuts down. For the next 5 boots it does heavy read activity, and for the 5 boots after that heavy write activity, shutting down each time.
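The per-boot dispatch boils down to a tiny state machine over the two counter files. Here is an illustrative model of it (not the upstart job itself; the file paths are parameters here, where the guest uses `/etc/nreadtests` and `/etc/nwritetests`):

```shell
# Model of the guest's boot dispatch: report what the next boot will do.
next_action() {
    readfile=$1 writefile=$2
    # no counter file yet means the one-time setup has not run
    [ -f "$readfile" ] || { echo setup; return; }
    nreads=$(cat "$readfile")
    nwrites=$(cat "$writefile")
    if [ "$nreads" -lt 5 ]; then
        echo read
    elif [ "$nwrites" -lt 5 ]; then
        echo write
    else
        echo done
    fi
}
```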

On the host I am running the following script:

{{{
#!/bin/bash

#host fs: xfs, ext4, ext3, ext2, jfs, lvm, lvm snapshot
#interface: virtio, ide
#cache: unsafe, none, writeback, writethrough, directsync
#aio: threads, native

runperf()
{
# one setup, 5 read, 5 write tests
for i in `seq 1 11`; do
echo "run $i" >> perfout
echo "kvm -m 1024 $* -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no -vnc :1"
time kvm -m 1024 $* -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no -vnc :1
done
}

runtests()
{
disk=$1
drive=$2

for cache in unsafe none writeback writethrough directsync; do
for aio in threads native; do
for myif in virtio ide; do
echo "drive type $drive cache $cache aio $aio myif $myif starting"
echo "drive type $drive cache $cache aio $aio myif $myif starting" >> perfout
prealloc=""
# use a scratch variable so the qcow2pre case does not clobber
# $drive for later loop iterations
fmt=$drive
if [ $drive = "qcow2pre" ]; then
prealloc="-o preallocation=metadata"
fmt=qcow2
fi
echo Converting disk...
echo "qemu-img convert -f raw -O $fmt $prealloc /home/serge/base.img /srv/x.img"
echo "qemu-img convert -f raw -O $fmt $prealloc /home/serge/base.img /srv/x.img" >> perfout
qemu-img convert -f raw -O $fmt $prealloc /home/serge/base.img /srv/x.img
echo "Starting runs"
runperf -drive file=$disk,aio=$aio,cache=$cache,if=$myif,index=0
done
done
done
}

# NOTE need to create each hostfs by hand and run this script
# lvm and lvm snapshot count as more hostfs's
# NOTE AFAIK preallocation only works with qcow2. That is handled inside
# runtests
for format in qcow2 qed raw qcow2pre; do
echo "Starting format $format"
echo "Starting format $format" >> perfout
runtests /srv/x.img $format
done
}}}

I left a console logged in doing 'tail -f perfout' so I can monitor progress without perturbing the system. I think the first set of qcow2 runs took about 24 hours, so figure about four days for the full ext4 results.
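The interesting numbers are the `real` lines that the shell's `time` builtin prints for each boot. Assuming the console session is captured to a file (perfout itself only gets the echo lines), something like this could pull them out as plain seconds for later comparison. This is a sketch of mine, not part of the test harness; it assumes the builtin's usual `real<TAB>1m23.456s` format:

```shell
# Extract the "real" times from a captured console log, in seconds.
real_seconds() {
    awk '/^real/ {
        split($2, t, /[ms]/)              # t[1]=minutes, t[2]=seconds
        printf "%.3f\n", t[1] * 60 + t[2]
    }' "$1"
}
```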

I think I'll only check the many cache/aio/etc. options with ext4. Then I may pick one combination and use that in another script to test with lvm and lvm snapshot, and with a raw format image file on host filesystems of xfs, jfs, etc.

I'll update when the first round of results is done.
