libvirt defaults (and openvswitch bridge performance)

The libvirt-bin package in Ubuntu installs a default NATed virtual network
whose bridge is virbr0. This isn’t the best choice for everyone, but it “just
works” everywhere. It also provides some simple protection: the VMs aren’t
exposed on the network for any passing attacker to see.
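
For reference, you can inspect that default network on any libvirt host; the
addresses mentioned below are just the stock defaults and may differ on a
customized system:

    # Show the NATed "default" network that libvirt-bin sets up (bridge virbr0).
    virsh net-dumpxml default
    # Typically this reports <forward mode='nat'/>, <bridge name='virbr0' .../>
    # and a 192.168.122.0/24 subnet with a dnsmasq-managed DHCP range.
    virsh net-list --all        # confirm the network is active and autostarted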

Two alternatives are sometimes suggested. One is to simply default to a
non-NATed bridge. The biggest reason we can’t do this is that it would break
users with wireless cards: most wireless NICs refuse to be enslaved to a
bridge. Another issue is that instead of simply tacking something new onto the
existing configuration, we would have to usurp the default network interface
into our new bridge, and it’s impossible to guess all the ways users might
have already customized their network.
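
To make the “usurping” concrete, here is roughly what a non-NATed default
would have to automate, done by hand; eth0 and br0 are placeholder names, and
on most wireless NICs the addif step simply fails:

    # Create a plain Linux bridge and move the primary NIC under it.
    sudo brctl addbr br0
    sudo brctl addif br0 eth0    # refused ("Operation not supported") on most wireless NICs
    sudo dhclient br0            # the bridge, not eth0, must now own the IP address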

The other alternative is to use an openvswitch bridge. This has the same
problems as the Linux bridge: you still can’t attach a VM NIC to an
openvswitch-bridged wireless NIC, and we would still be modifying the default
network.
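
For completeness, the openvswitch equivalent of the manual setup above looks
something like this (again, ovsbr0 and eth0 are placeholders):

    # Create an openvswitch bridge and add the physical NIC as a port.
    sudo ovs-vsctl add-br ovsbr0
    sudo ovs-vsctl add-port ovsbr0 eth0
    sudo ovs-vsctl show          # verify the bridge and its port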

However, the suggestion did make me wonder: how would ovs bridges compare to
Linux ones in terms of performance? I’d have expected them to be slower (as a
tradeoff for much greater flexibility), but I was surprised when I was told
that ovs bridges are expected to perform better. So I set up a quick test, and
sure enough!

I set up two laptops running saucy, connected over a physical link. On one of
them I installed a saucy VM. Then I ran iperf over the physical link from the
other laptop to the VM. When the VM was attached using a Linux bridge, I got:

830 Mbit/s
757 Mbit/s
755 Mbit/s
827 Mbit/s
821 Mbit/s

When I instead used an openvswitch bridge, I got:

925 Mbit/s
925 Mbit/s
925 Mbit/s
916 Mbit/s
924 Mbit/s
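
The measurement itself was nothing fancy; a minimal sketch of the commands,
with a placeholder address for the VM:

    # In the VM: run an iperf server.
    iperf -s
    # On the other laptop: push traffic at the VM over the physical link.
    iperf -c 192.168.122.100 -t 30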

So, if we’re going to pick a new default, openvswitch seems like the way to
go. I’m still loath to make changes to the default, but a script (sitting next
to libvirt-migrate-qemu-disks) which users can optionally run to do the
gruntwork for them might be workable.
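
Whatever shape such a script takes, the per-guest change it would make is
small; a hedged sketch of the end state, assuming a bridge named ovsbr0 and a
guest named devvm (both placeholders):

    # Point the guest's NIC at the ovs bridge in its domain XML:
    virsh edit devvm
    # ...changing the <interface> element to something like:
    #   <interface type='bridge'>
    #     <source bridge='ovsbr0'/>
    #     <virtualport type='openvswitch'/>
    #   </interface>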

4 Responses to libvirt defaults (and openvswitch bridge performance)

  1. Zhengpeng says:

    Tried to get lxc to work with ovs in saucy, but failed; not sure if it’s fixed or not.

    • s3hh says:

      You say “not sure if fixed” – did openvswitch fail, or did you not succeed in setting it up right?

      Note that there are currently two openvswitch drivers. The one shipped with the upstream kernel does not support GRE tunnels, which at least in my own usage of ovs in lxc I needed. If you want to connect two openvswitch bridges on separate machines using a GRE tunnel, then you need to run the out of tree drivers.
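
      For anyone wanting to try it, the tunnel setup is a single command per
      host (ovsbr0 and the remote address are placeholders):

        sudo ovs-vsctl add-port ovsbr0 gre0 -- \
            set interface gre0 type=gre options:remote_ip=<other-host-ip>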

      • Zhengpeng says:

        Trying to boot a container after replacing the Linux bridge with ovs’s:
        sudo lxc-start -n devbox
        lxc-start: failed to attach 'vethx1RO3J' to the bridge 'ovsbr0': Operation not supported
        lxc-start: failed to create netdev
        lxc-start: failed to create the network
        lxc-start: failed to spawn 'devbox'

        I don’t use the GRE tunnel, and the ovs on my system works fine with libvirt+qemu actually.

      • Zhengpeng says:

        Figured out the issue: it’s due to the deprecation of brcompat. There is no Linux bridge compatibility module any more, but lxc still uses the brctl-style ioctl to attach interfaces, which an ovs bridge doesn’t support.
