deploying multiple (connected) lxc compute nodes – with juju

This post got delayed a bit due to a few unexpected complications. First, it turns out that you cannot connect GRE tunnels in Amazon’s EC2 over the instances’ private addresses; you must use the public addresses. Second, quantal removed the openvswitch-datapath-dkms package because the openvswitch kernel module is now available upstream. However, the upstream openvswitch module does not yet provide GRE tunnels configurable through the db. The openvswitch-datapath-dkms package will hopefully be reintroduced soon, but meanwhile we will use it from the inestimable James Page’s “junk” ppa.

Oh, but first things second. What are we doing today? We’re going to use juju to fire off a set of lxc compute nodes, pre-populated with LVM-backed pristine containers which can be very quickly cloned, and which will be able to communicate over an openvswitch private network no matter which compute node hosts them.

My use case for this is to set up for a long, varied bug triage and replication session. The initial setup takes about 10-20 minutes (much longer on Amazon, though setting a local mirror in /etc/default/lxc should speed that up there), after which starting a new container takes about 3 seconds.
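
Something like the following in /etc/default/lxc should do for the mirror (a sketch: the mirror address is a placeholder for your own archive mirror, which the ubuntu lxc template picks up via the MIRROR variable):

# /etc/default/lxc: point container creation at a nearby archive mirror
MIRROR="http://archive.example.com/ubuntu"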

There are two bzr trees involved. The actual juju charm is at lp:~serge-hallyn/charms/quantal/ovs-lxc/trunk. It relates one master compute node to any number of slave nodes. The master node is used just like the slaves, but is set apart as the central openvswitch hub: every slave has a GRE tunnel to the master, and slaves talk to each other over two GRE links (through the master). You’ll want to check it out under ~/charms/quantal:

mkdir -p ~/charms/quantal
cd ~/charms/quantal
bzr branch lp:~serge-hallyn/charms/quantal/ovs-lxc/trunk ovs-lxc
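
For the curious, the tunnel wiring on each slave amounts to something like this sketch (the bridge and port names are assumptions, not necessarily what the charm uses; per the note above, on EC2 the remote_ip must be the master’s public address):

ovs-vsctl add-br br0
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=<master-public-ip>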

The other bzr tree is lp:~serge-hallyn/+junk/jujulxcscripts. The first script there is ‘juju-deploy-lxc’, which accepts a number of slaves to start, bootstraps juju, deploys the nodes, and relates each slave to the master. Finally it runs ‘grabnodes’, which gathers the information used by the other scripts.
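
For reference, the deployment the script automates boils down to roughly the following (a sketch only; the service names and relation endpoints here are guesses, and juju-deploy-lxc handles all of it for you):

juju bootstrap
juju deploy --repository ~/charms local:quantal/ovs-lxc master
juju deploy --repository ~/charms local:quantal/ovs-lxc slave
juju add-unit slave                           # once per additional slave
juju add-relation master:master slave:slave   # endpoint names are guesses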

Next, ‘startcontainer’ clones and starts a new container, rotating round-robin among the master and slaves on each invocation. With no arguments it starts an amd64 quantal container. It can also be called as

startcontainer precise

or

startcontainer quantal i386

for the obvious result.
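
The 3-second startup comes from the LVM backing: cloning a pristine container is just an LVM snapshot. A rough sketch of the equivalent manual steps (the container names here are assumptions):

lxc-clone -s -o quantal-amd64 -n c0   # -s snapshots the pristine container's LV
lxc-start -n c0 -d                    # boot the clone in the background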

Finally, ‘sshcontainer (n)’ will ssh into the (n)th container you’ve started, starting with 0. The scripts don’t get too fancy or try to do too much – if you want much more, you might actually want to deploy openstack 🙂
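
For example, to get a shell in the first container you cloned:

sshcontainer 0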

I do hope at some point to expand this to use a (juju-deployed) ceph cluster for the container backing store. The charm is not as flexible as it ought to be: it expects /dev/vdb or /dev/xvdb to be a spare drive mounted on /mnt at instance startup. But that is good enough to work for me on Amazon EC2 as well as on an openstack-based cloud, which is all I need to make this useful for myself.
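
In other words, the expected starting state and the storage prep look roughly like this (a guess at the steps, with an assumed volume group name; check the charm hooks for the real commands):

# EC2 typically presents the spare drive as /dev/xvdb, openstack as /dev/vdb
umount /mnt
pvcreate /dev/xvdb
vgcreate lxc /dev/xvdb   # the VG name here is an assumption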

It won’t work by default on a local (lxc-backed) juju config, but I will play with that as an exercise to investigate what sorts of site customizations we should support in juju-lxc. In particular, we’ll need to (a) be able to use lxc mount hooks (so cgroups can be mounted in the container) and (b) support custom apparmor profiles.


4 Responses to deploying multiple (connected) lxc compute nodes – with juju

  1. Anirup Dutta says:

    I have to say, before reading this post I knew nothing about lxc or juju. Thanks for the great posts. My questions may be a bit noobish. I ran the scripts and it took me a long time to set everything up. Can we not just start with a custom ami which has the openvswitch module installed and has an lxc container installed in it beforehand? If we can handle the networking part of the lxc then it becomes much easier. Next, I checked with juju status and the setup seems perfect, but I can’t ssh; it always says public key denied. Also, instead of having a spoke-like configuration, if I wish to have a chained configuration (one after the other), which file would be the best to have a look at?

    • s3hh says:

      1. I always try to start from one of the official ubuntu cloud image amis for reproducibility, but yes, you could of course start with a custom ami.

      2. Do you mean you can’t ssh into the cloud instances started by juju, or that you can’t ssh into the containers you’ve created? The latter shouldn’t have pubkey auth (just use ubuntu/ubuntu), so you probably mean the former. In your .juju/environments.yaml file, you can have a line like:

      authorized_keys: `cat id_rsa.pub`

      Then just ssh -i id_rsa ubuntu@node-ip

      3. Lastly – to change the ovs configuration, in the ovs-lxc juju charm edit the ovs-vsctl commands in the files hooks/master-relation-changed, hooks/master-relation-departed, and hooks/slave-relation-changed.

      • Anirup Dutta says:

        Thanks for the response. I guess there was some problem with my public key; maybe permissions-wise it was too open. I did have it in my juju environments file.

        I have a general question. In the current setup, when I send traffic from one of the slaves to another, it goes through the master node. However, I want all traffic coming to the master node to go through the first lxc container launched on the master node. To make it simple, let’s assume we have only one lxc container in the master node. Normally openvswitch is clever enough that whenever it sees the packets it routes them correctly to the destination.

  2. s3hh says:

    @Anirup,

    I think you’re saying you want that container on the master node to see all traffic? I don’t know how to do that, but one possibility might be to have each other container’s nic traffic mirrored to a nic on the snooping container. Check the ‘port mirroring’ section in the ovs-vsctl man page.
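
    Something along these lines, adapted from the example in that man page, might work (the bridge and port names are placeholders for the containers’ veth ports on the master):

    ovs-vsctl -- set Bridge br0 mirrors=@m \
        -- --id=@src get Port vethOther \
        -- --id=@out get Port vethSnoop \
        -- --id=@m create Mirror name=snoop select-src-port=@src \
           select-dst-port=@src output-port=@out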
