Category Archives: LXC

Introducing the python LXC API

One of our top goals for LXC upstream work during the Ubuntu 12.10 development cycle was reworking the LXC library, turning it from a private library mostly used by the other lxc-* commands into something that’s easy for developers to work with and accessible from other languages through bindings.

Although the current implementation isn’t complete enough to consider the API stable and some changes will still happen to it over the months to come, we have pushed the initial implementation to the LXC staging branch on github and put it into the lxc package of Ubuntu 12.10.

The initial version comes with a python3 binding, packaged as python3-lxc; that’s what I’ll use below to give you an idea of what’s possible with the API. Note that as we don’t have full user namespace support at the moment, any code using the LXC API needs to run as root.

First, let’s start with the basics, creating a container, starting it, getting its IP and stopping it:

#!/usr/bin/python3
import lxc

# Define the container object (nothing is created on disk yet)
container = lxc.Container("my_container")

# Build a rootfs with the "ubuntu" template and a few template options
container.create("ubuntu", {"release": "precise", "architecture": "amd64"})

# Start it, print its IPs once the network is up, then stop and delete it
container.start()
print(container.get_ips(timeout=10))
container.shutdown(timeout=10)
container.destroy()

So, pretty simple.
It’s also possible to read and modify the container’s configuration using the .get_config_item(key) and .set_config_item(key, value) functions. For keys supporting multiple values, a list is returned, and .set_config_item accepts a list as the value.

Network configuration can be accessed through the .network property, which is essentially a list of all the container’s network interfaces. Properties can be changed that way or through .set_config_item, then saved to the config file with .save_config().
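To give a rough idea, here’s a minimal sketch of those functions. It assumes a container named “my_container” already exists; the lxc.utsname key is taken from the container configuration examples later on this page, and the .type/.link attribute names on network entries are my assumption based on the lxc.network.* configuration keys:

#!/usr/bin/python3
import lxc

container = lxc.Container("my_container")

# Read and update a single-value key (lxc.utsname is the container's hostname)
print(container.get_config_item("lxc.utsname"))
container.set_config_item("lxc.utsname", "my-renamed-host")

# Walk the network interfaces exposed by the .network property
# (attribute names assumed to mirror the lxc.network.* config keys)
for netif in container.network:
    print(netif.type, netif.link)

# Write the changes back to the container's config file
container.save_config()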

The API isn’t terribly well documented at this point; help messages are present for all functions, but there’s no generated HTML help yet.

To get a better idea of the functions exported by the API, you may want to look at the API test script. This script uses all the functions and properties exported by the python module so it should be a reasonable reference.


Easily ssh to your containers and VMs on Ubuntu 12.04

With the DNS changes in Ubuntu 12.04, most development machines running with libvirt and lxc end up running quite a few DNS servers.

These DNS servers work fine when queried from a system on their network, but aren’t integrated with the main dnsmasq instance and so won’t let you resolve your VM and containers from outside of their respective networks.

One way to solve that is to install yet another DNS resolver and use it to redirect queries between the various dnsmasq instances. That can quickly become tricky to set up and doesn’t integrate too well with resolvconf and NetworkManager.

Seeing a lot of people wondering how to solve that problem, I took a few minutes yesterday to come up with an ssh configuration that lets you access your containers and VMs by name.

The result is the following, to add to your ~/.ssh/config file:

# Containers on the default LXC bridge (dnsmasq listening on 10.0.3.1)
Host *.lxc
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  # Strip the .lxc suffix, ask the lxc dnsmasq for the IP, then connect to it
  ProxyCommand nc $(host $(echo %h | sed "s/\.lxc//g") 10.0.3.1 | tail -1 | awk '{print $NF}') %p

# VMs on the default libvirt bridge (dnsmasq listening on 192.168.122.1)
Host *.libvirt
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  # Same trick for .libvirt names, querying libvirt's dnsmasq instead
  ProxyCommand nc $(host $(echo %h | sed "s/\.libvirt//g") 192.168.122.1 | tail -1 | awk '{print $NF}') %p

After that, things like:

  • ssh user@myvm.libvirt
  • ssh ubuntu@mycontainer.lxc

will just work.

For LXC, you may also want to add a “User ubuntu” line to that config, as it’s the default user for LXC containers on Ubuntu.
If you configured your bridges with a non-default subnet, you’ll also need to update the IPs or add more sections to the config.
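For example, the *.lxc section could end up looking like this; the 10.0.4.1 address mentioned below is a made-up alternate subnet, adjust it to whatever your bridge actually uses:

Host *.lxc
  # Default user in Ubuntu LXC containers
  User ubuntu
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  # If your lxcbr0 uses a non-default subnet, point at its address instead
  # of 10.0.3.1 (e.g. 10.0.4.1, a hypothetical example)
  ProxyCommand nc $(host $(echo %h | sed "s/\.lxc//g") 10.0.3.1 | tail -1 | awk '{print $NF}') %p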

These entries also disable StrictHostKeyChecking and send known hosts to /dev/null, as my VMs and containers are local to my machine (reducing the risk of MITM attacks) and tend to exist only for a few hours before being replaced by completely different ones with different SSH host keys. Depending on your setup, you may want to remove those lines.


LXC in Ubuntu 12.04 LTS

Quite a few people have been asking for a status update on LXC in Ubuntu as of Ubuntu 12.04 LTS. This post is an overview of the work we did over the past six months, with pointers to more detailed blog posts for some of the new features.

What’s LXC?

LXC is a userspace tool that uses the kernel’s namespace and cgroup features to create system or application containers.

To give you an idea:

  • Feels like somewhere between a chroot and a VM
  • Can run a full distro using the “host” kernel
  • Processes running in a container are visible from the outside
  • Doesn’t require any specific hardware, works on all supported architectures

A libvirt driver for LXC exists (libvirt-lxc); however, it doesn’t use the “lxc” userspace tool, even though it uses the same kernel features.

Making LXC easier

One of the main focuses for 12.04 LTS was to make LXC dead easy to use. To achieve this, we’ve been working on a few different fronts, fixing known bugs and improving LXC’s default configuration.

Creating a basic container and starting it on Ubuntu 12.04 LTS is now down to:

sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-container
sudo lxc-start -n my-container

This will default to using the same release and architecture as your machine; additional options are available (--help will list them). Login/password are ubuntu/ubuntu.
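If you want a specific release or architecture instead of the defaults, template options go after a “--” separator, using the same -a/-r flags shown in the armhf post further down this page; for example:

# A 32-bit Precise container, regardless of the host's own defaults
sudo lxc-create -t ubuntu -n my-precise-i386 -- -r precise -a i386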

Another thing we worked on to make LXC easier to deal with was reducing the number of hacks required to turn a regular system into a container, down to zero.
Starting with 12.04, we don’t make any modifications to a standard Ubuntu system to get it running in a container.
It’s now even possible to take a raw VM image and have it boot in a container!

The ubuntu-cloud template also lets you get one of our EC2/cloud images and have it start as a container instead of a cloud instance:

sudo apt-get install lxc cloud-utils
sudo lxc-create -t ubuntu-cloud -n my-cloud-container
sudo lxc-start -n my-cloud-container

And finally, if you want to test the new cool stuff, you can also use juju with LXC:

# Generate an ssh key if you don't already have one
[ ! -f ~/.ssh/id_rsa.pub ] && ssh-keygen -t rsa
sudo apt-get install juju apt-cacher-ng zookeeper lxc libvirt-bin --no-install-recommends
sudo adduser $USER libvirtd
# The first bootstrap generates a sample ~/.juju/environments.yaml
juju bootstrap
# Switch from the ec2 provider to the local (LXC) provider
sed -i "s/ec2/local/" ~/.juju/environments.yaml
echo " data-dir: /tmp/juju" >> ~/.juju/environments.yaml
# Now actually bootstrap the local environment
juju bootstrap
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress

# To tail the logs
juju debug-log

# To get the IPs and status
juju status

Making LXC safer

Another main focus for LXC in Ubuntu 12.04 was to make it safe. John Johansen did amazing work extending apparmor to let us implement per-container apparmor profiles and prevent most known dangerous behaviours from happening in a container.

NOTE: Until we have user namespaces implemented in the kernel and used by LXC, we will NOT say that LXC is root-safe; however, the default apparmor profile as shipped in Ubuntu 12.04 LTS blocks any harmful action that we are aware of.

This mostly means that write access to /proc and /sys is heavily restricted. Mounting filesystems is also restricted, with only known-safe filesystems allowed by default. Capabilities are restricted too, preventing a container from loading kernel modules or controlling apparmor.
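For reference, the profile a container runs under is chosen with the lxc.aa_profile key in its configuration, the same key the nesting example below toggles; a minimal sketch, assuming the default Ubuntu 12.04 layout:

# /var/lib/lxc/my-container/config
# Containers run under the shipped default profile unless told otherwise.
# To opt out of apparmor confinement entirely (debugging only, unsafe):
#lxc.aa_profile = unconfined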

More details on this are available here:

Other cool new stuff

Emulated architecture containers

It’s now possible to use qemu-user-static with LXC to run containers of non-native architectures, for example:

sudo apt-get install lxc qemu-user-static
sudo lxc-create -n my-armhf-container -t ubuntu -- -a armhf
sudo lxc-start -n my-armhf-container

Ephemeral containers

Quite a bit of work also went into lxc-start-ephemeral, the tool that lets you start a copy of an existing container using an overlay filesystem, discarding any changes you make on shutdown:

sudo apt-get install lxc
sudo lxc-create -n my-container -t ubuntu
sudo lxc-start-ephemeral -o my-container

Container nesting

You can now start a container inside a container!
For that to work, you first need to create a new apparmor profile, as the default one doesn’t allow this for security reasons.
I already did that for you, so the few commands below will download it and install it in /etc/apparmor.d/lxc/lxc-with-nesting. This profile (or something close to it) will ship in Ubuntu 12.10 as an example of an alternate apparmor profile for containers.

sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-host-container
sudo wget https://www.stgraber.org/download/lxc-with-nesting -O /etc/apparmor.d/lxc/lxc-with-nesting
sudo /etc/init.d/apparmor reload
sudo sed -i "s/#lxc.aa_profile = unconfined/lxc.aa_profile = lxc-container-with-nesting/" /var/lib/lxc/my-host-container/config
sudo lxc-start -n my-host-container
(in my-host-container) sudo apt-get install lxc
(in my-host-container) sudo stop lxc
# Use a different subnet inside so the nested lxcbr0 doesn't clash with the host's
(in my-host-container) sudo sed -i "s/10.0.3/10.0.4/g" /etc/default/lxc
(in my-host-container) sudo start lxc
(in my-host-container) sudo lxc-create -n my-sub-container -t ubuntu
(in my-host-container) sudo lxc-start -n my-sub-container

Documentation

Outside of the existing manpages and the blog posts I mentioned throughout this post, Serge Hallyn did a very good job creating a whole section dedicated to LXC in the Ubuntu Server Guide.
You can read it here: https://help.ubuntu.com/12.04/serverguide/lxc.html

Next steps

Next week we have the Ubuntu Developer Summit in Oakland, CA, where we’ll be working on the plans for LXC in Ubuntu 12.10. We currently have two sessions scheduled:

If you want to make sure the changes you want land in Ubuntu 12.10, please join these two sessions. It’s possible to participate in the Ubuntu Developer Summit remotely, through IRC and audio streaming.

My personal hope for LXC in Ubuntu 12.10 is to have a clean liblxc library that can be used to create bindings and be used in languages like python. Working towards that goal should make it easier to do automated testing of LXC and cleanup our current tools.

I hope this post made you want to try LXC or, for existing users, helped you discover some of the new features that appeared in Ubuntu 12.04. We’re actively working on improving LXC both upstream and in Ubuntu, so don’t hesitate to report bugs (preferably with “ubuntu-bug lxc”).


Booting an Ubuntu 12.04 virtual machine in an LXC container

One thing that we’ve been working on for LXC in 12.04 is getting rid of any remaining LXC-specific hacks in our templates. This means that you can now run a perfectly clean Ubuntu system in a container without any changes.

To better illustrate that, here’s a guide on how to boot a standard Ubuntu VM in a container.

First, you’ll need an Ubuntu VM image in raw disk format. The next few steps also assume a default partitioning where the first primary partition is the root device. Make sure you have the lxc package installed and up to date and lxcbr0 enabled (the default with recent LXC).

Then run kpartx -a vm.img; this will create loop devices in /dev/mapper for your VM’s partitions. In the following configuration, I’m assuming /dev/mapper/loop0p1 is the root partition.
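For example (the loop0p1/loop0p2 names below depend on your image’s partition table):

# Map the image's partitions onto loop devices; -v prints the created mappings
sudo kpartx -av vm.img
# The root partition should now show up as something like /dev/mapper/loop0p1
ls /dev/mapper/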

Now write a new LXC configuration file (myvm.conf in my case) containing:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.utsname = myvminlxc

lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /dev/mapper/loop0p1
lxc.arch = amd64
lxc.cap.drop = sys_module mac_admin

lxc.cgroup.devices.deny = a
# Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
#lxc.cgroup.devices.allow = c 4:0 rwm
#lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
#fuse
lxc.cgroup.devices.allow = c 10:229 rwm
#tun
lxc.cgroup.devices.allow = c 10:200 rwm
#full
lxc.cgroup.devices.allow = c 1:7 rwm
#hpet
lxc.cgroup.devices.allow = c 10:228 rwm
#kvm
lxc.cgroup.devices.allow = c 10:232 rwm

The lxc.arch, lxc.rootfs and lxc.network.link values may need updating if you’re not using the same architecture, partition scheme or bridge as I am.

Then finally, run: lxc-start -n myvminlxc -f myvm.conf

And watch your VM boot in an LXC container.

I did this test with a desktop VM using network manager, so it didn’t mind LXC’s random MAC address; server VMs might get stuck for a minute at boot time because of it though.
In that case, either clean /etc/udev/rules.d/70-persistent-net.rules or set “lxc.network.hwaddr” to the same MAC address as your VM.
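That’s a single line in the container configuration; the address below is a placeholder, use your VM’s actual MAC:

# Reuse the VM's existing MAC so udev's persistent-net rules still match
lxc.network.hwaddr = 52:54:00:12:34:56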

Once done, run kpartx -d vm.img to remove the loop devices.


Ever wanted an armhf container on your x86 machine? It’s now possible with LXC in Ubuntu Precise

It took a while to get some apt resolver bugs fixed, a few packages marked for multi-arch and some changes made to the Ubuntu LXC template, but since yesterday, you can now run (using an up-to-date Precise):

  • sudo apt-get install lxc qemu-user-static
  • sudo lxc-create -n armhf01 -t ubuntu -- -a armhf -r precise
  • sudo lxc-start -n armhf01
  • Then log in with root as both login and password

And enjoy an armhf system running on your good old x86 machine.

Now, obviously it’s pretty far from what you’d get on real ARM hardware.
It’s using qemu’s userspace CPU emulation (qemu-user-static), so it won’t be particularly fast, will likely use a lot of CPU and may give results pretty different from what you’d expect on real hardware.

Also, because of limitations in qemu-user-static, a few packages from the “host” architecture are installed in the container. These are mostly anything that requires the use of ptrace (upstart) or of netlink (mountall, iproute and isc-dhcp-client).
This is the bare minimum I needed to install to get the rest of the container working with armhf binaries. I obviously didn’t test everything, and I’m sure quite a few other packages will fail in such an environment.

This feature should be used as an improvement on top of a regular armhf chroot using qemu-user-static and not as a replacement for actual ARM hardware (obviously), but it’s cool to have around and nice to show what LXC can do.

I confirmed it works for armhf and armel; powerpc should also work, though it failed to debootstrap when I tried it earlier today.

Enjoy!
