Update: Added a Hardy i386 template, mentioned the need for bridge-utils and fixed a typo (s/addbr/brctl addbr/g)
This (quite long) post is about LXC (Linux Containers); an example of its usage on Karmic is provided after the introduction to contextualization.
Most of you are probably already familiar with “usual” virtualization such as KVM, VirtualBox or VMware. These are now extremely fast ways to do “full” virtualization of an OS on a host running either the same OS or a completely different one.
In Ubuntu, the most widely used is probably KVM, with libvirt and virt-manager as frontends.
At Revolution Linux, we have literally hundreds of virtual machines for each of our customers, and we noticed that they are all Ubuntu virtual machines running on Ubuntu hosts. Running them in a “full” virtualization environment therefore adds unneeded overhead and makes resource assignment quite difficult (you can’t easily change the CPU/RAM/disk/NIC of a running virtual machine).
So, what we are currently doing is using contextualization instead of regular virtualization.
Contextualization can, put simply, be seen as improved chroots. These “chroots” are called containers and work just like regular virtual machines: inside them you have your own network interface, can apply disk/CPU/RAM quotas, and can start/stop/suspend as many of them as you want.
All the quotas and restrictions can be changed on the fly without needing any restart, because a container is technically just a set of processes running on the host, not a single process as with virtualization.
It also means that you can list, kill or execute a process in any of these containers directly from the host (a container obviously can’t access another’s processes).
The technology we have been using for more than a year now is OpenVZ (an open source implementation of Virtuozzo), which is basically a huge patchset on top of the Linux kernel and in Ubuntu only exists for Hardy (8.04 LTS).
What I’ve been looking at more recently, and hope to have working correctly in Lucid (10.04 LTS), is LXC. LXC is basically the same as OpenVZ except that it’s in the upstream kernel and uses already existing kernel features such as “cgroups”.
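Since LXC sits on top of cgroups, the host can retune a running container simply by writing to the cgroup filesystem, with no restart involved. A rough sketch, assuming cgroups are mounted at /dev/cgroup (as done later in this post), that LXC exposes each container as a subdirectory named after it, and that the cpu and memory controllers are enabled:

```shell
# Hypothetical example: adjust a running container's resources
# on the fly by writing to its cgroup directory.
tune_container() {
    cg=/dev/cgroup/$1

    # Halve its CPU weight relative to the default of 1024
    echo 512 > "$cg/cpu.shares"

    # Cap its memory at 256 MB (needs the memory controller,
    # which may not be enabled on every kernel)
    echo $((256 * 1024 * 1024)) > "$cg/memory.limit_in_bytes"
}

# Usage (as root, for a container named "ubuntu"):
#   tune_container ubuntu
```

The container names, mount point and limits here are illustrative; the point is simply that tuning happens through ordinary file writes rather than a VM reconfiguration cycle.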
LXC is also supported by libvirt (although that support isn’t working in Karmic), which will let users play with it just like any other virtualization technology, using their existing scripts and interfaces.
Here’s a quick howto to make it work on Karmic with an Ubuntu 8.04 amd64 container (I’ve had issues making Karmic itself work in a container):
- Install bridge-utils: sudo apt-get install bridge-utils
- Install LXC from my PPA (upstream snapshot) : https://launchpad.net/~stgraber/+archive/ppa/+packages
- Create /var/lib/lxc/: sudo mkdir -p /var/lib/lxc/
- amd64 template (if your computer is running Ubuntu 64bit): Get http://www.stgraber.org/download/lxc-ubuntu-8.04-amd64.tar.gz (Hardy amd64 image)
- i386 users (if your computer is running Ubuntu 32bit): Get http://www.stgraber.org/download/lxc-ubuntu-8.04-i386.tar.gz (Hardy i386 image)
- Uncompress it in /var/lib/lxc/ (will create an ubuntu directory containing a configuration file and a root directory)
- Mount cgroups somewhere: sudo mkdir /dev/cgroup && sudo mount -t cgroup none /dev/cgroup
- Create a bridge with: sudo brctl addbr br0
- Set an IP on the bridge: sudo ifconfig br0 192.168.2.1 (the VE will be 192.168.2.2 by default)
- Start the VE: lxc-start -d -n ubuntu
- Enter the VE: “lxc-console -n ubuntu” or “ssh root@192.168.2.2” (root password is “password”)
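For convenience, the steps above can be collected into one script. This is only a sketch of the howto: the template URL, paths and the default 192.168.2.x addressing are the ones quoted in this post, and everything is expected to run as root:

```shell
#!/bin/sh
# Sketch of the Karmic howto above (run as root).

setup_lxc_hardy() {
    set -e
    apt-get install -y bridge-utils

    # Container storage and the Hardy amd64 template from the post
    mkdir -p /var/lib/lxc
    wget -O /tmp/lxc-hardy.tar.gz \
        http://www.stgraber.org/download/lxc-ubuntu-8.04-amd64.tar.gz
    tar -xzf /tmp/lxc-hardy.tar.gz -C /var/lib/lxc/

    # Mount the cgroup filesystem LXC relies on
    mkdir -p /dev/cgroup
    mount -t cgroup none /dev/cgroup

    # Bridge for the container's network interface
    brctl addbr br0
    ifconfig br0 192.168.2.1 up

    # Start the container in the background
    lxc-start -d -n ubuntu
}

# Call setup_lxc_hardy as root, then enter the VE with
# "lxc-console -n ubuntu" or "ssh root@192.168.2.2"
# (root password "password").
```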
The VE (virtual environment) configuration file is in: /var/lib/lxc/ubuntu/config
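For reference, a minimal configuration of that era looks roughly like the fragment below. The key names are the classic LXC ones; the exact contents of the shipped template may differ, and the root filesystem itself lives in the root directory next to the config:

```
lxc.utsname = ubuntu
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 192.168.2.2/24
```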
Additional information can be found on:
Also, I plan to have a session about it at UDS-Lucid in Dallas.
Hey there,
I’ve been trying the small howto and didn’t seem to get it to work; will the provided amd64 image work on a 32bit install (Karmic desktop)?
I’d be interested in creating an image from scratch (I’ve managed to create a rootfs from a mirror); how can I approach that?
Thanks!
Hi Ronen, thanks for your comment.
The amd64 image can’t run on a 32bit install.
I uploaded another image which is 32bit.
If you want to create your own image, the easiest way is to use debootstrap (from the package of the same name) to generate the initial installation, then install an SSH server inside it and use it as the root directory for your LXC container.
We have developed a few scripts for OpenVZ (vz-utils on Launchpad), but these don’t seem to work well for LXC and will need to be updated.
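The debootstrap route described above looks roughly like this sketch; the container name, target path, architecture and mirror are all example values, not anything prescribed by the post:

```shell
#!/bin/sh
# Sketch: build a minimal Hardy root filesystem for an LXC
# container with debootstrap, then add an SSH server inside.

build_rootfs() {
    set -e
    # Hypothetical destination for the new container's root
    target=/var/lib/lxc/mycontainer/rootfs

    apt-get install -y debootstrap
    debootstrap --arch i386 hardy "$target" \
        http://archive.ubuntu.com/ubuntu

    # Install sshd inside the new root so you can log in later
    chroot "$target" apt-get install -y openssh-server
}

# Run build_rootfs as root, then point your container's
# configuration at the new root directory.
```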
Are you going to make sure the lxc userspace is in shape for Lucid?
My plan is to make sure LXC works correctly with libvirt and is included in Main so everyone can use it as easily as KVM.
The actual userspace for Lucid doesn’t worry me too much. The main issue at the moment is the relation between an LXC container and upstart/udev. That’s something I had to work around for OpenVZ in the past; unfortunately the same workaround doesn’t work for LXC.
Once LXC starts to be maintained and integrated in Ubuntu (in the very near future, I hope), fixing mountall and upstart so they don’t hang in containers will probably be the next step, so we don’t have to “patch” the OS to work in a container.
This is good news, because OpenVZ seems to be a dead end… We are currently searching for a viable open source replacement. Right now KVM seems to be the only viable option for the future, but for our needs full virtualization is overkill. I hope LXC will fill the void.
Hi Stephane,
I have read your posts about LXC and have successfully set up the LXC guests (Hardy amd64 and i386) which you submitted.
Thank you for sharing this.
I have tried to set up an LXC Ubuntu Karmic/Lucid guest but with no success (stuck in the upstart/mountall procedure; already reported here: https://bugs.launchpad.net/ubuntu/+source/mountall/+bug/461438).
On your microblogging site I found that you have had success running a Lucid guest. How did you resolve the udev/upstart issues? Can you share this configuration with us?
My lucid test was quickly done at the developer summit.
IIRC it was with libvirt 0.7.2 and a regular debootstrap of lucid.
That was over a month ago, though; I’ll need to do more tests with what’s currently in Lucid (it seems 0.7.2 will be used unless we have a very good reason to go to 0.7.4 or 0.7.5) and see how Lucid-on-Lucid works (now that I run Lucid on my laptop).
Hi,
I’ve made a suite of scripts to build containers and (almost) solved the Karmic issue.
See karmic related scripts at:
http://lxc-provider.git.sourceforge.net/git/gitweb.cgi?p=lxc-provider/lxc-provider;a=tree;f=libexec/cache_helpers;h=8c6c17d6fc779764e84c02488b21a4aa8b4df7b7;hb=HEAD
Thanks to a vserver doc:
http://linux-vserver.org/Upstart_issues
I have a little problem with the network configuration: I have to restart networking manually, but I think this is not a big issue.
Regards
Hi guys,
Just to tell you, a little LXC experiment of mine, named ‘vzgot’, has reached a working stage.
I was able to run a wide range of distributions (rh7.3, rh8.0, rh9, fc2 to fc12, CentOS-4.[6,7,8], CentOS-5.[2,3,4] and rhel-4) on a recent unmodified kernel (2.6.31.6-162.fc12).
I was able to rpmbuild clamav on those 33 distributions, so I am confident enough about reliability (the load reached 70.0 on the hardware host, a Dell 2800).
VE networking is working fine (VE yum usage is fully working) as long as you have set up the bridge interface (br0) and use quagga.
The next step is to work on resource contention (cgroups).
You can access the RPM at:
ftp://ftp.safe.ca/pub/linux/vzgot/
The RPM should be comprehensive enough to allow you to duplicate these results.
Please give me feedback.
Hi,
I’ve followed this blog’s tutorial on LXC and got it working; however, I notice my container cannot access the internet. I’ve updated /etc/hosts and /etc/resolv.conf and still no luck.
I’m assuming quagga will be able to route the packets from 192.168.2.2 through the bridge (192.168.2.1) to the router (say 192.168.1.1) and out to the web.
Do you have a quick example on how to get quagga going?
Cheers
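For what it’s worth, a routing daemon like quagga isn’t strictly required just to reach the internet; plain NAT on the host is a common alternative. A sketch, assuming the host’s uplink interface is eth0 (an assumption, not stated in the post) and the bridge holds 192.168.2.1 as in the howto:

```shell
# Hypothetical alternative to quagga: masquerade container
# traffic behind the host's own external address.
enable_container_nat() {
    set -e
    # Let the host forward packets between br0 and eth0
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # Rewrite outgoing container traffic (192.168.2.0/24)
    # to the host's address on its uplink
    iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 \
        -j MASQUERADE
}

# Run enable_container_nat as root; containers using
# 192.168.2.1 as their default gateway (and a working
# resolv.conf) can then reach the outside.
```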
It’s looking very nice. Thanks for the good work!
Your templates are working fine. I’m trying to convert a Xen VM into LXC, but it doesn’t work.
lxc-info -n guest
'guest' is RUNNING
Pinging the guest works, but I can’t get a console:
# lxc-console -n guest
Type <Ctrl+a q> to exit the console
In the syslog of the guest system I see only:
init: tty2 main process (609) killed by TERM signal
init: tty3 main process (610) killed by TERM signal
init: tty1 main process (763) killed by TERM signal
init: tty6 main process (612) killed by TERM signal
I also cannot connect via SSH; port 22 is not open.
How can I convert a VM to LXC?
Thanks for the help.
I have been working with LXC and am happy to learn it is under development.
I posted a few blogs on LXC :
http://blog.bodhizazen.net/linux/lxc-linux-containers/
http://blog.bodhizazen.net/linux/lxc-configure-ubuntu-karmic-containers/
http://blog.bodhizazen.net/linux/lxc-configure-ubuntu-lucid-containers/
I have space on a server for LXC containers, similar to OpenVZ:
http://bodhizazen.fivebean.net/LXC/
Right now I only have a few, but as time allows I will add to it.
Hopefully some of this information will help others.
hi
with lxc is it possible ( now or in the future ) to have 3d accel ?
Ubuntu 10.04 Beta 1.
Why do I get this?
root@srv01:~# lxc-start -n proxy1
lxc-start: No such file or directory - failed to mount '/home/lxc/proxy1/rootfs.ubuntu'->'/tmp/lxc-rdqdyBc'
lxc-start: failed to set rootfs for 'proxy1'
lxc-start: failed to setup the container
root@srv01:~# lxc-start -n proxy1
lxc-start: No such file or directory - failed to mount '/home/lxc/proxy1/rootfs.ubuntu'->'/tmp/lxc-rXeJjg8'
lxc-start: failed to set rootfs for 'proxy1'
lxc-start: failed to setup the container
root@srv01:~# lxc-start -n proxy1
lxc-start: No such file or directory - failed to mount '/home/lxc/proxy1/rootfs.ubuntu'->'/tmp/lxc-rs2TxjR'
lxc-start: failed to set rootfs for 'proxy1'
SSH: PTY allocation request failed on channel 0
Where is my tty?
lxc-ps --lxc
CONTAINER PID TTY TIME CMD
prizm1 1330 ? 00:00:00 init
prizm1 1406 ? 00:00:00 upstart-udev-br
prizm1 1420 ? 00:00:00 udevd
prizm1 1427 ? 00:00:00 getty
prizm1 1434 ? 00:00:00 getty
prizm1 1700 ? 00:00:00 sshd
prizm1 1702 ? 00:00:00 apache2
prizm1 1722 ? 00:00:00 getty
prizm1 1723 ? 00:00:00 apache2
prizm1 1724 ? 00:00:00 apache2
prizm1 1725 ? 00:00:00 apache2
prizm1 1726 ? 00:00:00 apache2
prizm1 1727 ? 00:00:00 apache2
prizm1 1903 ? 00:00:00 apache2
—————————————————
Script for /dev
#!/bin/bash
# bodhi.zazen’s lxc-config
# Makes default devices needed in lxc containers
# modified from http://lxc.teegra.net/
ROOT=$(pwd)
DEV=${ROOT}/rootfs/dev
if [ "$ROOT" = '/' ]; then
printf "