This is post 1 out of 10 in the LXC 1.0 blog post series.
So what’s LXC?
Most of you probably already know the answer to that one, but here it goes:
“LXC is a userspace interface for the Linux kernel containment features.
Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.”
I’m one of the two upstream maintainers of LXC along with Serge Hallyn.
The project is quite actively developed with milestones every month and a stable release coming up in February. It’s so far been developed by 67 contributors from a wide range of backgrounds and companies.
The project is mostly developed on github: http://github.com/lxc
We have a website at: http://linuxcontainers.org
And mailing lists at: http://lists.linuxcontainers.org
So what’s that 1.0 release all about?
Well, simply put it’s going to be the first real stable release of LXC and the first we’ll be supporting for 5 years with bugfix releases. It’s also the one which will be included in Ubuntu 14.04 LTS to be released in April 2014.
It’s also going to come with a stable API and a set of bindings, quite a few interesting new features which will be detailed in the next few posts and support for a wide range of host and guest distributions (including Android).
How to get it?
I’m assuming most of you will be using Ubuntu. For the next few posts, I’ll myself be using the current upstream daily builds on Ubuntu 14.04 but we maintain daily builds on 12.04, 12.10, 13.04, 13.10 and 14.04, so if you want the latest upstream code, you can use our PPA.
Alternatively, LXC is also directly in Ubuntu and quite usable since Ubuntu 12.04 LTS. You can choose to use the version which comes with whatever release you are on, or you can use one of the backported versions we maintain.
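To spell out the PPA route, adding the upstream daily builds looks roughly like this (the PPA name ubuntu-lxc/daily is an assumption on my part; check Launchpad for the exact one):

```shell
# Sketch of the PPA route (PPA name assumed to be ubuntu-lxc/daily; verify on Launchpad).
# Wrapped in a function since it needs a real Ubuntu host and root access to run.
install_lxc_daily() {
    sudo apt-get install -y software-properties-common
    sudo add-apt-repository -y ppa:ubuntu-lxc/daily
    sudo apt-get update
    sudo apt-get install -y lxc
}
```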
If you want to build it yourself, you can do (not recommended when you can simply use the packages for your distribution):
git clone git://github.com/lxc/lxc
cd lxc
sh autogen.sh
# You will probably want to run the configure script with --help and then set the paths
./configure
make
sudo make install
What about that first container?
Oh right, that was actually the goal of this post wasn’t it?
Ok, so now that you have LXC installed, hopefully using the Ubuntu packages, it’s really as simple as:
# Create a "p1" container using the "ubuntu" template and the same version of Ubuntu # and architecture as the host. Pass "-- --help" to list all available options. sudo lxc-create -t ubuntu -n p1 # Start the container (in the background) sudo lxc-start -n p1 -d # Enter the container in one of those ways## Attach to the container's console (ctrl-a + q to detach) sudo lxc-console -n p1 ## Spawn bash directly in the container (bypassing the console login), requires a >= 3.8 kernel sudo lxc-attach -n p1 ## SSH into it sudo lxc-info -n p1 ssh ubuntu@<ip from lxc-info> # Stop the container in one of those ways ## Stop it from within sudo poweroff ## Stop it cleanly from the outside sudo lxc-stop -n p1 ## Kill it from the outside sudo lxc-stop -n p1 -k
And there you go, that’s your first container. You’ll note that everything usually just works on Ubuntu. Our kernels have support for all the features that LXC may use and our packages set up a bridge and a DHCP server that the containers will use by default.
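If you want to see that default setup for yourself, something like this works on an Ubuntu host (the file path is what the Ubuntu packages of that era used, so treat it as an assumption):

```shell
# Wrapped in a function: needs an Ubuntu host with the lxc package installed.
show_lxc_defaults() {
    brctl show lxcbr0                # the bridge the containers attach to
    grep -v '^#' /etc/default/lxc    # bridge name, subnet and DHCP range
}
```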
All of that is obviously configurable and will be covered in the coming posts.
I ran this intro article past a couple of non-techies and they couldn’t get past the quoted intro paragraph. The problem seems to be that the phrase “kernel containment features” is meaningless to anyone without a fair degree of OS-level experience. I’d like to point out to Linux novices why this is such a cool feature, so if anyone could come up with the shortest bulletproof LXC intro explanation I, for one, would really appreciate it.
Yeah, that’s always a tough one…
If they’re already familiar with chroots, then it’s usually fairly easy to define LXC as chroots on steroids, with their own networking and an isolated view of the process tree (though in practice it does quite a lot more than that).
If they’re not, then a vague approximation would be a zero-overhead, Linux-only virtualization technology that’s extremely flexible but requires you to share your kernel with the host.
“kernel containment features” to me is “it’s really 1 kernel, but when you use lxc it’s like having a separate computer (especially w.r.t. security)”
I am definitely a noob, but here’s my take on LXC.
Near-metal-speed virtualisation that shares the same kernel space, without any of the hassle of defining disk or memory space.
Bugger, maybe that’s not quite right: just damn easy virtualisation where you can start partitioning processes to make really stable solutions connected at the network level.
Bugger, maybe that’s not right either; it’s just really exciting, that’s all.
Stephane, are you going to do an example with overlays (unionfs or aufs) where a “golden image” can be used multiple times, so LXC can demonstrate a tiny resource footprint in disk and memory?
I wouldn’t try to explain it, just show the results.
Apologies for replying to myself.
I would love to see a setup of many ubuntu servers all performing specific roles.
MySQL, couple of instances of apache, postfix (mail), maybe a desktop via freenx or other, and any other servers you can think of.
All of them using a cloned base image and employing unionfs or aufs to allow the instance space.
It would be really good to get a modest piece of hardware and show how little resources are being used in comparison to other forms of virtualisation.
Also demonstrate that network partitioning via LXC makes the move to dedicated servers very simple.
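For what it’s worth, the 1.0-era tools can already sketch this with snapshot clones; roughly (a sketch, assuming an overlayfs-capable kernel and the ubuntu template):

```shell
# Sketch: one "golden" base container, several overlayfs copy-on-write clones.
# Wrapped in a function since it needs a real LXC host and root access to run.
make_overlay_clones() {
    sudo lxc-create -t ubuntu -n golden
    for name in mysql web1 web2 mail; do
        # -s = snapshot clone, -B overlayfs = copy-on-write backing store
        sudo lxc-clone -s -B overlayfs golden "$name"
    done
}
```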
Containers are a form of lightweight virtualization where a single kernel is shared by isolated, resource-managed groups of processes… each potentially being a different Linux distribution, with its own users and network stack. Containers don’t offer as much flexibility as hardware virtualization (Linux-on-Linux only), but being much lighter weight they offer increased performance, scalability, and density, and are appropriate for many virtualization use cases.
Basically LXC 1.0 is almost as good as the out-of-tree OpenVZ patch from 2005 to present, except LXC runs on contemporary kernels since it has been part of the mainline kernel from the beginning.
What I understand of LXC so far is that
– LXC containers are similar to Oracle Solaris Zones
– LXC containers all share the same kernel, which means that a kernel update hits
all LXC containers at the same time. If each LXC container hosts different services and/or customers, finding a downtime window which meets all needs can be difficult.
Maybe once I see a use-case and some examples, I’ll understand. For now, I’m in the dark! Sounds exciting though so I’ll keep reading!
Thanks very much for documenting this, it’s awesome. One thing I noticed is that, in the first set of example commands to get things going, where you tell us how to ssh into the container, you say to use lxc-info to get the IP, but that only gets the pid. The IP can be obtained using ‘sudo lxc-ls --fancy’.
Thanks again for your work.
You must be using an older lxc.
stgraber@castiana:~$ lxc-info -n precise-gui
CPU use: 675.46 seconds
Memory use: 362.43 MiB
TX bytes: 14.67 MiB
RX bytes: 295.93 MiB
Total bytes: 310.61 MiB
Well… it would be good if you said a few words about setting up internet (a bridge) inside the container. =)
I use the github version and all the inet options inside “…/myctn/config” are commented out,
but you say nothing about this in the tutorial, and for you it just worked after creating the container. =(
Were you asking how to set up a bridge INSIDE the container or do you mean setting one up on the host? If you mean the ability to get the container onto the same subnet it wasn’t too difficult.
(May differ across distros but I’m using Ubuntu Server)
Disable default lxcbr0 in /etc/default/lxc
Make sure bridge-utils is installed
# apt-get install bridge-utils
Edit /etc/network/interfaces. You should already have a line that says:
auto eth0
(not sure if this is necessary but I would think so in order to make sure eth0 comes up)
Then add:
auto br0
iface br0 inet static
bridge_ports eth0
address 192.168.1.### (host IP, not the literal ###)
netmask 255.255.255.0
This is the part that confuses people. The eth0 interface will NOT have an IP. It will simply be up. All traffic on the host will actually start routing through br0 with eth0 being the PORT (means of pkt transport). Unless you are using some sort of advanced config then eth0 should just be up but with no settings attached to it.
Edit container config:
# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.###/24 (container IP)
Now when you start the container (using dhcp) it should always have the IP you specify in its config. After that you just need to get the DNS into resolv.conf. The Ubuntu containers use resolv.conf. I’m not a big fan of it so I usually remove it and simply add the DNS manually.
Not sure if that’s what you were looking for but it seemed to work for me.
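For that last DNS step, the manual version described above is just a static /etc/resolv.conf inside the container (the nameserver address here is a placeholder):

```
# /etc/resolv.conf inside the container
nameserver 192.168.1.1
```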
Yup, thank you very much!
I did all this stuff inside VirtualBox (NAT-ed) with CentOS, so… I was a bit confused with web-tutorials and different configs..
Today I finally found necessary combinations of all configs and set up network (lxc->VBox->www)! Voila 😀
So, trying to wrap my fragile brain around this. You create an Ubuntu “host” that creates containers with potentially any linux distro. Does the host have to be Ubuntu? And how do you run other container OSes (like CentOS)?
Awesome stuff. Finally something similar to FBSD jails for Linux. You said earlier that the containers have their own network stack correct? Is this akin to vimage or more so similar to the default network aliasing FBSD uses for jails? It seemed as though in one of your examples you routed through the “host” system’s eth0. Is this necessary or is it possible to put the container on the same subnet?
For complete beginners, it would be useful to state the username/password combination that will get you past the ‘p1 login’ prompt that lxc-console gives you. (ubuntu/ubuntu)
“You’ll note that everything usually just works on Ubuntu.”
Except when it doesn’t. Attempted on an Intel chromebook, with crouton.
update-rc.d: warning: default stop runlevel arguments (0 1 6) do not match ssh Default-Stop values (none)
invoke-rc.d: policy-rc.d denied execution of start.
# above showed up near end of install output
# below showed up after first start attempt
lxc-start: The container failed to start.
lxc-start: To get more details, run the container in foreground mode.
lxc-start: Additional information can be obtained by setting the --logfile and --log-priority options.
Updated apt, upgraded to current version of lxc, same result.
Seems as if lxc can be installed, but won’t start and run inside a chroot.
Very nice article.
But getting lxc from git and installing it just with ‘sudo make install’ will not work out of the box. It doesn’t install all the required packages to get started with containers. It would be good if there were some sort of dependency check script that would install all the required packages.
For me ‘sudo apt-get install lxc’ worked like a charm.
When I try to start the lxc container I created, it fails with the error below. I used the ubuntu template for this container.
It seems lxc is unable to drop the kernel capabilities it is trying to drop; I tried to figure it out but failed miserably.
Can someone please help to get this solved?
Ubuntu version: 14.04
kernel: 3.13.0-43-generic
lxc-start: conf.c: setup_caps: 2337 unknown capability mac_admin
lxc-start: conf.c: lxc_setup: 4172 failed to drop capabilities
lxc-start: start.c: do_start: 688 failed to setup the container
lxc-start: sync.c: __sync_wait: 51 invalid sequence number 1. expected 2
lxc-start: start.c: __lxc_start: 1088 failed to spawn 'lxc-one'
lxc-start: lxc_start.c: main: 345 The container failed to start.
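A common cause for that particular error (an assumption based on the capability name in the message, not something covered in this post) is an older libcap that doesn’t know the mac_admin capability; the usual workaround is to trim the unknown names from the container’s lxc.cap.drop line:

```
# In /var/lib/lxc/lxc-one/config, change e.g.:
#   lxc.cap.drop = mac_admin mac_override sys_time sys_module
# to:
lxc.cap.drop = sys_time sys_module
```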
I have a Python script which creates a bunch of LXC containers out of a user configuration file. Everything works fine, except that I can’t get ssh running immediately. It seems that, after executing lxc-start, the network is in a transition state… here is the ps output:
root 1 0 0 00:19 ? 00:00:00 /sbin/init
root 386 1 0 00:19 ? 00:00:00 /lib/systemd/systemd-journald
root 431 1 0 00:19 ? 00:00:00 /bin/sh -e /etc/init.d/networking start
root 452 431 0 00:19 ? 00:00:00 ifup -a
root 492 452 0 00:19 ? 00:00:00 /bin/sh -c dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhcli
root 493 492 0 00:19 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.le
root 591 0 0 00:22 pts/0 00:00:00 /bin/bash
it seems that ‘ifup’ is blocked… a few minutes later (I don’t know exactly how long), everything is back to normal…
root 1 0 0 00:19 ? 00:00:00 /sbin/init
root 386 1 0 00:19 ? 00:00:00 /lib/systemd/systemd-journald
root 614 1 0 00:24 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.le
root 671 1 0 00:24 ? 00:00:00 /usr/sbin/cron -f
syslog 688 1 0 00:24 ? 00:00:00 /usr/sbin/rsyslogd -n
root 785 1 0 00:24 ? 00:00:00 /usr/sbin/sshd -D
It looks like it’s blocking trying to access DNS :
ntpdate: name server cannot be used: Temporary failure in name resolution (-3)
And, 4 minutes later, suddenly :
dhclient: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3 (xid=0x15f64c66)
This is probably when sshd became available. Is this a bad chain-dependency problem in systemd? Why do we need to wait for a name server to be available? Is this normal?
I can run as many tests as you’d like, since this is a dev/test machine.
This is my first time seriously experimenting with Linux Containers. Thanks for the blog series (and the picture of a snow-covered mountain, very cooling on a warm spring day), it’s a good start! Everything just works in Centos, too.
I use Centos 6.6. To install containers:
# enable the EPEL repository
# yum install -y lxc lxc-templates lxc-doc
And though Centos 6 has a 2.6.32 kernel, “sudo lxc-attach -n p1” works nicely.
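To spell out that EPEL step, on CentOS 6 it is usually something like this (assuming the epel-release package is available from the extras repository):

```shell
# Wrapped in a function since it needs a real CentOS 6 host and root access to run.
install_lxc_centos6() {
    sudo yum install -y epel-release                  # enable the EPEL repository
    sudo yum install -y lxc lxc-templates lxc-doc
}
```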
Addition: I tried creating an Ubuntu container, which failed due to a missing “bootstrap” command. A Centos container works though:
# lxc-create -t /usr/share/lxc/templates/lxc-centos -n p1
Maybe you should install some other packages, like this:
# sudo apt-get install lxc lxctl lxc-templates
I need help turning off security stuff in a container, or finding the problem.
I am trying to use nfs-ganesha but the client on the remote host cannot mount the share.
showmount -e however displays the share.
Here is my problem: open_by_handle_at() is returning an EPERM error; from the manpage: “The caller does not have the CAP_DAC_READ_SEARCH capability.” Maybe selinux or other security stuff is preventing root from opening the file? As far as I know this container is running in privileged mode, according to:
0 0 4294967295
which means privileged, right?
I have no idea how to confirm what the problem is or how to change it.
I want to create some Linux virtual machines.
How can I use the command line to create them?
How can I check all the information of these VMs?
Thanks for all.
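The commands from the post cover both questions; roughly (a sketch assuming the ubuntu template and LXC 1.0’s tools):

```shell
# Wrapped in a function since it needs a real LXC host and root access to run.
create_and_inspect() {
    sudo lxc-create -t ubuntu -n vm1    # create a container
    sudo lxc-start -n vm1 -d            # start it in the background
    sudo lxc-ls --fancy                 # list all containers with state and IPs
    sudo lxc-info -n vm1                # detailed info for one container
}
```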
I am new to LXC and containers in general.
I would like to read more on LXC, cgroups, and namespaces.
Please can you help me with an ebook or important links?