One thing that we’ve been working on for LXC in 12.04 is getting rid of any remaining LXC-specific hacks in our templates. This means that you can now run a perfectly clean Ubuntu system in a container without any change.
To better illustrate that, here’s a guide on how to boot a standard Ubuntu VM in a container.
First, you’ll need an Ubuntu VM image in raw disk format. The next few steps also assume a default partitioning where the first primary partition is the root device. Make sure you have the lxc package installed and up to date, and that lxcbr0 is enabled (the default with recent LXC).
Then run: kpartx -a vm.img. This will create loop devices in /dev/mapper for your VM’s partitions. In the following configuration I’m assuming /dev/mapper/loop0p1 is the root partition.
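For example, a minimal sequence (the image name and the exact loop number are just placeholders; kpartx needs root):

sudo kpartx -av vm.img     # -a adds the mappings, -v prints them as they're created
ls /dev/mapper/            # check which loopNpM entries showed up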
Now write a new LXC configuration file (myvm.conf in my case) containing:
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.utsname = myvminlxc

lxc.tty = 4
lxc.pts = 1024
lxc.rootfs = /dev/mapper/loop0p1

lxc.arch = amd64
lxc.cap.drop = sys_module mac_admin

lxc.cgroup.devices.deny = a
# Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
#lxc.cgroup.devices.allow = c 4:0 rwm
#lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 254:0 rwm
#fuse
lxc.cgroup.devices.allow = c 10:229 rwm
#tun
lxc.cgroup.devices.allow = c 10:200 rwm
#full
lxc.cgroup.devices.allow = c 1:7 rwm
#hpet
lxc.cgroup.devices.allow = c 10:228 rwm
#kvm
lxc.cgroup.devices.allow = c 10:232 rwm
The network link, rootfs and architecture entries may need updating if you’re not using the same architecture, partition scheme or bridge as I am.
Then finally, run: lxc-start -n myvminlxc -f myvm.conf
And watch your VM boot in an LXC container.
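If you’d rather not keep the container’s console attached to your terminal, a possible variant (standard lxc-start/lxc-console options, nothing specific to this setup) is:

lxc-start -n myvminlxc -f myvm.conf -d    # start the container in the background
lxc-console -n myvminlxc                  # attach to one of its consoles when needed
lxc-stop -n myvminlxc                     # hard-stop it when you're finished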
I did this test with a desktop VM using network manager, so it didn’t mind LXC’s random MAC address; server VMs might get stuck for a minute at boot time because of that though.
In that case, either clean /etc/udev/rules.d/70-persistent-net.rules or set “lxc.network.hwaddr” to the same MAC address as your VM.
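For example, with a placeholder address (use whatever MAC your VM originally had):

lxc.network.hwaddr = 00:16:3e:12:34:56    # placeholder, replace with the VM's original MAC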
Once done, run kpartx -d vm.img to remove the loop devices.
Hi Stéphane,
Interesting post, it’s nice to see LXC moving forward.
A suggestion for a future post: it would be good to have a general update on the current state of LXC in Ubuntu 12.04 regarding functionality, security, known issues…
Regards,
Paul
Hi,
Yes, it’s planned. I’m just waiting for a few features and fixes to land first, but it should be out in a week or so.
Do you have a preferred place to fetch an Ubuntu VM image in raw disk format? Thank you for sharing… Looking forward to trying this soon!
We don’t have raw disk images with Ubuntu preinstalled to download, so the main use for LXC booting raw VM images is for people who installed Ubuntu 12.04 in a VM and then notice they don’t actually need the extra overhead and just want to move it to a container.
Basically install Ubuntu 12.04 as usual in a VM, then use the VM disk image as the rootfs for a container.
Works very well indeed! Excellent work!
Cheers!!
Kaj
Hi! I don’t know if it’s appropriate to bother you here but…
I can’t share a host (12.04) directory inside an LXC Ubuntu guest.
Which would be the right approach? (host fstab – guest fstab – lxc.mount.entry)
I’ve looked at every howto and tried, with no luck.
Help appreciated.
The fstab file is where it needs to happen.
Edit /var/lib/lxc/<container name>/fstab and add:
/mnt host-mnt none bind
That’ll make /mnt of the host appear as /host-mnt in the container. You’ll need to ensure host-mnt exists in the container before starting it, otherwise it’ll fail.
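Spelled out with a hypothetical container name, that’s a one-line bind mount, either in the per-container fstab or directly in its config file (depending on the LXC version, the target may need to be given relative to the rootfs or as the full rootfs path):

# /var/lib/lxc/mycontainer/fstab
/mnt  host-mnt  none  bind  0 0

# or, equivalently, as a single line in the container's config:
lxc.mount.entry = /mnt host-mnt none bind 0 0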
This doesn’t work in my case…
Maybe (following your example):
because /mnt is a ZFS-mounted device?
because /mnt is an NFS-shared, ZFS-mounted device?
I have an up-to-date 12.04 installation, trying with a 12.04 guest.
Anything I can try?
Thanks Stéphane
Solved (Workaround)
/mnt is a no-go right now if you want to use subdirectories of it for mounting.
/srv works, with subdirectories, for mounting.
I hope the LXC developers take a look at this issue with /mnt subdirectories as mount points for host directories.
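In other words, the same bind mount as above, but with the host-side source moved under /srv instead of /mnt (directory names are hypothetical):

/srv/mydata  host-mnt  none  bind  0 0    # host source under /srv rather than /mnt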
My interest in LXC is in virtual routing and in improving/accelerating network throughput. Could you please direct me to what you see as the appropriate discussion forums?
While I’m on it, it seems to me that there is a gaping hole in LXC’s networking (or more precisely, VM networking in general). Anyone reading this comment should feel free to correct me, but as far as I can see, the idea of VMs located on the same physical machine communicating with one another through the networking stack is absurdly inefficient. It certainly is not characteristic of the Linux and open source community, which usually excels at optimizing hardware and software resources. It’s like a couple sitting together at the dinner table, conversing with one another through the cellphone, while paying by the minute and suffering a degradation in quality. Wouldn’t you think that VMs residing in common memory on the same physical machine should simply transfer their message data by memory copy without having to traverse the TCP/IP and level-two stacks? It c(sh)ould be done transparently to the user, but the way it’s done today seems wasteful. Or is this an issue for the networking stack folks?
That said, again, I value the advice of any of the readers as to what forum they recommend my getting involved in.
Thank you very much,
Yitzhak Bar Geva
Hi,
The right place for that discussion would be the lxc-users or lxc-devel mailing list.
As containers are, well, containers, we usually don’t want them to communicate with each other in a different way than they would with any other machine.
However, the veth devices we use are a lot more efficient than any fake ethernet device you get in a VM; as they’re all kernel-generated network devices on the same kernel, bridged by that kernel, the throughput tends to be pretty good.
For faster communication, you’d usually bind-mount a socket from the host into both containers; the containers can then both access that socket, which should be pretty fast too.
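As a rough sketch of that idea (hypothetical paths; the same fstab/lxc.mount.entry mechanism discussed in the comments above):

# on the host, a directory to hold the shared socket
mkdir -p /srv/shared-sockets

# in each container's fstab (or as an lxc.mount.entry line), with a
# shared-sockets directory created in each container's rootfs beforehand
/srv/shared-sockets  shared-sockets  none  bind  0 0

# both containers then see /shared-sockets and the two applications can
# exchange data over a UNIX socket created there, e.g. /shared-sockets/app.sock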
Thanks. Did you mean the Sourceforge lxc-users and lxc-devel?
I’d like to examine the efficiency issue further. On the face of it, there should be a distinction, made at the network level and transparent to the user, rather than going through convolutions just to talk to another VM on the same machine. I’d take that even further and think of a faster communications path between adjacent machines.
Yep, the LXC mailing lists are indeed on sourceforge.
Great post, thanks!
I decided to try out btrfs as the default fs when I installed 12.04. Your post made me think: if this can be done with a raw image, why not a snapshot? So I took a snapshot of the installed system (both / and /home), mounted the snapshots in /wherever and used this as the rootfs. That is the only change needed to the .conf listed above 🙂
The only caveat (so far) is to make sure that you remove the lxc package from the guest, otherwise it will set up lxcbr0 in the guest too and there will be no external network connectivity. Once you remove it, just restart the host and guest to ensure the services are restarted correctly. Thanks to bergerx on #lxcontainers for helping me resolve this.
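For reference, a rough sketch of that snapshot-as-rootfs approach (the paths are placeholders; the snapshot destination must live on the same btrfs filesystem as the source):

# take a snapshot of the installed root
btrfs subvolume snapshot / /wherever/myvm-root

# then point the container at it instead of the /dev/mapper device
lxc.rootfs = /wherever/myvm-root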
The writeup says..
First, you’ll need an Ubuntu VM image in raw disk format.
I’ve got that as I took the .img from a KVM install I had done of Ubuntu 12.04.
Then it says later..
Then finally, run: lxc-start -n myvminlxc -f myvm.conf
> And watch your VM boot in an LXC container.
I must have missed something because between the first comment and the second comment I fail to see how or where the .img file is placed -or- used in any way.
So how does the lxc-start -n myvminlxc -f myvm.conf utilize the .img file?
Sorry, my previous question used some symbols that apparently were interpreted as HTML attributes… and made my question less clear:
The writeup says..
1) “First, you’ll need an Ubuntu VM image in raw disk format.”
I’ve got that as I took the .img from a KVM install I had done of Ubuntu 12.04.
Then it says later..
2) “Then finally, run: lxc-start -n myvminlxc -f myvm.conf”
And watch your VM boot in an LXC container.
I must have missed something because between the first comment (1) and the second comment (2) I fail to see how or where the .img file is placed -or- used in any way.
So how does the lxc-start -n myvminlxc -f myvm.conf utilize the .img file?
You need to run kpartx against your .img. This will map all partitions found in the .img as device-mapper entries in /dev/mapper/loop*; those are what’s then put in the LXC container configuration (as lxc.rootfs) and used to boot from.
@Yitzhak:
I don’t know if you’ve seen these;
https://sites.google.com/site/routeflow/documents/first-routeflow
https://sites.google.com/site/routeflow/documents/tutorial2-four-routers-with-ospf
http://www.jedelman.com/1/post/2012/05/flow-based-protocols-and-routeflow-the-new-ways-of-forwarding.html
I have a DHCP server in the office which I have no control over, and I’m starting the LXC Ubuntu guest as a daemon with DHCP. It starts with an IP address like 192.168.1.x, but since this address is dynamically assigned, I have no way of retrieving the address and using it to access the VM further. So basically I’m trying to create a way for users to launch LXC guests, and I need to return an IP address to them so that they can use the guest.
Is there any way I can do this?
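One possible sketch, building on the lxc.network.hwaddr option mentioned in the post: pin each container to a known MAC address, then resolve its IP from that MAC once it has a lease (the address below is just a placeholder; alternatively, ask whoever runs the DHCP server for a reservation on that MAC):

# in the container's config: a fixed, per-container MAC
lxc.network.hwaddr = 00:16:3e:00:00:aa

# on the host (as root, with nmap installed): scan the LAN and match the MAC
nmap -sP 192.168.1.0/24 | grep -B 2 -i 00:16:3e:00:00:aa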
I know this is dumb but I have managed to get the .img running.
What I wanted to do is get an X11 session running in a container.
So I have tried: ssh -x ip-addr
On the server side (the .img) I changed the config to allow X11 forwarding and port 22.
/usr/bin/xauth: /home/webadmin/.Xauthority not writable, changes will be ignored
Doh sorry got it
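For anyone hitting the same thing later, the usual checklist (capital -X enables X11 forwarding, lowercase -x disables it; the address below is a placeholder):

# on the container side, sshd must allow forwarding
grep X11Forwarding /etc/ssh/sshd_config    # should say "X11Forwarding yes"

# from the client
ssh -X webadmin@192.168.1.50

# the ".Xauthority not writable" warning usually points at home directory
# ownership/permissions; check with:
ls -ld /home/webadmin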
Hi. I am really new at this. I have OpenStack Folsom running on Ubuntu 12.04. I have set up the system to use LXC on my compute node. OpenStack requires that I upload an image to be launched on the compute node.
I am now trying to find (create) an appropriate virtual machine that can run on the compute node.
Are the steps you describe above used to create the image to be launched?
Any guidance would be appreciated.
thanks
John
Would this functionality be integrated into virtual machine manager? Or must we convert a KVM image into an LXC one manually? T.I.A.
Hey,
Do you know how to create your own container template after installing some packages? If you have a tutorial, can you please share it?
Hi Stephane,
If we do boot a VM inside a container as you mentioned, would it be of any use like a normal VM from the networking perspective? Since a VM needs emulated interfaces via qemu, it seems that having VM-like networking (emulated interfaces connecting to Linux bridges etc.) working on this VM is not possible? Or would you need to map the qemu interfaces to veth interfaces and define the config file accordingly?
Thanks
Anjali
I had hoped to use this method to take an existing development virtual machine and deploy it in a container. I currently use several virtual machines for development work, typically running a flavor of Ubuntu or Fedora. The promise of removing the overhead of vmware/vbox/kvm is very appealing to me, especially the ability to have hardware-assisted 3D graphics acceleration, which is not well supported by virtual machines at this time. I have spent many hours experimenting on real hardware and in virtual machines and have concluded that Ubuntu 15.04 (with the move to systemd) has broken the ability to bring up a container’s X desktop in a host vt (typically vt8).
I have read about bringing systemd based systems up in a container and the additional config parameters that may be required, but this hasn’t helped with getting an X desktop started.
I almost achieved what I was looking for with 14.04 and will continue my experiments there for now.
It would be really helpful if this example could be extended to include bringing a virtual machine up in a container and starting it with a full desktop on a spare vt. Although a non privileged container would be ideal, it is not a hard requirement for my application.
I think I have exhausted every website, blog and forum post on this topic; as far as I know, nobody has achieved what I am looking for, with the possible exception of an Arch Linux based LXC setup.