Getting started with LXD – the container lightervisor

Introduction

For the past 6 months, Serge Hallyn, Tycho Andersen, Chuck Short, Ryan Harper and I have been very busy working on a new container project called LXD.

Ubuntu 15.04, due to be released this Thursday, will contain LXD 0.7 in its repository. These are still early days: while we’re confident LXD 0.7 is functional and ready for users to experiment with, we still have some work to do before it’s ready for critical production use.

[Image: LXD logo]

So what’s LXD?

LXD is what we call our container “lightervisor”. The core of LXD is a daemon which offers a REST API to drive full system containers just like you’d drive virtual machines.

The LXD daemon runs on every container host, and client tools then connect to those daemons to manage containers or to move or copy them to another LXD host.

We provide two such clients:

  • A command line tool called “lxc”
  • An OpenStack Nova plugin called nova-compute-lxd

The former is mostly aimed at small deployments ranging from a single machine (your laptop) to a few dozen hosts. The latter seamlessly integrates inside your OpenStack infrastructure and lets you manage containers exactly like you would virtual machines.

Why LXD?

LXC has been around for about 7 years now. It evolved from a set of very limited tools that would get you something only marginally better than a chroot, all the way to the stable set of tools, stable library and active user and development community that we have today.

Over those years, a lot of extra security features were added to the Linux kernel and LXC grew support for all of them. As we saw the need for people to build their own solutions on top of LXC, we developed a public API and a set of bindings. And last year, we put out our first long-term support release, which has been a great success so far.

That being said, for a while now, we’ve been wanting to do a few big changes:

  • Make LXC secure by default (rather than it being optional).
  • Completely rework the tools to make them simpler and less confusing to newcomers.
  • Rely on container images rather than using “templates” to build them locally.
  • Add proper checkpoint/restore support (live migration).

Unfortunately, solving any of those means making very drastic changes to LXC, which would likely break our existing users or at least force them to rethink the way they do things.

Instead, LXD is our opportunity to start fresh. We’re keeping LXC as the great low-level container manager that it is, and building LXD on top of it, using LXC’s API to do all the low-level work. That achieves the best of both worlds: we keep our low-level container manager with its API and bindings, but skip its tools and templates, replacing them with the new experience that LXD provides.

How does LXD relate to LXC, Docker, Rocket and other container projects?

LXD is currently based on top of LXC. It uses the stable LXC API to do all the container management behind the scenes, adding the REST API on top and providing a much simpler, more consistent user experience.

The focus of LXD is on system containers. That is, a container which runs a clean copy of a Linux distribution or a full appliance. From a design perspective, LXD doesn’t care about what’s running in the container.

That’s very different from Docker or Rocket, which are application container managers (as opposed to system container managers): they focus on distributing apps as containers and therefore very much care about what runs inside the container.

There is absolutely nothing wrong with using LXD to run a bunch of full containers which then run Docker or Rocket inside of them to run their different applications. You let LXD manage the host resources for you, applying all the security restrictions to make the container safe, and then use whatever application distribution mechanism you want inside.

Getting started with LXD

The simplest way for somebody to try LXD is by using it with its command line tool. This can easily be done on your laptop or desktop machine.

On an Ubuntu 15.04 system (or by using ppa:ubuntu-lxc/lxd-stable on 14.04 or above), you can install LXD with:

sudo apt-get install lxd
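If you’re on 14.04 and using the PPA mentioned above, you’d enable it first; a typical sequence would look something like this (standard add-apt-repository usage):

sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable   # enable the stable LXD PPA
sudo apt-get update                                 # refresh the package lists
sudo apt-get install lxd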

Then either log out and log back in to get your group membership refreshed, or use:

newgrp lxd

From that point on, you can interact with your newly installed LXD daemon.

The “lxc” command line tool lets you interact with one or multiple LXD daemons. By default it will interact with the local daemon, but you can easily add more of them.

As an easy way to start experimenting with remote servers, you can add our public LXD server at https://images.linuxcontainers.org:8443
That server is an image-only read-only server, so all you can do with it is list images, copy images from it or start containers from it.

You’ll have to do the following to add the server, list all of its images and then start a container from one of them:

lxc remote add images images.linuxcontainers.org
lxc image list images:
lxc launch images:ubuntu/trusty/i386 ubuntu-32

What the above does is define a new “remote” called “images” which points to images.linuxcontainers.org, list all of its images, and finally start a local container called “ubuntu-32” from the ubuntu/trusty/i386 image. The image is automatically cached locally so that future containers start instantly.

The “<remote name>:” syntax is used throughout the lxc client. When not specified, the default “local” remote is assumed. Should you only care about managing a remote server, the default remote can be changed with “lxc remote set-default”.
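For example, to point everything at a hypothetical remote called “my-server” and later switch back:

lxc remote set-default my-server   # all commands now target my-server by default
lxc remote set-default local       # switch back to the local daemon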

Now that you have a running container, you can check its status and IP information with:

lxc list

Or get even more details with:

lxc info ubuntu-32

To get a shell inside the container, or to run any other command that you want, you may do:

lxc exec ubuntu-32 /bin/bash
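Any other command works the same way; a “--” separator keeps the command’s own arguments from being parsed by the lxc client. A couple of illustrative examples:

lxc exec ubuntu-32 -- apt-get update   # run apt-get update inside the container
lxc exec ubuntu-32 -- ls -lh /         # list the container's root directory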

And you can also directly pull or push files from/to the container with:

lxc file pull ubuntu-32/path/to/file .
lxc file push /path/to/file ubuntu-32/

When done, you can stop or delete your container with one of those:

lxc stop ubuntu-32
lxc delete ubuntu-32

What’s next?

The above should be a reasonably comprehensive guide to how to use LXD on a single system. Of course, that’s not the most interesting thing to do with LXD. All the commands shown above can work against multiple hosts, containers can be remotely created, moved around, copied, …

LXD also supports live migration, snapshots, configuration profiles, device pass-through and more.
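As a quick sketch of what some of those look like from the command line (the snapshot name is made up, and sub-commands may still evolve at this early stage):

lxc snapshot ubuntu-32 snap0   # take a snapshot called "snap0"
lxc restore ubuntu-32 snap0    # roll the container back to that snapshot
lxc profile list               # list the available configuration profiles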

I intend to write some more posts to cover those use cases and features as well as highlight some of the work we’re currently busy doing.

LXD is a pretty young but very active project. We’ve had great contributions from existing LXC developers as well as newcomers.

The project is entirely developed in the open at https://github.com/lxc/lxd. We keep track of upcoming features and improvements through the project’s issue tracker, so it’s easy to see what will be coming soon. We also have a set of issues marked “Easy” which are meant as easy ways for new contributors to get to know the LXD code and contribute to the project.

LXD is an Apache 2.0 licensed project, written in Go, which doesn’t require a CLA to contribute to (we do however require the standard DCO Signed-off-by). It can be built with both golang and gccgo, and so works on almost all architectures.

Extra resources

More information can be found on the official LXD website:
https://linuxcontainers.org/lxd

The code, issues and pull requests can all be found on GitHub:
https://github.com/lxc/lxd

And a good overview of the LXD design and its API may be found in our specs:
https://github.com/lxc/lxd/tree/master/specs

Conclusion

LXD is a new and exciting project. It’s an amazing opportunity to think fresh about system containers and provide the best user experience possible, alongside great features and rock solid security.

With 7 releases and close to a thousand commits by 20 contributors, it’s a very active, fast-paced project. Lots of things still remain to be implemented before we get to our 1.0 milestone release in early 2016, but looking at what was achieved in just 5 months, I’m confident we’ll have an incredible LXD in another 12 months!

For now, we’d welcome your feedback, so install LXD, play around with it, file bugs and let us know what’s important for you next.

About Stéphane Graber

Project leader of Linux Containers, Linux hacker, Ubuntu core developer, conference organizer and speaker.

40 Responses to Getting started with LXD – the container lightervisor

  1. Xavier says:

    Hi, this is very exciting, LXD (with LXCFS) might be a good replacement for OpenVZ. But what is the current state of LXD? Are we able to manage networking and templates? CPU / memory / disk quotas? Does it work with Debian (as guest and host)? Unfortunately, last time I checked (months ago), LXD was only able to start containers, nothing more.

    1. Hi,

      Networking can now be configured either directly for a container or for a set of containers through the use of profiles.
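      For example, attaching a bridged network interface through a profile should look something like this (the profile, container and bridge names are illustrative, and the exact sub-commands may still change in a release this young):

      lxc profile create mynet                                              # create an empty profile
      lxc profile device add mynet eth0 nic nictype=bridged parent=lxcbr0   # add a bridged NIC to it
      lxc profile apply mycontainer mynet                                   # apply the profile to a container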

      CPU and memory quotas will soon be implemented through the limits.* configuration keys. Until then, you can use the raw.lxc configuration key to pass LXC config snippets (lxc.cgroup.memory.* for example).
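      For instance, something like this should cap a container’s memory through raw.lxc until then (the container name and value are illustrative):

      lxc config set mycontainer raw.lxc "lxc.cgroup.memory.limit_in_bytes = 256M"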

      Disk quotas are a bit trickier as we can’t do the same kind of hacks the OpenVZ folks are doing with their patched kernel. Instead we intend to either rely on the backing filesystem for that (like btrfs) or use a separate block device per container. At this point neither is directly supported; however, if you’re running on a btrfs filesystem, you can manually create subvolumes and quota groups until we grow the feature, as sketched below.
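      A manual setup along those lines might look like this, assuming LXD’s container storage lives on btrfs (paths and the size limit are illustrative):

      sudo btrfs quota enable /var/lib/lxd                              # turn on quota tracking
      sudo btrfs subvolume create /var/lib/lxd/containers/mycontainer   # one subvolume per container
      sudo btrfs qgroup limit 10G /var/lib/lxd/containers/mycontainer   # cap its disk usage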

      As for running Debian: pre-systemd Debian works fine within LXD. More recent Debian hits a few systemd issues along the way, but it eventually boots and seems functional. This will get resolved as Debian either updates to a more recent systemd or cherry-picks the various container fixes we committed to systemd upstream.

  2. CodePo8 says:

    Hi,

    Can you explain how exactly LXD runs full operating systems? What technique is being used for HW emulation/simulation?

    1. Full system containers refer to running a full Linux distro inside a container, spawning its regular /sbin/init and then making sure it can behave as if it were running straight on metal.

      Those are still containers in that we do not simulate hardware nor run a separate kernel for each system, but they differ from application containers in that our aim is to run verbatim copies of a regular Linux system in the container, without requiring any change or any communication between the container and the container manager.

  3. finid says:

    So how do you manage/configure IP and port mapping with LXD?

    1. LXD doesn’t do any network management for you.

      All it does is let you list available bridges on the host and let you assign network interfaces for your containers to those bridges.

      So by default, your container will get a 10.0.3.x IP from the default LXC bridge (lxcbr0).
      We expect multiple-host deployments to either use something like OpenVSwitch/Neutron or good old VLANs to offer the same network across multiple hosts.

      As for port mapping, it’s again not something we’re in the business of doing. When using LXD through OpenStack, Neutron will do that for you; otherwise, if you want to set up NAT rules, you’ll have to do that yourself, as shown below.
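      Done by hand, that’s a standard iptables DNAT rule, for example (addresses and ports are illustrative):

      # forward host port 8080 to port 80 of a container at 10.0.3.100
      sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.0.3.100:80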

      The main rationale here is that since we do VM-like full system containers and not app-specific containers, having port mapping done by LXD doesn’t feel right (as LXD’s rule is not to care about what’s inside the container).

      As for managing cross-host networking and more complex use cases, we figured that there are a bunch of really good SDN options these days. They all end up giving you standard Linux bridges, so rather than reinvent the wheel, we simply let our users use what they think is best and then LXD will just use the resulting bridge.

      1. finid says:

        Ok, I’ll keep an eye on this and see how fast/far development goes.

        The folks over at Project Atomic are pushing Atomic to handle containers much like system daemons, so that if the host server is rebooted, the containers come back up with it.

        Any plans for that with LXD, or has that been implemented already?

        I know that’s already doable with systemd-nspawn, but does LXD have that capability now?

        1. Not implemented yet but it’s in our specification.

          Basically the idea is to do the equivalent of the hardware “restore power on last state” setting.

          So if a host is rebooted, all containers that were running at the time of shutdown will be started again at boot time.

  4. Naga Phani says:

    Nice article for newbies like me.
    Can you please explain more about ‘/sbin/init’ as you mentioned in the comments?
    How is the base hardware abstracted to containers?
    Is there any post of yours which explains more about containers (namespaces, cgroups, etc.) and the differences between VMs and containers?

    Thanks & Regards.

  5. David Andel says:

    Hi Stéphane,

    thanks a lot for this intro, especially for the command ‘lxc image list’, which I haven’t found anywhere in the documentation.

    One question about the output:
    +-------------------------+--------------+--------+------------------+--------+-------------------------------+
    | ALIAS                   | FINGERPRINT  | PUBLIC | DESCRIPTION      | ARCH   | UPLOAD DATE                   |
    +-------------------------+--------------+--------+------------------+--------+-------------------------------+
    |                         | b587142d2b98 | yes    | Centos 6 (amd64) | x86_64 | Apr 27, 2015 at 5:17am (CEST) |
    |                         | 27c16c66e142 | yes    | Centos 6 (amd64) | x86_64 | Apr 28, 2015 at 5:17am (CEST) |
    | centos/6/amd64 (1 more) | 223b07936937 | yes    | Centos 6 (amd64) | x86_64 | Apr 29, 2015 at 5:18am (CEST) |
    +-------------------------+--------------+--------+------------------+--------+-------------------------------+

    You always have three images of the same system on your images server, the last one saying “(1 more)”.
    After ‘lxd-images import’ I see aliases of my local images followed by “(2 more)”.
    Can you explain? What does that mean?

    Thanks

      The public image server keeps the last 3 images around. Only the latest one has aliases pointing to it (the nice, user-friendly name).

      Images can have more than one alias, and images.linuxcontainers.org uses that.

      Typically an Ubuntu 14.04 amd64 image will have:
      – ubuntu/trusty/amd64/default
      – ubuntu/trusty/amd64

      The image listing will always show the shortest alias pointing to the image and, if there are more, mention “(x more)” next to it.

      The full list of aliases can be obtained with “lxc image info”.
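      For example, using the remote and an alias from the listing above:

      lxc image info images:ubuntu/trusty/amd64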

  6. Oliver R says:

    Where can I find a list of valid configuration options for the command “lxc config set container-name”? I tried this for example: “lxc config set container-name autostart 1”. This always gives me an error: “error: Bad key: autostart”. I’ve also tried “lxc.start.auto” and “start.auto” with no luck. Can you help me?

  7. chroot says:

    Why do we need another version of chroot? Traditionally chroot or schroot has been used to cage off applications to provide multi-tenancy.

  8. tim says:

    Hi,

    I am wondering, is it possible to export my current LXC containers to the LXD format, since LXD is based on LXC?

    1. Pat says:

      Same question here!

  9. Philip Orleans says:

    I always handle networking for LXC containers like this
    #lxc.network.type = macvlan
    #lxc.network.flags = up
    #lxc.network.link = eth1
    #lxc.network.name = eth1
    #lxc.network.macvlan.mode = bridge
    #lxc.network.hwaddr = 00:b6:6b:81:f2:7b
    #lxc.network.ipv4 = 0.0.0.0/25
    How would I define these settings using the new tool?

    Also, I share some directories from the host
    lxc.mount.entry = /usr/src usr/src1 none bind 0 0
    lxc.mount.entry = /common common none bind 0 0
    lxc.mount.entry = /lib/modules usr/lib/modules none bind 0 0

    How can that be done?

    Also, if I may add a third question: my containers do not restrict anything that happens inside the container.
    lxc.autodev = 1
    lxc.aa_profile = unconfined
    lxc.cap.drop=
    lxc.kmsg=0

    How do I achieve the same effects?

  10. But will it run a completely virtualized environment? Can I run, say, FreeBSD?

    1. Jeff Esquivel S. says:

      Hi Randal,

      Because it is not a “real” hypervisor (it’s not virtualizing hardware, it’s only running stuff in an isolated environment atop the Linux kernel), it can only run Linux distributions, not a different OS.

      Regards,

  11. Joaquim says:

    Will there be a GUI to manage LXD containers (something like the LXC GUI that already exists)?

    thanks,

    J.

    1. Anthony says:

      Hi, I work on lxd-webui, which uses the LXD API. The web app uses the Angular 2 framework.

      Some features are missing, but I will work on them.

      The demo requires an LXD instance to try the app: http://aarnaud.github.io/lxd-webui/

      The repository is https://github.com/aarnaud/lxd-webui

  12. matthewearl says:

    I could not find anywhere how to set up LXD to work with an existing bridge such as br0. I have changed all the files in /etc/lxc/default.conf, /etc/default/lxc-net and so forth, but every time I create a container it comes up with 10.0.1.x. After a reboot the lxcbr0 bridge finally goes away, but then when I create a container and it starts up, it fails. We need a step-by-step walkthrough on how to set up LXD with an existing bridge. I have tried everything and it is not working. I also set the LXC bridge option to false.

    Thanks

    1. Jeff Green says:

      All you have to do is create the bridge and then configure the NICs in the container, either through a profile or by adding the NICs directly to the container. You then have to bash into the container and configure the network interfaces for the NICs you specified in the profile or added directly. Adding them to the container is when you specify what the NIC is bridged to; see below.
      lxc config device add u1 eth1 nic nictype=bridged parent=lxcbr1

      So u1 is the container name; everything else is self-explanatory, but you can see how we tell LXD which host bridge the NIC is bridged to.

      The only other way to do this is to create a profile to accomplish the same thing.

      Hope this helps someone!

  13. Neat stuff. Too bad it has no direct network management, but it does list available bridges on the host and allows assigning network interfaces for containers to those bridges, so it can still be useful. Worth looking into.

  14. devnull says:

    Hi Stéphane,
    GUI apps in LXC containers work great. I was wondering how to run GUI apps in LXD. How do we pass all the devices to the container in LXD? Can you post a configuration profile for the same?

    Thanks 🙂

  15. Dave says:

    Is it possible to install LXD on Ubuntu Snappy Core OS?

  16. Dave says:

    beep beep?

  17. Makan Taghizadeh says:

    Is it possible to use LXD with other backing stores, say ZFS?

    Thanks

  18. emiliangilder says:

    Is it possible to bind the X server to start GUI applications?

  19. rafkat says:

    You mentioned future work on implementing support for pass-through devices. Does that mean LXD will enable direct access to devices inside the container, like a raw disk device?

  20. Dilip Renkila says:

    Hi, can you specify the URL of the LXD REST API?

  21. Matt nelson says:

    Unfortunately I need systemd capabilities or I can’t use it. I need a good BSD jails alternative for my Linux machines. There are tons of systemd issues I am having, from stopping containers to apps not being able to run. I wish Debian had just ditched systemd.

  22. tim says:

    Hi, may I know what the default port number for the REST API is, and how do I enable it in order to build an application against the REST API? Thanks.

    1. By default LXD doesn’t listen to the network at all.

      You must run something like:
      lxc config set core.https_address [::]

      Which will have it bind all addresses on the default port, 8443.
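      Once that’s set, a quick way to poke at the API is plain HTTPS, for example (-k skips certificate verification, which is fine for a quick local test):

      curl -k https://127.0.0.1:8443/1.0   # returns server information as JSON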

      1. tim says:

        Thanks for the reply, I will try to work with it 🙂

  23. Reshma says:

    Hi Stéphane Graber,

    I am trying to set up OpenStack in LXD containers on a single machine using conjure-up, but I am facing an issue. Is this the right place to discuss it?

    Thanks
