LXC 2.0 has been released!

LXD logo

Introduction

Today I’m very pleased to announce the release of LXC 2.0, our second Long Term Support Release! LXC 2.0 is the result of a year of work by the LXC community with over 700 commits done by over 90 contributors!

It joins LXCFS 2.0 which was released last week and will very soon be joined by LXD 2.0 to complete our collection of 2.0 container management tools!

What’s new?

The complete changelog is linked below but the main highlights for me are:

  • More consistent user experience between the various LXC tools.
  • Improved checkpoint/restore support.
  • Complete rework of our CGroup handling code, including support for the CGroup namespace.
  • Cleaned up storage backend subsystem, including the addition of a new Ceph RBD backend.
  • A massive amount of bugfixes.
  • And lastly, we managed to get all that done without breaking our API, so LXC 2.0 is fully API compatible with LXC 1.0.
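The new Ceph RBD backend, for instance, plugs straight into lxc-create. A hedged sketch (the pool name and the exact rbd options are assumptions on my part; check lxc-create(1) on your system for the options it actually ships):

```shell
# Create a container whose rootfs lives on a Ceph RBD volume.
# "--rbdpool lxc" names the Ceph pool to use; adjust to your cluster.
lxc-create -n web1 -t download -B rbd --rbdpool lxc -- \
    -d ubuntu -r xenial -a amd64

# Start and use it like any other container.
lxc-start -n web1
```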

The focus with this release was stability and maintaining support for all the environments in which LXC shines. We still support all kernels from 2.6.32 though the exact feature set does obviously vary based on kernel features. We also improved support for a bunch of architectures and fixed a lot of bugs and other rough edges.

This is the release you want to run in production for the next few years!

Support length

As mentioned, LXC 2.0 is a Long Term Support release.
This is our second such release, the first being LXC 1.0.

Long Term Support releases come with a 5-year commitment from upstream to provide bugfixes and security updates, and to release new point releases when enough fixes have accumulated.

The end of life dates for the various LXC versions are as follows:

  • LXC 1.0, released in February 2014, will EOL on the 1st of June 2019
  • LXC 1.1, released in February 2015, will EOL on the 1st of September 2016
  • LXC 2.0, released in April 2016, will EOL on the 1st of June 2021

We therefore very strongly recommend that LXC 1.1 users update to LXC 2.0, as we will not be supporting that release for very much longer.

We also recommend that production deployments stick to our Long Term Support releases.

Project information

Upstream website: https://linuxcontainers.org/lxc/
Release announcement: https://linuxcontainers.org/lxc/news/
Code: https://github.com/lxc/lxc
IRC channel: #lxcontainers on irc.freenode.net
Mailing-lists: https://lists.linuxcontainers.org

Try it online

Want to see what a container with LXC 2.0 installed feels like?
You can get one online to play with here.


7 Responses to LXC 2.0 has been released!

  1. Pingback: LXD 2.0 officially released [ENG]

  2. Josef says:

    Hi, I’m on Debian 8.4 using XFS, and I’m thinking of deploying LXC/LXD.

    https://lists.linuxcontainers.org/pipermail/lxc-users/2015-June/009575.html

    My understanding from the above is that for things to work smoothly I would have to rebuild the kernel with the proper options set, and I would have to create a separate ZFS partition if I want to manage disk quotas for the containers? Is this still valid for 2.0?

    Would it be easier/recommended to just set-up LXC 2.0 on Debian, and manage the containers manually?

  • I seem to recall the Debian kernel being mostly good as far as the needed options go. I believe they carry a patch that turns off user namespaces by default, with a sysctl available to turn them back on. With that switch flipped back to enabled, and the right boot options to enable the various cgroups you care about, things should be fine.
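      Concretely, the Debian bits I have in mind look something like this (a sketch; the sysctl name is the one Debian's patch introduces, and the grub line is an assumption about your boot setup):

      ```shell
      # Re-enable unprivileged user namespaces (Debian carries a patch
      # that defaults this to 0).
      sysctl -w kernel.unprivileged_userns_clone=1

      # Make it persistent across reboots.
      echo 'kernel.unprivileged_userns_clone=1' > /etc/sysctl.d/80-userns.conf

      # The memory cgroup is off by default on some kernels; enable it
      # (and swap accounting) via the kernel command line, then update-grub:
      #   GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
      ```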

      Now, you will need to build LXD itself by hand as there are no packages for Debian quite yet. I know a few folks are looking at making Debian packages but they first need to package a whole bunch of Golang dependencies.

      As far as storage setup goes, that is correct: the only two storage backends with support for disk quotas are btrfs and ZFS, with ZFS being the only one that actually reports space usage properly in “df”.
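      For example, with ZFS a per-container quota is just a dataset quota (the dataset names below are assumptions; adjust them to your own pool layout):

      ```shell
      # Cap the container's rootfs dataset at 10GB.
      zfs set quota=10G lxd/containers/c1

      # Verify; "df" inside the container will reflect the quota too.
      zfs get used,quota lxd/containers/c1
      ```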

      Depending on your needs, using just plain LXC from the repositories may be easier, though you will then need to do all the resource limits, ZFS configuration, … by hand, and you obviously won’t get the nice network management features of LXD.
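      To give an idea of what “by hand” means with plain LXC, resource limits go in the per-container config file (values below are purely illustrative; see lxc.container.conf(5) for the full list of keys):

      ```
      # /var/lib/lxc/c1/config — example cgroup limits
      lxc.cgroup.memory.limit_in_bytes = 1G
      lxc.cgroup.cpu.shares = 512
      lxc.cgroup.cpuset.cpus = 0-1
      ```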

  3. Josef says:

    Do you have any idea of how LXC and LXD compare to BSD jails or Solaris zones in terms of performance and security? Is there any difference in this regard between LXC and LXD?

  4. Piers Dawson-Damer says:

    Would you be able to point me in the direction of some documentation regarding utilising Ceph RBD as a backend? Thanks

  5. Karl says:

    I am new to LXC, and have created a couple of armhf containers on an x86_64 system in order to simplify cross-compilation. They are working well.

    I tried to use LXD to set them up, but apparently I cannot create armhf containers on an x86_64 system using LXD.

    I also tried using a couple of published methods for converting an LXC container into a package to run under LXD. The conversion seemed to work, but LXD said it could not run armhf on x86_64.

    priv/non-priv is not an issue as these are not being used for external access, but simply to facilitate cross-compiles.

    Do you have any guidance on how one might go about getting this approach working under LXD? The network management tools for LXD look attractive.

    Thanks for any info,

    Karl

  6. Clara says:

    Hi Stéphane, thanks for all these super interesting LXC tutorials.
    I’ve always used virtual machines to run non-open-source applications that I don’t trust, but now I want to use unprivileged Linux Containers because they seem much more convenient to me. I have a question though: do you recommend creating a separate user for these containers, or is it safe to run them as my default user? I’ve seen that you run them as user 1000 in your examples, which I assume is the default user, but I just want to be sure. I don’t want to overcomplicate things if it’s not necessary.
    Thanks.
