LXD 2.0: Installing and configuring LXD [2/12]

This is the second blog post in this series about LXD 2.0.

Where to get LXD and how to install it

There are many ways to get the latest and greatest LXD. We recommend you use LXD with the latest LXC and Linux kernel to benefit from all its features, but we try to degrade gracefully where possible to support older Linux distributions.

The Ubuntu archive

All new releases of LXD get uploaded to the Ubuntu development release within a few minutes of the upstream release. That package is then used to seed all the other sources of packages for Ubuntu users.

If you are using the Ubuntu development release (16.04), you can simply do:

sudo apt install lxd

If you are running Ubuntu 14.04, we have backport packages available for you with:

sudo apt -t trusty-backports install lxd

The Ubuntu Core store

Users of Ubuntu Core on the stable release can install LXD with:

sudo snappy install lxd.stgraber

The official Ubuntu PPA

Users of other Ubuntu releases such as Ubuntu 15.10 can find LXD packages in the following PPA (Personal Package Archive):

sudo apt-add-repository ppa:ubuntu-lxc/stable
sudo apt update
sudo apt dist-upgrade
sudo apt install lxd

The Gentoo archive

Gentoo has pretty recent LXD packages available too; you can install them with:

sudo emerge --ask lxd

From source

Building LXD from source isn’t very difficult if you are used to building Go projects. Note however that you will need the LXC development headers. In order to run LXD, your distribution also needs a recent Linux kernel (3.13 at least), recent LXC (1.1.5 or higher), LXCFS and a version of shadow that supports user sub-uid/gid allocations.

The latest instructions on building LXD from source can be found in the upstream README.

Networking on Ubuntu

The Ubuntu packages provide you with a “lxdbr0” bridge as a convenience. This bridge comes unconfigured by default, offering only IPv6 link-local connectivity through an HTTP proxy.

To reconfigure the bridge and add some IPv4 or IPv6 subnet to it, you can run:

sudo dpkg-reconfigure -p medium lxd

Or go through the whole LXD step by step setup (see below) with:

sudo lxd init

Storage backends

LXD supports a number of storage backends. It’s best to know which backend you want to use before you start using LXD, as we do not support moving existing containers or images between backends.

A feature comparison table of the different backends can be found here.

ZFS

Our recommendation is ZFS as it supports all the features LXD needs to offer the fastest and most reliable container experience. This includes per-container disk quotas, immediate snapshot/restore, optimized migration (send/receive) and instant container creation from an image. It is also considered more mature than btrfs.

To use ZFS with LXD, you first need ZFS on your system.

If using Ubuntu 16.04, simply install it with:

sudo apt install zfsutils-linux

On Ubuntu 15.10, you can install it with:

sudo apt install zfsutils-linux zfs-dkms

And on older releases, you can use the zfsonlinux PPA:

sudo apt-add-repository ppa:zfs-native/stable
sudo apt update
sudo apt install ubuntu-zfs

To configure LXD to use it, simply run:

sudo lxd init

This will ask you a few questions about what kind of ZFS configuration you’d like for your LXD and then configure it for you.
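As a rough illustration of what to expect (the exact wording and defaults may differ between LXD versions), the prompts look something like this:

Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 10
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes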

btrfs

If ZFS isn’t available, then btrfs offers the same level of integration, with the exception that it doesn’t properly report disk usage inside the container (quotas do apply though).
btrfs also has the nice property that it can nest properly, which ZFS doesn’t yet. That is, if you plan on using LXD inside LXD, btrfs is worth considering.

LXD doesn’t need any configuration to use btrfs; you just need to make sure that /var/lib/lxd is stored on a btrfs filesystem and LXD will automatically make use of it for you.
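If you have a spare disk or partition to dedicate to it, a minimal sketch of that setup (with /dev/sdb as a placeholder for your own device, and assuming LXD isn’t running yet or has been stopped) would be:

sudo mkfs.btrfs /dev/sdb
sudo mount /dev/sdb /var/lib/lxd

You’d then want a matching /etc/fstab entry so the filesystem is mounted again at boot.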

LVM

If ZFS and btrfs aren’t an option for you, you can still get some of their benefits by using LVM instead. LXD uses LVM with thin provisioning, creating an LV for each image and container and using LVM snapshots as needed.

To configure LXD to use LVM, create an LVM VG and run:

lxc config set storage.lvm_vg_name "THE-NAME-OF-YOUR-VG"
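If you don’t have a volume group yet, a minimal sketch of creating one on a spare disk (again, /dev/sdb and the VG name are just placeholders) would be:

sudo pvcreate /dev/sdb
sudo vgcreate my-lxd-vg /dev/sdb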

By default LXD uses ext4 as the filesystem for all the LVs. You can change that to XFS if you’d like:

lxc config set storage.lvm_fstype xfs

Simple directory

If none of the above are an option for you, LXD will still work but without any of those advanced features. It will simply create a directory for each container, unpack the image tarballs for each container creation and do a full filesystem copy on container copy or snapshot.

All features are supported except for disk quotas, but this is very wasteful of disk space and also very slow. If you have no other choice, it will work, but you should really consider one of the alternatives above.

More daemon configuration

The complete list of configuration options for the LXD daemon can be found here.

Network configuration

By default, LXD doesn’t listen on the network. The only way to talk to it is over a local unix socket at /var/lib/lxd/unix.socket.

To have it listen on the network, there are two useful keys to set:

lxc config set core.https_address [::]
lxc config set core.trust_password some-secret-string

The first instructs LXD to bind to the “::” IPv6 address, that is, all addresses on the machine. You can obviously replace this with a specific IPv4 or IPv6 address and can append the TCP port you’d like it to bind to (defaults to 8443).
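For example, to listen only on a specific IPv4 address and a non-default port (address and port here are just examples), you could instead do:

lxc config set core.https_address 192.168.1.50:8444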

The second sets a password which remote clients use to add themselves to the LXD certificate trust store. When adding the LXD host, they will be prompted for the password; if it matches, the LXD daemon will store their client certificate and they’ll be trusted, never needing the password again (it can be changed or unset entirely at that point).

You can also choose not to set a password and instead manually trust each new client by having them give you their “client.crt” file (from ~/.config/lxc) and adding it to the trust store yourself with:

lxc config trust add client.crt
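Whichever trust mechanism you use, the remote client then adds your server with something like the following (the remote name and hostname are just examples); with the password method it will be prompted for the password, with the manual method it will be trusted right away:

lxc remote add my-server lxd01.example.net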

Proxy configuration

In most setups, you’ll want the LXD daemon to fetch images from remote servers.

If you are in an environment where you must go through an HTTP(S) proxy to reach the outside world, you’ll want to set a few configuration keys or alternatively make sure that the standard PROXY environment variables are set in the daemon’s environment.

lxc config set core.proxy_http http://squid01.internal:3128
lxc config set core.proxy_https http://squid01.internal:3128
lxc config set core.proxy_ignore_hosts image-server.local

With those, all transfers initiated by LXD will use the squid01.internal HTTP proxy, except for traffic to the server at image-server.local.

Image management

LXD does dynamic image caching. When instructed to create a container from a remote image, it will download that image into its image store, mark it as cached and record its origin. After a number of days without seeing any use (10 by default), the image is automatically removed. Every few hours (6 by default), LXD also goes looking for a newer version of the image and updates its local copy.

All of that can be configured through the following configuration options:

lxc config set images.remote_cache_expiry 5
lxc config set images.auto_update_interval 24
lxc config set images.auto_update_cached false

Here we are instructing LXD to override all of those defaults and instead cache images for up to 5 days since they were last used, look for image updates every 24 hours and only update images which were directly marked as such (the --auto-update flag in lxc image copy) but not the images which were automatically cached by LXD.
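For example, to copy an image into the local store and mark it for automatic updates, you would use something like:

lxc image copy ubuntu:16.04 local: --auto-update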

Conclusion

At this point you should have a working version of the latest LXD release. You can now start playing with it on your own or wait for the next blog post, where we’ll create our first container and play with the LXD command line tool.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you can’t wait until the next few posts to try LXD, you can take our guided tour online and try it for free right from your web browser!

LXD 2.0: Introduction to LXD [1/12]

This is the first blog post in this series about LXD 2.0.

A few common questions about LXD

What’s LXD?

At its simplest, LXD is a daemon which provides a REST API to drive LXC containers.

Its main goal is to provide a user experience that’s similar to that of virtual machines but using Linux containers rather than hardware virtualization.

How does LXD relate to Docker/Rkt?

This is by far the question we get the most, so let’s address it immediately!

LXD focuses on system containers, also called infrastructure containers. That is, an LXD container runs a full Linux system, exactly as it would when run on bare metal or in a VM.

Those containers will typically be long running and based on a clean distribution image. Traditional configuration management tools and deployment tools can be used with LXD containers exactly as you would use them for a VM, cloud instance or physical machine.

In contrast, Docker focuses on ephemeral, stateless, minimal containers that won’t typically get upgraded or re-configured but instead just be replaced entirely. That makes Docker and similar projects much closer to a software distribution mechanism than a machine management tool.

The two models aren’t mutually exclusive either. You can absolutely use LXD to provide full Linux systems to your users who can then install Docker inside their LXD container to run the software they want.

Why LXD?

We’ve been working on LXC for a number of years now. LXC is great at what it does, that is, it provides a very good set of low-level tools and a library to create and manage containers.

However, those kinds of low-level tools aren’t necessarily very user friendly. They require a lot of initial knowledge to understand what they do and how they work. Keeping backward compatibility with older containers and deployment methods has also prevented LXC from using some security features by default, leading to more manual configuration for users.

We see LXD as the opportunity to address those shortcomings. On top of being a long running daemon which lets us address a lot of the LXC limitations like dynamic resource restrictions, container migration and efficient live migration, it also gave us the opportunity to come up with a new default experience, that’s safe by default and much more user focused.

The main LXD components

There are a number of main components that make up LXD; they are typically visible in the LXD directory structure, in its command line client and in the API structure itself.

Containers

Containers in LXD are made of:

  • A filesystem (rootfs)
  • A list of configuration options, including resource limits, environment, security options and more
  • A bunch of devices like disks, character/block unix devices and network interfaces
  • A set of profiles the container inherits configuration from (see below)
  • Some properties (container architecture, ephemeral or persistent and the name)
  • Some runtime state (when using CRIU for checkpoint/restore)

Snapshots

Container snapshots are identical to containers except for the fact that they are immutable: they can be renamed, destroyed or restored, but cannot be modified in any way.

It is worth noting that because we allow storing the container runtime state, this effectively gives us the concept of “stateful” snapshots. That is, the ability to roll back the container, including its CPU and memory state at the time of the snapshot.

Images

LXD is image based; all LXD containers come from an image. Images are typically clean Linux distribution images similar to what you would use for a virtual machine or cloud instance.

It is possible to “publish” a container, making an image from it which can then be used by the local or remote LXD hosts.

Images are uniquely identified by their sha256 hash and can be referenced using their full or partial hash. Because typing long hashes isn’t particularly user friendly, images can also have any number of properties applied to them, allowing for an easy search through the image store. Aliases can also be set as a one-to-one mapping between a unique user friendly string and an image hash.
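For example, creating a friendly alias for an image already in your image store and then launching from it might look like this (the fingerprint, alias and container name are all placeholders):

lxc image alias create my-ubuntu 0123456789ab
lxc launch my-ubuntu c1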

LXD comes pre-configured with three remote image servers (see remotes below):

  • “ubuntu:” provides stable Ubuntu images
  • “ubuntu-daily:” provides daily builds of Ubuntu
  • “images:” is a community run image server providing images for a number of other Linux distributions using the upstream LXC templates

Remote images are automatically cached by the LXD daemon and kept for a number of days (10 by default) after they were last used, at which point they expire.

Additionally, LXD automatically updates remote images (unless told otherwise) so that the freshest version of the image is always available locally.

Profiles

Profiles are a way to define container configuration and container devices in one place and then have it apply to any number of containers.

A container can have multiple profiles applied to it. When building the final container configuration (known as expanded configuration), the profiles will be applied in the order they were defined in, overriding each other when the same configuration key or device is found. Then the local container configuration is applied on top of that, overriding anything that came from a profile.

LXD ships with two pre-configured profiles:

  • “default” is automatically applied to all containers unless an alternative list of profiles is provided by the user. This profile currently does just one thing: define an “eth0” network device for the container.
  • “docker” is a profile you can apply to a container in which you want to run Docker containers. It requests that LXD load some required kernel modules, turns on container nesting and sets up a few device entries (see the example below).
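For example, launching a container with both of those profiles applied, in that order (the image and container names are just examples), would look like:

lxc launch ubuntu:16.04 my-docker-host -p default -p docker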

Remotes

As I mentioned earlier, LXD is a networked daemon. The command line client that comes with it can therefore talk to multiple remote LXD servers as well as image servers.

By default, our command line client comes with the following remotes defined:

  • local: (default remote, talks to the local LXD daemon over a unix socket)
  • ubuntu: (Ubuntu image server providing stable builds)
  • ubuntu-daily: (Ubuntu image server providing daily builds)
  • images: (images.linuxcontainers.org image server)

Any combination of those remotes can be used with the command line client.

You can also add any number of remote LXD hosts that were configured to listen on the network, either anonymously if they are a public image server or after going through authentication when managing remote containers.

It’s that remote mechanism that makes it possible to interact with remote image servers as well as copy or move containers between hosts.
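As a quick sketch of that (the remote name, hostname and container name are placeholders, and the target must be listening on the network and trust your client):

lxc remote add lxd02 lxd02.example.net
lxc copy my-container lxd02:my-container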

Security

One aspect that was core to our design of LXD was to make it as safe as possible while allowing modern Linux distributions to run inside it unmodified.

The main security features used by LXD through its use of the LXC library are:

  • Kernel namespaces. Especially the user namespace as a way to keep everything the container does separate from the rest of the system. LXD uses the user namespace by default (contrary to LXC) and allows for the user to turn it off on a per-container basis (marking the container “privileged”) when absolutely needed.
  • Seccomp. To filter some potentially dangerous system calls.
  • AppArmor. To provide additional restrictions on mounts, sockets, ptrace and file access. Specifically restricting cross-container communication.
  • Capabilities. To prevent the container from loading kernel modules, altering the host system time, …
  • CGroups. To restrict resource usage and prevent DoS attacks against the host.

Rather than exposing those features directly to the user as LXC would, we’ve built a new configuration language which abstracts most of them into something that’s more user friendly. For example, one can tell LXD to pass any host device into the container without having to also look up its major/minor numbers to manually update the cgroup policy.
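For example, passing the host’s first serial port into a container is a single command (the container name is a placeholder):

lxc config device add my-container ttyS0 unix-char path=/dev/ttyS0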

Communications with LXD itself are secured using TLS 1.2 with a very limited set of allowed ciphers. When dealing with hosts outside of the system certificate authority, LXD will prompt the user to validate the remote fingerprint (SSH style), then cache the certificate for future use.

The REST API

Everything that LXD does is done over its REST API. There is no other communication channel between the client and the daemon.

The REST API can be accessed over a local unix socket, requiring only group membership for authentication, or over an HTTPS socket using a client certificate for authentication.

The structure of the REST API matches the different components described above and is meant to be very simple and intuitive to use.
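As an illustration, if your curl supports unix sockets (7.40 or later), you can query the API root directly, typically as root or as a member of the lxd group (the hostname in the URL is arbitrary):

curl -s --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0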

When a more complex communication mechanism is required, LXD will negotiate websockets and use those for the rest of the communication. This is used for interactive console sessions, container migration and event notification.

With LXD 2.0 comes the stable /1.0 API. We will not break backward compatibility within the /1.0 API endpoint; however, we may add extra features to it, which we’ll signal by declaring additional API extensions that the client can look for.

Containers at scale

While LXD provides a good command line client, that client isn’t meant to manage thousands of containers on multiple hosts. For that kind of use case, we have nova-lxd, an OpenStack plugin that makes OpenStack treat LXD containers in the exact same way it would treat VMs.

This allows for very large deployments of LXD on a large number of hosts, using the OpenStack APIs to manage network, storage and load-balancing.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you can’t wait until the next few posts to try LXD, you can take our guided tour online and try it for free right from your web browser!

LXD 2.0: Blog post series [0/12]

As we are getting closer and closer to tagging the final releases of LXC, LXD and LXCFS 2.0, I figured it would be a good idea to talk a bit about everything that went into LXD since we first started that project a year and a half ago.

This is going to be a blog post series similar to what I’ve done for LXC 1.0 a couple of years back.

The topics that will be covered are:

I’m hoping to post a couple of those every week for the coming month and a half leading to the Ubuntu 16.04 release.

If you can’t wait for all of those to come out to play with LXD, you can also take the guided tour and play with LXD online through our demo page.

NorthSec 2015: behind the scenes

TLDR: NorthSec is an incredible security event; our CTF simulates a whole internet for every participating team. This allows us to create just about anything, from a locked down country to millions of vulnerable IoT devices spread across the globe. However, that flexibility comes at a high cost hardware-wise; as we’re getting bigger and bigger, we need more and more powerful servers and networking gear. We’re very actively looking for sponsors, so get in touch with me or just buy us something on Amazon!

What’s NorthSec?

NorthSec is one of the biggest on-site Capture The Flag (CTF) security contests in North America. It’s organized yearly over a weekend in Montreal (usually in May) and, since the last edition, has been accompanied by a two-day security conference before the CTF itself. The rest of this post will only focus on the CTF part though.

A view of the main room at NorthSec 2015

Teams arrive at the venue on Friday evening, get set up at their table and then get introduced to this year’s scenario and given access to our infrastructure. There they will have to fight their way through challenges, each earning them points and letting them go further and further. On Sunday afternoon, the top 3 teams are awarded their prize and we wrap up for the year.

Size-wise, for the past two years we’ve had a physical limit of up to 32 teams of 8 participants, plus a bunch of extra unaffiliated visitors. For the 2016 edition, we’re raising this to 50 teams for a grand total of 400 participants, thanks to some shuffling at the venue making more room for us.

Why is it special?

The above may sound pretty simple and straightforward; however, there are a few important details that set NorthSec apart from other CTFs.

  • It is entirely on-site. There are some very big online CTFs out there but very few on-site ones. Having everyone participating in the same room is valuable from a networking point of view but also ensures fairness by enforcing fixed size teams and equal network bandwidth and latency.
  • Every team gets its very own copy of the whole infrastructure. There are no shared services in the simulated world we provide them. That means one team’s actions cannot impact another.
  • Each simulation is its own virtual world with its own instance of the internet; we use hundreds of LXC containers and thousands of VLANs and networks FOR EVERY TEAM to provide the most realistic and complete environment you can think of.
World map of our fake internet

What’s our infrastructure like?

Due to the very high bandwidth and low latency requirements, most of the infrastructure is hosted on premises and on our hardware. We do plan on offloading Windows virtual machines to a public cloud for the next edition though.

We also provide a mostly legacy-free environment to our contestants: all of our challenges are connected to IPv6-only networks and run on 64-bit Ubuntu LTS in LXC with state-of-the-art security configurations.

Our rack, on location at NorthSec 2015

All in all, for 32 teams (last year’s edition), we had:

  • 48000 virtual network interfaces
  • 2000 virtual carriers
  • 16000 BGP routers
  • 17000 Ubuntu containers
  • 100 Windows virtual machines
  • 20000 routing table entries

And all of that was running on:

  • Two firewalls (DELL SC1425)
  • Two infrastructure servers (DELL SC1425)
  • One management server (HP DL380 G5)
  • Four main contest hosts (HP DL380 G5)
  • Three backup contest hosts (DELL C6100)

On average we had 7 full simulations and 21 virtual machines running on every host (the backup hosts only had one each). That means each of the main contest hosts had:

  • 10500 virtual network interfaces
  • 435 virtual carriers
  • 3500 BGP routers
  • 3700 Ubuntu containers
  • 21 Windows virtual machines
  • 4375 routing table entries

Not too bad for servers that are now 10 years old (SC1425) or getting close to it (DL380 G5).

Past infrastructure challenges

In the past editions we’ve found numerous bugs in the various technologies we use when put under such a crazy load:

  • A variety of switch firmware bugs when dealing with several thousand IPv6-only networks.
  • Multiple Linux IPv6 kernel bugs (and one security issue) also related to an excess of IPv6 multicast traffic.
  • Several memory leaks and other bugs in LXC and related components that become very visible when you’re running upwards of 10000 containers.
  • Several more Linux kernel bugs related to performance scaling as we create more and more namespaces and nested namespaces.

As our infrastructure staff is very invested in these technologies by being upstream developers or contributors to the main projects we use, those bugs were all rapidly reported, discussed and fixed. We always look forward to the next NorthSec as an opportunity to test the latest technology at scale in a completely controlled environment.

How can you help?

As I mentioned, we’ve been capped at 32 teams and around 300 attendees for the past two years. Our existing hardware was barely sufficient to handle the load during those two editions; we urgently need to refresh our hardware to offer the best possible experience to our participants.

We’re planning on replacing most if not all of our hardware with slightly more recent equivalents, also upgrading from rotating drives to SSDs and improving our network. On the software side, we’ll be upgrading to a newer Linux kernel, possibly to Ubuntu 16.04, switching from btrfs to ZFS and from LXC to LXD.

We are a Canadian non-profit organization and all our staff are volunteers, so we rely very heavily on sponsors to be able to make the event a success.

If you or your company would like to help by sponsoring our infrastructure, get in touch with me. We have several sponsoring levels and can get you the visibility you’d like, ranging from a mention on our website and at the event to on-site presence with a recruitment booth and even, if our interests align, inclusion of your product in some of our challenges.

We also have an Amazon wishlist of smaller (cheaper) items that we need to buy in the near future. If you buy something from the list, get in touch so we can properly thank you!

Oh, and as I briefly mentioned at the beginning, we have a two-day, single-track conference ahead of the CTF. We’re actively looking for speakers; if you have something interesting to present, the CFP is here.

Extra resources

Getting started with LXD – the container lightervisor

Introduction

For the past 6 months, Serge Hallyn, Tycho Andersen, Chuck Short, Ryan Harper and I have been very busy working on a new container project called LXD.

Ubuntu 15.04, due to be released this Thursday, will contain LXD 0.7 in its repository. These are still early days and, while we’re confident LXD 0.7 is functional and ready for users to experiment with, we still have some work to do before it’s ready for critical production use.

So what’s LXD?

LXD is what we call our container “lightervisor”. The core of LXD is a daemon which offers a REST API to drive full system containers just like you’d drive virtual machines.

The LXD daemon runs on every container host; client tools then connect to those daemons to manage their containers or to move or copy them to another LXD.

We provide two such clients:

  • A command line tool called “lxc”
  • An OpenStack Nova plugin called nova-compute-lxd

The former is mostly aimed at small deployments ranging from a single machine (your laptop) to a few dozen hosts. The latter seamlessly integrates inside your OpenStack infrastructure and lets you manage containers exactly like you would virtual machines.

Why LXD?

LXC has been around for about 7 years now. It evolved from a set of very limited tools, which would get you something only marginally better than a chroot, all the way to the stable set of tools, stable library and active user and development community that we have today.

Over those years, a lot of extra security features were added to the Linux kernel and LXC grew support for all of them. As we saw the need for people to build their own solutions on top of LXC, we developed a public API and a set of bindings. And last year, we put out our first long-term support release, which has been a great success so far.

That being said, for a while now, we’ve been wanting to do a few big changes:

  • Make LXC secure by default (rather than it being optional).
  • Completely rework the tools to make them simpler and less confusing to newcomers.
  • Rely on container images rather than using “templates” to build them locally.
  • Proper checkpoint/restore support (live migration).

Unfortunately, solving any of those means doing very drastic changes to LXC which would likely break our existing users or at least force them to rethink the way they do things.

Instead, LXD is our opportunity to start fresh. We’re keeping LXC as the great low-level container manager that it is, and building LXD on top of it, using LXC’s API to do all the low-level work. That achieves the best of both worlds: we keep our low-level container manager with its API and bindings but skip its tools and templates, replacing those with the new experience that LXD provides.

How does LXD relate to LXC, Docker, Rocket and other container projects?

LXD is currently based on top of LXC. It uses the stable LXC API to do all the container management behind the scenes, adding the REST API on top and providing a much simpler, more consistent user experience.

The focus of LXD is on system containers. That is, a container which runs a clean copy of a Linux distribution or a full appliance. From a design perspective, LXD doesn’t care about what’s running in the container.

That’s very different from Docker or Rocket, which are application container managers (as opposed to system container managers) and so focus on distributing apps as containers, caring very much about what runs inside the container.

There is absolutely nothing wrong with using LXD to run a bunch of full containers which then run Docker or Rocket inside of them to run their different applications: let LXD manage the host resources for you, applying all the security restrictions to make the container safe, and then use whatever application distribution mechanism you want inside.

Getting started with LXD

The simplest way for somebody to try LXD is by using it with its command line tool. This can easily be done on your laptop or desktop machine.

On an Ubuntu 15.04 system (or by using ppa:ubuntu-lxc/lxd-stable on 14.04 or above), you can install LXD with:

sudo apt-get install lxd

Then either logout and login again to get your group membership refreshed, or use:

newgrp lxd

From that point on, you can interact with your newly installed LXD daemon.

The “lxc” command line tool lets you interact with one or multiple LXD daemons. By default it will interact with the local daemon, but you can easily add more of them.

As an easy way to start experimenting with remote servers, you can add our public LXD server at https://images.linuxcontainers.org:8443
That server is an image-only read-only server, so all you can do with it is list images, copy images from it or start containers from it.

You’ll have to do the following to add the server, list all of its images and then start a container from one of them:

lxc remote add images images.linuxcontainers.org
lxc image list images:
lxc launch images:ubuntu/trusty/i386 ubuntu-32

What the above does is define a new “remote” called “images” which points to images.linuxcontainers.org, then list all of its images and finally start a local container called “ubuntu-32” from the ubuntu/trusty/i386 image. The image will automatically be cached locally so that future containers are started instantly.

The “<remote name>:” syntax is used throughout the lxc client. When not specified, the default “local” remote is assumed. Should you only care about managing a remote server, the default remote can be changed with “lxc remote set-default”.
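For example, to make the “images” remote added above the default so you can drop the prefix entirely:

lxc remote set-default images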

Now that you have a running container, you can check its status and IP information with:

lxc list

Or get even more details with:

lxc info ubuntu-32

To get a shell inside the container, or to run any other command that you want, you may do:

lxc exec ubuntu-32 /bin/bash

And you can also directly pull or push files from/to the container with:

lxc file pull ubuntu-32/path/to/file .
lxc file push /path/to/file ubuntu-32/

When done, you can stop or delete your container with one of those:

lxc stop ubuntu-32
lxc delete ubuntu-32

What’s next?

The above should be a reasonably comprehensive guide to how to use LXD on a single system. Of course, that’s not the most interesting thing to do with LXD. All the commands shown above can work against multiple hosts; containers can be remotely created, moved around, copied, …

LXD also supports live migration, snapshots, configuration profiles, device pass-through and more.
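A couple of quick examples of those, reusing the container from above (the snapshot name is arbitrary):

lxc snapshot ubuntu-32 clean
lxc restore ubuntu-32 clean
lxc profile list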

I intend to write some more posts to cover those use cases and features as well as highlight some of the work we’re currently busy doing.

LXD is a pretty young but very active project. We’ve had great contributions from existing LXC developers as well as newcomers.

The project is entirely developed in the open at https://github.com/lxc/lxd. We keep track of upcoming features and improvements through the project’s issue tracker, so it’s easy to see what will be coming soon. We also have a set of issues marked “Easy” which are meant for new contributors as easy ways to get to know the LXD code and contribute to the project.

LXD is an Apache2 licensed project, written in Go and which doesn’t require a CLA to contribute to (we do however require the standard DCO Signed-off-by). It can be built with both golang and gccgo and so works on almost all architectures.

Extra resources

More information can be found on the official LXD website:
https://linuxcontainers.org/lxd

The code, issues and pull requests can all be found on Github:
https://github.com/lxc/lxd

And a good overview of the LXD design and its API may be found in our specs:
https://github.com/lxc/lxd/tree/master/specs

Conclusion

LXD is a new and exciting project. It’s an amazing opportunity to think fresh about system containers and provide the best user experience possible, alongside great features and rock solid security.

With 7 releases and close to a thousand commits by 20 contributors, it’s a very active, fast-paced project. Lots of things still remain to be implemented before we get to our 1.0 milestone release in early 2016, but looking at what was achieved in just 5 months, I’m confident we’ll have an incredible LXD in another 12 months!

For now, we’d welcome your feedback, so install LXD, play around with it, file bugs and let us know what’s important for you next.
