
LXD client on Windows and MacOS


LXD on other operating systems?

While LXD and especially its API have been designed in a mostly OS-agnostic way, the only OS supported for the daemon right now is Linux (and a rather recent Linux at that).

However since all the communications between the client and daemon happen over a REST API, there is no reason why our default client wouldn’t work on other operating systems.

And it does. In fact, we gate changes to the client on it building and passing unit tests on Linux, Windows and MacOS.

This means that you can run one or more LXD daemons on Linux systems on your network and then interact with those remotely from any Linux, Windows or MacOS machine.

Setting up your LXD daemon

We’ll be connecting to the LXD daemon over the network, so you’ll need to make sure it’s listening and has a password configured so that new clients can add themselves to the trust store.

This can be done with:

lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password "my-password"

In my case, that remote LXD can be reached at “djanet.maas.mtl.stgraber.net”. You’ll want to replace that with your own LXD server’s FQDN or IP address in the commands used below.

Windows client

Pre-built native binaries

Our Windows CI service builds a tarball for every commit. You can grab the latest one here:
https://ci.appveyor.com/project/lxc/lxd/branch/master/artifacts

Then unpack the archive and open a command prompt in the directory where you unpacked the lxc.exe binary.

Build from source

Alternatively, you can build it from source, by first installing Go using the latest MSI based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

And then in a command prompt, run:

git config --global http.https://gopkg.in.followRedirects true
go get -v -x github.com/lxc/lxd/lxc

Use Ubuntu on Windows (“bash”)

For this, you need to use Windows 10 and have the Windows Subsystem for Linux enabled.
With that done, start an Ubuntu shell by launching “bash”. And you’re done.
The LXD client is installed by default in the Ubuntu 16.04 image.
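
To quickly confirm the client is there, you can check its version from that shell (the exact version shown will depend on the Ubuntu release):

lxc --version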

Interact with the remote server

Regardless of which method you picked, you’ve now got access to the “lxc” command and can add your remote server.
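
As a sketch (using the example server name from earlier, which you’ll want to replace with your own), adding the remote and listing its containers looks something like this:

lxc remote add djanet djanet.maas.mtl.stgraber.net
lxc list djanet:

The first command asks you to confirm the server’s certificate fingerprint and then prompts for the trust password set earlier.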

The native build does have a few limitations related to Windows terminal escape codes, which break things like arrow keys and password hiding. The Ubuntu on Windows route uses the Linux version of the LXD client and so doesn’t suffer from those limitations.

MacOS client

Even though we do have MacOS CI through Travis, it doesn’t host artifacts for us, so we don’t have prebuilt binaries for people to download.

Build from source

Similarly to the Windows instructions, you can build the LXD client from source, by first installing Go using the latest DMG based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

Once that’s done, open a new Terminal window and run:

export GOPATH=~/go
go get -v -x github.com/lxc/lxd/lxc
sudo ln -s ~/go/bin/lxc /usr/local/bin/

At which point you can use the “lxc” command.

Conclusion

The LXD client can be built for all the main operating systems and on just about every architecture. This makes it very easy for anyone to interact with existing LXD servers, whether they’re using a Linux machine themselves or not.

Thanks to our pretty strict backward compatibility rules, the version of the client doesn’t really matter. Older clients can talk to newer servers and newer clients can talk to older servers. Obviously, in both cases some features will not be available, but normal container workflow operations will work fine.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it


Ubuntu Core in LXD containers


What’s Ubuntu Core?

Ubuntu Core is a version of Ubuntu that’s fully transactional and entirely based on snap packages.

Most of the system is read-only. All installed applications come from snap packages and all updates are done using transactions, meaning that should anything go wrong at any point during a package or system update, the system can revert to the previous state and report the failure.

The current release of Ubuntu Core is called series 16 and was released in November 2016.

Note that on Ubuntu Core systems, only snap packages using strict confinement can be installed (no “classic” snaps) and that a good number of snaps will not fully work in this environment or will require some manual intervention (creating users and groups, …). Ubuntu Core gets improved on a weekly basis as new releases of snapd and the “core” snap are put out.

Requirements

As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following:

  • An up to date Ubuntu system using the official Ubuntu kernel
  • An up to date version of LXD
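
A quick way to check both on the host (the versions shown will obviously vary):

uname -r
lxd --version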

Creating an Ubuntu Core container

The Ubuntu Core images are currently published on the community image server.
You can launch a new container with:

stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
Creating ubuntu-core
Starting ubuntu-core

The container will take a few seconds to start, first running a first-stage loader that determines which read-only image to use and sets up the writable layers. You don’t want to interrupt the container at that stage, and “lxc exec” will likely just fail as pretty much nothing is available at that point.

Seconds later, “lxc list” will show the container IP address, indicating that it’s booted into Ubuntu Core:

stgraber@dakara:~$ lxc list
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
|     NAME    |  STATE  |          IPV4        |                      IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
| ubuntu-core | RUNNING | 10.90.151.104 (eth0) | 2001:470:b368:b2b5:216:3eff:fee1:296f (eth0) | PERSISTENT | 0         |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+

You can then interact with that container the same way you would any other:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap list
Name       Version     Rev  Developer  Notes
core       16.04.1     394  canonical  -
pc         16.04-0.8   9    canonical  -
pc-kernel  4.4.0-45-4  37   canonical  -
root@ubuntu-core:~#

Updating the container

If you’ve been tracking the development of Ubuntu Core, you’ll know that those versions above are pretty old. That’s because the disk images that are used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems will automatically update once a day and then automatically reboot to boot onto the new version (and revert if this fails).

If you want to immediately force an update, you can do it with:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap refresh
pc-kernel (stable) 4.4.0-53-1 from 'canonical' upgraded
core (stable) 16.04.1 from 'canonical' upgraded
root@ubuntu-core:~# snap version
snap 2.17
snapd 2.17
series 16
root@ubuntu-core:~#

And then reboot the system and check the snapd version again:

root@ubuntu-core:~# reboot
root@ubuntu-core:~# 

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap version
snap 2.21
snapd 2.21
series 16
root@ubuntu-core:~#

You can get a history of all snapd interactions with:

stgraber@dakara:~$ lxc exec ubuntu-core snap changes
ID  Status  Spawn                 Ready                 Summary
1   Done    2017-01-31T05:14:38Z  2017-01-31T05:14:44Z  Initialize system state
2   Done    2017-01-31T05:14:40Z  2017-01-31T05:14:45Z  Initialize device
3   Done    2017-01-31T05:21:30Z  2017-01-31T05:22:45Z  Refresh all snaps in the system

Installing some snaps

Let’s start with the simplest snap of all, the good old Hello World:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install hello-world
hello-world 6.3 from 'canonical' installed
root@ubuntu-core:~# hello-world
Hello World!

And then move on to something a bit more useful:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install nextcloud
nextcloud 11.0.1snap2 from 'nextcloud' installed

Then hit your container over HTTP and you’ll get to your newly deployed Nextcloud instance.
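
For example, using the IP address from the earlier “lxc list” output (yours will differ), you can confirm from the host that it responds:

lxc list ubuntu-core
curl -sI http://10.90.151.104/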

If you feel like testing the latest LXD straight from git, you can do so with:

stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install lxd --edge
lxd (edge) git-c6006fb from 'canonical' installed
root@ubuntu-core:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]: 

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]? 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And because container inception never gets old, let’s run Ubuntu Core 16 inside Ubuntu Core 16:

root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
Creating nested-core
Starting nested-core 
root@ubuntu-core:~# lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4        |                       IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| nested-core | RUNNING | 10.71.135.21 (eth0) | fd42:2861:5aad:3842:216:3eff:feaf:e6bd (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+

Conclusion

If you ever wanted to try Ubuntu Core, this is a great way to do it. It’s also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.

Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.

And lastly, a word of warning: those images are considered good enough for testing, but they aren’t officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images onto the official Ubuntu image server in the near future.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it


LXD on Debian (using snapd)


Introduction

So far all my blog posts about LXD have been assuming an Ubuntu host with LXD installed from packages, as a snap or from source.

But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find packages in the following Linux distributions (let me know if I missed one):

We have also had several reports of LXD being used on CentOS and Fedora, where users built it from source using the distribution’s liblxc (or, in the case of CentOS, from an external repository).

One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

But there is an easy alternative that will get you a working LXD on Debian today!
Use the same LXD snap package as I mentioned in a previous post, but on Debian!

Requirements

  • A Debian “testing” (stretch) system
  • The stock Debian kernel, without AppArmor support
  • If you want to use ZFS with LXD, the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system (see the sketch right after this list)
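
A minimal sketch of that ZFS preparation (run as root; the mirror URL below is just an example, keep whatever your system already uses):

# Make sure the stretch entry in /etc/apt/sources.list includes “contrib”, e.g.:
#   deb http://deb.debian.org/debian stretch main contrib
apt update
apt install -y zfsutils-linux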

Installing snapd and LXD

Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

apt install snapd
snap install lxd

If you’ve never used snapd before, you’ll have to either log out and log back in to update your PATH, or just update your current session with:

. /etc/profile.d/apps-bin-path.sh

And now it’s time to configure LXD with:

root@debian:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

And finally, you can start using LXD:

root@debian:~# lxc launch images:debian/stretch debian
Creating debian
Starting debian

root@debian:~# lxc launch ubuntu:16.04 ubuntu
Creating ubuntu
Starting ubuntu

root@debian:~# lxc launch images:centos/7 centos
Creating centos
Starting centos

root@debian:~# lxc launch images:archlinux archlinux
Creating archlinux
Starting archlinux

root@debian:~# lxc launch images:gentoo gentoo
Creating gentoo
Starting gentoo

And enjoy your fresh collection of Linux distributions:

root@debian:~# lxc list
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

Conclusion

The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

There are still a number of problems with the LXD snap which may or may not be blockers for your own use. The main ones at this point are:

  • All containers are shutdown and restarted on upgrades
  • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that an “lxd” group exists on your system, add whoever should be able to manage LXD to that group, and then restart the LXD daemon.
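
A sketch of those steps, assuming the snap-packaged daemon (replace “myuser” with the account that should manage LXD; the service name below is an assumption based on the snap install):

# Create the group if needed and add the user to it
getent group lxd >/dev/null || groupadd --system lxd
usermod -aG lxd myuser

# Restart the LXD daemon so it picks up the new group
systemctl restart snap.lxd.daemon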

Extra information

The snapd website can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it


Running Kubernetes inside LXD


Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to set up a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup that provides containers with network access, and that you have at least 10GB of disk space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576  
sudo sysctl fs.inotify.max_queued_events=1048576  
sudo sysctl fs.inotify.max_user_watches=1048576  
sudo sysctl vm.max_map_count=262144
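
Those sysctl changes don’t survive a reboot. If you want them to persist, one way (a sketch, the file name is arbitrary) is to drop them into /etc/sysctl.d:

cat <<EOF | sudo tee /etc/sysctl.d/99-lxd-kubernetes.conf
fs.inotify.max_user_instances = 1048576
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
EOF
sudo sysctl --system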

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that very few of LXD’s security features will still be in effect for this container. Depending on how you feel about this, you may choose to run it on a different machine.

Note that all of this nevertheless remains better than instructions that would have you install everything directly on your host machine, if only because it’s very easy to remove it all in the end.
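
For reference, once you’re done experimenting, tearing everything down again is a single command (assuming you keep the container name used below):

lxc delete --force kubernetes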

lxc init ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | lxc config set kubernetes raw.lxc -
lxc start kubernetes

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install squashfuse -y
lxc exec kubernetes -- ln -s /bin/true /usr/local/bin/udevadm
lxc exec kubernetes -- snap install conjure-up --classic

And the last setup step is to configure LXD networking inside the container:

lxc exec kubernetes -- lxd init

Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

And that’s it for the container configuration itself; now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user-friendly tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up

Then, in the conjure-up interface:

  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Interact with your new Kubernetes

We can ask Juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

root@kubernetes:~# sudo -u ubuntu -i
ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
 address: microbot.10.97.218.226.xip.io
status: completed
timing:
 completed: 2017-01-13 10:26:14 +0000 UTC
 enqueued: 2017-01-13 10:26:11 +0000 UTC
 started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ kubectl.conjure-up-kubernetes-core-be8 get pods
NAME                             READY     STATUS              RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running             0          21m
microbot-1855935831-cn4bs        0/1       ContainerCreating   0          18s
microbot-1855935831-dh70k        0/1       ContainerCreating   0          18s
microbot-1855935831-fqwjp        0/1       ContainerCreating   0          18s
microbot-1855935831-ksmmp        0/1       ContainerCreating   0          18s
microbot-1855935831-mfvst        1/1       Running             0          18s
nginx-ingress-controller-bj5gh   1/1       Running             0          21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running   0          23m
microbot-1855935831-cn4bs        1/1       Running   0          2m
microbot-1855935831-dh70k        1/1       Running   0          2m
microbot-1855935831-fqwjp        1/1       Running   0          2m
microbot-1855935831-ksmmp        1/1       Running   0          2m
microbot-1855935831-mfvst        1/1       Running   0          2m
nginx-ingress-controller-bj5gh   1/1       Running   0          23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
 <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load balanced between one of those 5 new instances.
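
For instance, a quick loop (just a convenience, not part of the original deployment) makes the load balancing visible, printing a different pod hostname on most runs:

ubuntu@kubernetes:~$ for i in 1 2 3 4 5; do curl -s http://microbot.10.97.218.226.xip.io | grep hostname; done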

Conclusion

As with OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex software in a self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing into hardware.

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it


Running snaps in LXD containers


Introduction

The LXD and AppArmor teams have been working to support loading AppArmor policies inside LXD containers for a while. This support, which finally landed in the latest Ubuntu kernels, now makes it possible to install snap packages.

Snap packages are a new way of distributing software, directly from the upstream and with a number of security features wrapped around them so that these packages can’t interfere with each other or cause harm to your system.

Requirements

There are a lot of moving pieces to get all of this working. The initial enablement was done on Ubuntu 16.10 with Ubuntu 16.10 containers, but all the needed bits are now progressively being pushed as updates to Ubuntu 16.04 LTS.

The easiest way to get this to work is with:

  • Ubuntu 16.10 host
  • Stock Ubuntu kernel (4.8.0)
  • Stock LXD (2.4.1 or higher)
  • Ubuntu 16.10 container with “squashfuse” manually installed in it

Installing the nextcloud snap

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.

lxc launch ubuntu:16.10 nextcloud
lxc exec nextcloud -- apt update
lxc exec nextcloud -- apt dist-upgrade -y
lxc exec nextcloud -- apt install squashfuse -y

And then, let’s install the “nextcloud” snap with:

lxc exec nextcloud -- snap install nextcloud

Finally, grab the container’s IP and access “http://<IP>” with your web browser:

stgraber@castiana:~$ lxc list nextcloud
+-----------+---------+----------------------+----------------------------------------------+
|    NAME   |  STATE  |         IPV4         |                     IPV6                     |
+-----------+---------+----------------------+----------------------------------------------+
| nextcloud | RUNNING | 10.148.195.47 (eth0) | fd42:ee2:5d34:25c6:216:3eff:fe86:4a49 (eth0) |
+-----------+---------+----------------------+----------------------------------------------+

(Screenshot: the Nextcloud login screen)

Installing the LXD snap in a LXD container

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it, this time with support for nested containers.

lxc launch ubuntu:16.10 lxd -c security.nesting=true
lxc exec lxd -- apt update
lxc exec lxd -- apt dist-upgrade -y
lxc exec lxd -- apt install squashfuse -y

Now let’s remove the LXD that came pre-installed in the container so we can replace it with the snap.

lxc exec lxd -- apt remove --purge lxd lxd-client -y

Because we already have a stable LXD on the host, we’ll make things a bit more interesting by installing the latest build from git master rather than the latest stable release:

lxc exec lxd -- snap install lxd --edge

The rest is business as usual for a LXD user:

stgraber@castiana:~$ lxc exec lxd bash
root@lxd:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

root@lxd:~# lxd.lxc launch images:archlinux arch
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

Creating arch
Starting arch

root@lxd:~# lxd.lxc list
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                      IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| arch | RUNNING | 10.106.137.64 (eth0) | fd42:2fcd:964b:eba8:216:3eff:fe8f:49ab (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+

And that’s it: you now have the latest LXD build installed inside a LXD container, running an archlinux container for you. That LXD build will update very frequently as we publish new builds to the edge channel several times a day.

Conclusion

It’s great to have snaps now install properly inside LXD containers. Production users can now setup hundreds of different containers, network them the way they want, setup their storage and resource limits through LXD and then install snap packages inside them to get the latest upstream releases of the software they want to run.

That’s not to say that everything is perfect yet. This is all built on some really recent kernel work, using unprivileged FUSE filesystem mounts and unprivileged AppArmor profile stacking and namespacing. There very likely still are some issues that need to get resolved in order to get most snaps to work identically to when they’re installed directly on the host.

If you notice discrepancies between a snap running directly on the host and a snap running inside a LXD container, you’ll want to look at the “dmesg” output, looking for any DENIED entry in there which would indicate AppArmor rejecting some request from the snap.
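
A quick way to filter for those entries (a generic example, nothing snap-specific):

dmesg | grep DENIED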

This typically indicates either a bug in AppArmor itself or in the way the AppArmor profiles are generated by snapd. If you find one of those issues, you can report it in #snappy on irc.freenode.net or file a bug at https://launchpad.net/snappy/+filebug so it can be investigated.

Extra information

More information on snap packages can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it
