LXD on Debian (using snapd)

Introduction

So far, all my blog posts about LXD have assumed an Ubuntu host with LXD installed from packages, as a snap or from source.

But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find LXD packages in several other Linux distributions (let me know if I missed one).

We have also had several reports of LXD being used on CentOS and Fedora, where users built it from source using the distribution’s liblxc (or, in the case of CentOS, from an external repository).

One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

But there is an easy alternative that will get you a working LXD on Debian today: use the same LXD snap package I mentioned in a previous post, just on Debian!

Requirements

  • A Debian “testing” (stretch) system
  • The stock Debian kernel, without AppArmor support
  • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system
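
If you plan on using ZFS, a minimal sketch of that last requirement might look like this (it assumes a simple one-line /etc/apt/sources.list; adjust the sed expression, or just edit the file by hand, to match your setup):

sed -i 's/stretch main$/stretch main contrib/' /etc/apt/sources.list   # enable the "contrib" component
apt update
apt install zfsutils-linux -y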

Installing snapd and LXD

Getting the latest stable LXD onto an up-to-date Debian testing system is just a matter of running:

apt install snapd
snap install lxd

If you have never used snapd before, you’ll have to either log out and log back in to update your PATH, or just update your existing one with:

. /etc/profile.d/apps-bin-path.sh
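
To make sure everything is in place, you can quickly check that the snap’s binaries are reachable and what version you ended up with:

which lxd lxc
lxd --version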

And now it’s time to configure LXD with:

root@debian:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.
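
If you’re curious about what “lxd init” just did, the default profile and the new bridge can be inspected with the usual commands (purely a sanity check):

lxc profile show default
lxc network list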

And finally, you can start using LXD:

root@debian:~# lxc launch images:debian/stretch debian
Creating debian
Starting debian

root@debian:~# lxc launch ubuntu:16.04 ubuntu
Creating ubuntu
Starting ubuntu

root@debian:~# lxc launch images:centos/7 centos
Creating centos
Starting centos

root@debian:~# lxc launch images:archlinux archlinux
Creating archlinux
Starting archlinux

root@debian:~# lxc launch images:gentoo gentoo
Creating gentoo
Starting gentoo

And enjoy your fresh collection of Linux distributions:

root@debian:~# lxc list
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

Conclusion

The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

  • All containers are shut down and restarted on upgrades
  • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that an “lxd” group exists on your system, add whoever you want to manage LXD to that group, then restart the LXD daemon.
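
A minimal sketch of that, assuming a hypothetical user named “myuser” and that the snap’s daemon runs as the “snap.lxd.daemon” systemd unit:

groupadd --system lxd                # create the group (skip if it already exists)
usermod -aG lxd myuser               # "myuser" is a placeholder, use your own username
systemctl restart snap.lxd.daemon    # restart the snap-provided LXD daemon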

Extra information

The snapd website can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Running Kubernetes inside LXD

Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup providing containers with network access, at least 10GB of space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576  
sudo sysctl fs.inotify.max_queued_events=1048576  
sudo sysctl fs.inotify.max_user_watches=1048576  
sudo sysctl vm.max_map_count=262144
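
Those values won’t survive a reboot. If you want them to persist, you can drop them into a sysctl.d snippet (the file name below is just an example) and reload:

cat <<EOF | sudo tee /etc/sysctl.d/99-lxd-kubernetes.conf
fs.inotify.max_user_instances = 1048576
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
EOF
sudo sysctl --system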

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that very few of LXD’s security features will still be in effect for this container. Depending on how you feel about this, you may choose to run this on a different machine.

Note that all of this still remains better than instructions that would have you install everything directly on your host machine, if only by making it very easy to remove it all in the end.

lxc init ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | lxc config set kubernetes raw.lxc -
lxc start kubernetes

Then we need to update the container and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install squashfuse -y
lxc exec kubernetes -- ln -s /bin/true /usr/local/bin/udevadm
lxc exec kubernetes -- snap install conjure-up --classic

And the last setup step is to configure LXD networking inside the container:

lxc exec kubernetes -- lxd init

Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

And that’s it for the container configuration itself, now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up

  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Interact with your new Kubernetes

We can now ask Juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

root@kubernetes:~# sudo -u ubuntu -i
ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
 address: microbot.10.97.218.226.xip.io
status: completed
timing:
 completed: 2017-01-13 10:26:14 +0000 UTC
 enqueued: 2017-01-13 10:26:11 +0000 UTC
 started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ kubectl.conjure-up-kubernetes-core-be8 get pods
NAME                             READY     STATUS              RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running             0          21m
microbot-1855935831-cn4bs        0/1       ContainerCreating   0          18s
microbot-1855935831-dh70k        0/1       ContainerCreating   0          18s
microbot-1855935831-fqwjp        0/1       ContainerCreating   0          18s
microbot-1855935831-ksmmp        0/1       ContainerCreating   0          18s
microbot-1855935831-mfvst        1/1       Running             0          18s
nginx-ingress-controller-bj5gh   1/1       Running             0          21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running   0          23m
microbot-1855935831-cn4bs        1/1       Running   0          2m
microbot-1855935831-dh70k        1/1       Running   0          2m
microbot-1855935831-fqwjp        1/1       Running   0          2m
microbot-1855935831-ksmmp        1/1       Running   0          2m
microbot-1855935831-mfvst        1/1       Running   0          2m
nginx-ingress-controller-bj5gh   1/1       Running   0          23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
 <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load-balanced across those 5 new instances.
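
For example, a quick loop against the same service address (the one returned by the Juju action above) makes the load balancing easy to see:

for i in $(seq 5); do
    curl -s http://microbot.10.97.218.226.xip.io | grep hostname
done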

Conclusion

As with OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex software in a self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and anyone who wants to try those technologies without investing in hardware.
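
And when you’re done experimenting, tearing it all down is a single command (assuming the container name used in this post):

lxc delete --force kubernetes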

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Running snaps in LXD containers

Introduction

The LXD and AppArmor teams have been working to support loading AppArmor policies inside LXD containers for a while. This support, which finally landed in the latest Ubuntu kernels, now makes it possible to install snap packages inside containers.

Snap packages are a new way of distributing software, directly from the upstream and with a number of security features wrapped around them so that these packages can’t interfere with each other or cause harm to your system.

Requirements

There are a lot of moving pieces to get all of this working. The initial enablement was done on Ubuntu 16.10 with Ubuntu 16.10 containers, but all the needed bits are now progressively being pushed as updates to Ubuntu 16.04 LTS.

The easiest way to get this to work is with:

  • Ubuntu 16.10 host
  • Stock Ubuntu kernel (4.8.0)
  • Stock LXD (2.4.1 or higher)
  • Ubuntu 16.10 container with “squashfuse” manually installed in it

Installing the nextcloud snap

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.

lxc launch ubuntu:16.10 nextcloud
lxc exec nextcloud -- apt update
lxc exec nextcloud -- apt dist-upgrade -y
lxc exec nextcloud -- apt install squashfuse -y

And then, let’s install that “nextcloud” snap with:

lxc exec nextcloud -- snap install nextcloud

Finally, grab the container’s IP and access “http://<IP>” with your web browser:

stgraber@castiana:~$ lxc list nextcloud
+-----------+---------+----------------------+----------------------------------------------+
|    NAME   |  STATE  |         IPV4         |                     IPV6                     |
+-----------+---------+----------------------+----------------------------------------------+
| nextcloud | RUNNING | 10.148.195.47 (eth0) | fd42:ee2:5d34:25c6:216:3eff:fe86:4a49 (eth0) |
+-----------+---------+----------------------+----------------------------------------------+
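
If you just want a quick check from the host that the snap’s web server is answering (using the IPv4 address from the listing above):

curl -sI http://10.148.195.47 | head -n1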

Nextcloud Login screen

Installing the LXD snap in a LXD container

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it, this time with support for nested containers.

lxc launch ubuntu:16.10 lxd -c security.nesting=true
lxc exec lxd -- apt update
lxc exec lxd -- apt dist-upgrade -y
lxc exec lxd -- apt install squashfuse -y

Now let’s remove the LXD that came pre-installed with the container so we can replace it with the snap.

lxc exec lxd -- apt remove --purge lxd lxd-client -y

Because we already have a stable LXD on the host, we’ll make things a bit more interesting by installing the latest build from git master rather than the latest stable release:

lxc exec lxd -- snap install lxd --edge

The rest is business as usual for a LXD user:

stgraber@castiana:~$ lxc exec lxd bash
root@lxd:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

root@lxd:~# lxd.lxc launch images:archlinux arch
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

Creating arch
Starting arch

root@lxd:~# lxd.lxc list
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                      IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| arch | RUNNING | 10.106.137.64 (eth0) | fd42:2fcd:964b:eba8:216:3eff:fe8f:49ab (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+

And that’s it, you now have the latest LXD build installed inside a LXD container and running an archlinux container for you. That LXD build will update very frequently as we publish new builds to the edge channel several times a day.
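
Should you later prefer the stable channel over edge inside that container, it’s just a matter of refreshing the snap onto the other channel:

lxc exec lxd -- snap refresh lxd --channel=stable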

Conclusion

It’s great to have snaps now install properly inside LXD containers. Production users can now setup hundreds of different containers, network them the way they want, setup their storage and resource limits through LXD and then install snap packages inside them to get the latest upstream releases of the software they want to run.

That’s not to say that everything is perfect yet. This is all built on some really recent kernel work, using unprivileged FUSE filesystem mounts and unprivileged AppArmor profile stacking and namespacing. There are very likely still some issues that need to be resolved before most snaps work identically to when they’re installed directly on the host.

If you notice discrepancies between a snap running directly on the host and a snap running inside a LXD container, you’ll want to look at the “dmesg” output for any DENIED entry, which would indicate AppArmor rejecting some request from the snap.
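
Something as simple as this is usually enough to spot them:

dmesg | grep -i denied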

This typically indicates either a bug in AppArmor itself or in the way the AppArmor profiles are generated by snapd. If you find one of those issues, you can report it in #snappy on irc.freenode.net or file a bug at https://launchpad.net/snappy/+filebug so it can be investigated.

Extra information

More information on snap packages can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Network management with LXD (2.3+)

Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that “lxdbr0” bridge that “lxd init” would have you configure, provide your own or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.150.19.1/24
 ipv4.nat: "true"
 ipv6.address: fd42:474b:622d:259d::1/64
 ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.0.3.1/24
 ipv4.nat: "true"
 ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container default eth0

Now, let’s say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge; just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device with the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
|  c1  | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And same goes for IPv6 but with the “ipv6.address” property instead.

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (such as when nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
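
Both of these are regular network properties, so setting them looks like everything else in this post (the domain below is only an example):

lxc network set testbr0 dns.domain lxd.example
lxc network set testbr0 dns.mode managed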

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

A LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is by using a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join it. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS, … the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.
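
A quick way to confirm it’s all working, assuming the profile attachment above, is to launch a container from the second host and check that it picks up an address from “edfu”’s pool:

root@djanet:~# lxc launch ubuntu:16.04 c1
root@djanet:~# lxc list c1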

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.edfu.id 10
root@edfu:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.

And that’s how easy cross-host networking can be with a recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

LXD 2.0: LXD and OpenStack [11/12]

This is the eleventh blog post in this series about LXD 2.0.

Introduction

First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn’t able to get networking going properly.

I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).

Requirements

This post assumes you’ve got a working LXD setup providing containers with network access, a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we’re running a full OpenStack here, this thing isn’t exactly light!

Setting up the container

OpenStack is made of a lot of different components doing a lot of different things. Some require additional privileges, so to make our lives easier, we’ll use a privileged container.

We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc init ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch, nbd"
printf "lxc.cap.drop=\nlxc.aa_profile=unconfined\n" | lxc config set openstack raw.lxc -
lxc config device add openstack mem unix-char path=/dev/mem
lxc start openstack

Then we need to update the container and install conjure-up, the deployment tool we’ll use to get OpenStack going.

lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install squashfuse -y
lxc exec openstack -- ln -s /bin/true /usr/local/bin/udevadm
lxc exec openstack -- snap install conjure-up --classic

And the last setup step is to configure LXD networking inside the container:

lxc exec openstack -- lxd init

Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

And that’s it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy OpenStack.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up

  • Select “OpenStack with NovaLXD”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Conjure-Up deploying OpenStack

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can’t just hit it from your web browser.
The easiest way around this is to set up a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where “<IP>” is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!

The OpenStack dashboard

You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.

Once it’s running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.

Conclusion

OpenStack is a pretty complex piece of software; it’s also not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scene to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It’s also one of the very few cases where multiple levels of container nesting actually make sense!
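
And since everything lives inside that single “openstack” container, cleaning up when you’re done is trivial:

lxc delete --force openstack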

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it
