This is the second blog post in this series about LXD 2.0.
Where to get LXD and how to install it
There are many ways to get the latest and greatest LXD. We recommend you use LXD with the latest LXC and Linux kernel to benefit from all its features but we try to degrade gracefully where possible to support older Linux distributions.
The Ubuntu archive
All new releases of LXD get uploaded to the Ubuntu development release within a few minutes of the upstream release. That package is then used to seed all the other sources of packages available to Ubuntu users.
If you are using the Ubuntu development release (16.04), you can simply do:
sudo apt install lxd
If you are running Ubuntu 14.04, we have backport packages available for you with:
sudo apt -t trusty-backports install lxd
The Ubuntu Core store
Users of Ubuntu Core on the stable release can install LXD with:
sudo snappy install lxd.stgraber
The official Ubuntu PPA
Users of other Ubuntu releases such as Ubuntu 15.10 can find LXD packages in the following PPA (Personal Package Archive):
sudo apt-add-repository ppa:ubuntu-lxc/stable
sudo apt update
sudo apt dist-upgrade
sudo apt install lxd
The Gentoo archive
Gentoo has pretty recent LXD packages available too; you can install those with:
sudo emerge --ask lxd
From source
Building LXD from source isn’t very difficult if you are used to building Go projects. Note however that you will need the LXC development headers. In order to run LXD, your distribution also needs a recent Linux kernel (3.13 at least), recent LXC (1.1.5 or higher), LXCFS and a version of shadow that supports user sub-uid/gid allocations.
The latest instructions on building LXD from source can be found in the upstream README.
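For reference, here is a rough sketch of what a build typically looks like, assuming a standard Go workspace and that the dependencies above are already installed (the upstream README remains the authoritative source):

export GOPATH=~/go
go get -d github.com/lxc/lxd    # fetch the source into the Go workspace
cd $GOPATH/src/github.com/lxc/lxd
make                            # build the lxd and lxc binaries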
Networking on Ubuntu
The Ubuntu packages provide you with a “lxdbr0” bridge as a convenience. This bridge comes unconfigured by default, offering only IPv6 link-local connectivity through an HTTP proxy.
To reconfigure the bridge and add some IPv4 or IPv6 subnet to it, you can run:
sudo dpkg-reconfigure -p medium lxd
Or go through the whole LXD step-by-step setup (see below) with:
sudo lxd init
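Either way, you can check the result by looking at the bridge itself; for example, to confirm that lxdbr0 picked up the addresses you configured:

ip addr show lxdbr0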
Storage backends
LXD supports a number of storage backends. It’s best to know what backend you want to use prior to starting to use LXD as we do not support moving existing containers or images between backends.
A feature comparison table of the different backends can be found here.
ZFS
Our recommendation is ZFS as it supports all the features LXD needs to offer the fastest and most reliable container experience. This includes per-container disk quotas, immediate snapshot/restore, optimized migration (send/receive) and instant container creation from an image. It is also considered more mature than btrfs.
To use ZFS with LXD, you first need ZFS on your system.
If using Ubuntu 16.04, simply install it with:
sudo apt install zfsutils-linux
On Ubuntu 15.10, you can install it with:
sudo apt install zfsutils-linux zfs-dkms
And on older releases, you can use the zfsonlinux PPA:
sudo apt-add-repository ppa:zfs-native/stable
sudo apt update
sudo apt install ubuntu-zfs
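If you don’t have a ZFS pool yet, you can create one before running the setup below. A minimal sketch, assuming a spare disk at /dev/sdb (hypothetical pool name and device, pick your own; the data on the disk will be destroyed):

sudo zpool create lxd /dev/sdb   # replace /dev/sdb with your own disk
sudo zpool list                  # confirm the pool exists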
To configure LXD to use it, simply run:
sudo lxd init
This will ask you a few questions about what kind of ZFS configuration you’d like for your LXD and then configure it for you.
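You can then double-check what was set up; for example, assuming the storage.zfs_pool_name server key (the one LXD 2.0 uses for ZFS, as seen in the comments below):

lxc config get storage.zfs_pool_name   # should print the pool or dataset you chose
sudo zfs list -t all                   # shows the datasets LXD created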
btrfs
If ZFS isn’t available, then btrfs offers the same level of integration with the exception that it doesn’t properly report disk usage inside the container (quotas do apply though).
btrfs also has the nice property that it can nest properly, which ZFS doesn’t yet. That is, if you plan on using LXD inside LXD, btrfs is worth considering.
LXD doesn’t need any configuration to use btrfs; you just need to make sure that /var/lib/lxd is stored on a btrfs filesystem and LXD will automatically make use of it for you.
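One quick way to confirm that /var/lib/lxd really is on btrfs is to ask stat for the filesystem type of the containing mount:

stat -f --format=%T /var/lib/lxd   # should print "btrfs"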
LVM
If ZFS and btrfs aren’t an option for you, you can still get some of their benefits by using LVM instead. LXD uses LVM with thin provisioning, creating an LV for each image and container and using LVM snapshots as needed.
To configure LXD to use LVM, create an LVM VG and run:
lxc config set storage.lvm_vg_name "THE-NAME-OF-YOUR-VG"
By default LXD uses ext4 as the filesystem for all the LVs. You can change that to XFS if you’d like:
lxc config set storage.lvm_fstype xfs
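Putting it together, a minimal sketch might look like this, assuming a spare disk at /dev/sdc and a VG called lxd-vg (both hypothetical, pick your own):

sudo apt install lvm2 thin-provisioning-tools   # LVM userspace tools needed by LXD
sudo pvcreate /dev/sdc                          # replace /dev/sdc with your own disk
sudo vgcreate lxd-vg /dev/sdc
lxc config set storage.lvm_vg_name "lxd-vg"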
Simple directory
If none of the above are an option for you, LXD will still work but without any of those advanced features. It will simply create a directory for each container, unpack the image tarballs for each container creation and do a full filesystem copy on container copy or snapshot.
All features are supported except for disk quotas, but this is very wasteful of disk space and also very slow. If you have no other choice, it will work, but you should really consider one of the alternatives above.
More daemon configuration
The complete list of configuration options for the LXD daemon can be found here.
Network configuration
By default LXD doesn’t listen to the network. The only way to talk to it is over a local unix socket at /var/lib/lxd/unix.socket.
To have it listen to the network, there are two useful keys to set:
lxc config set core.https_address [::]
lxc config set core.trust_password some-secret-string
The first instructs LXD to bind to the “::” IPv6 wildcard address, that is, all addresses on the machine. You can obviously replace this with a specific IPv4 or IPv6 address and append the TCP port you’d like it to bind to (defaults to 8443).
The second sets a password which remote clients use to add themselves to the LXD certificate trust store. When adding the LXD host, they will be prompted for the password; if it matches, the LXD daemon will store their client certificate and they’ll be trusted, never needing the password again (it can be changed or unset entirely at that point).
You can also choose not to set a password and instead manually trust each new client by having them give you their “client.crt” file (from ~/.config/lxc) and add it to the trust store yourself with:
lxc config trust add client.crt
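From the remote client’s point of view, adding the server then looks like this (hostname and address are examples); with a trust password set you get a one-time prompt, with the manual method above no password is needed at all:

lxc remote add myserver 192.0.2.10   # you'll be asked to confirm the certificate fingerprint
lxc list myserver:                   # list containers on the remote server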
Proxy configuration
In most setups, you’ll want the LXD daemon to fetch images from remote servers.
If you are in an environment where you must go through an HTTP(S) proxy to reach the outside world, you’ll want to set a few configuration keys or alternatively make sure that the standard PROXY environment variables are set in the daemon’s environment.
lxc config set core.proxy_http http://squid01.internal:3128
lxc config set core.proxy_https http://squid01.internal:3128
lxc config set core.proxy_ignore_hosts image-server.local
With those, all transfers initiated by LXD will use the squid01.internal HTTP proxy, except for traffic to the server at image-server.local.
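If you later move the machine out of that environment, the same keys can be unset again (or set to an empty value) to go back to direct connections:

lxc config unset core.proxy_http
lxc config unset core.proxy_https
lxc config unset core.proxy_ignore_hosts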
Image management
LXD does dynamic image caching. When instructed to create a container from a remote image, it will download that image into its image store, mark it as cached and record its origin. After a number of days without seeing any use (10 by default), the image is automatically removed. Every few hours (6 by default), LXD also goes looking for a newer version of the image and updates its local copy.
All of that can be configured through the following configuration options:
lxc config set images.remote_cache_expiry 5
lxc config set images.auto_update_interval 24
lxc config set images.auto_update_cached false
Here we are instructing LXD to override all of those defaults: cache images for up to 5 days since they were last used, look for image updates every 24 hours, and only update images which were explicitly marked as such (the --auto-update flag in lxc image copy), not the images which were automatically cached by LXD.
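The --auto-update flag mentioned above applies to images you copy into your local image store yourself; a hypothetical example (alias and image name are illustrative):

lxc image copy ubuntu:16.04 local: --alias xenial --auto-update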
Conclusion
At this point you should have a working version of the latest LXD release, you can now start playing with it on your own or wait for the next blog post where we’ll create our first container and play with the LXD command line tool.
Extra information
The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
And if you can’t wait until the next few posts to try LXD, you can take our guided tour online and try it for free right from your web browser!
Hi,
I’m running Arch Linux and I’ve played with LXC for some time. After installing LXD and trying some stuff, I found this article and tried “lxd init”, which produced “error: You have existing containers or images. lxd init requires an empty LXD.”
“lxc list” gives no result, while “lxc image list” shows one entry, but without an alias (it has fingerprint, description, etc.)
There is nothing in /var/lib/lxd/{containers,images}, but “lxd init” still complains. How can I get my machine to a state which would allow “lxd init”?
Thanks!
Well, “lxc image list” showing one entry means that LXD thinks you have one image, so indeed “lxd init” will fail. If you really do have nothing under /var/lib/lxd, you can just wipe the directory clean and restart LXD; it should then be happy 🙂
That did it, but I must say it doesn’t look like a proper way to fix it 🙂
..had the same problem, fixed it this way:
lxc image delete FINGERPRINT
.. so you don’t have to wipe /var/lib/lxd
So I set up a ZFS backend and launched a container. It appears all the files are being stored in /var/lib/lxd/containers/ instead of the specified ZFS pool. It even created the containers and images folders in the ZFS pool but does not appear to be using them?
ZFS:
/mnt/magi/containers$ du -sh
1.5K .
sudo du -sh /var/lib/lxd/containers/
1.1G /var/lib/lxd/containers/
lxc info shows:
config:
storage.zfs_pool_name: magi/containers
Stéphane, thanks for your excellent work on LXD and writing these helpful how-to documents for us.
To follow up on poxin’s question, can you give us some guidance on how to use a ZFS filesystem that was created before running “lxd init”?
Alternately, is there a correct way to inject it into the default profile? I’ve got the ZFS zpool established and mounted (which happens automagically). I need to not use my boot drive and would prefer not to change the /var mountpoint.
If you look at “zfs list -t all” you’ll notice that LXD is in fact using your ZFS pool.
It’s just not mounting the filesystems under the ZFS default mountpoint but instead mounting them under /var/lib/lxd where LXD is looking for them.
The one thing that’s not stored on ZFS is the compressed version of the images (stored in /var/lib/lxd/images).
That’s because LXD supports having some containers on ZFS and some not (those that existed at the time you configured LXD to use ZFS). We may offer extra configuration for this in the future as we take another look into our storage story.
Until then, if those compressed images in /var/lib/lxd/images are a problem, you can move that one path to a zfs filesystem (zfs set mountpoint=/var/lib/lxd/images/images if it’s empty).
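A quick way to see this for yourself is to list the datasets together with their mountpoints:

sudo zfs list -t all -o name,used,mountpoint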
It is working now!!
Another good way of noticing that ZFS has our backs is to go into your container and use a command like mount or df:
Here is what I observed on my own setup!
root@unresinous-dannielle:~# df -h .
Filesystem Size Used Avail Use% Mounted on
lxd-data/containers/unresinous-dannielle 193G 369M 193G 1% /
A few more steps (info) that you should probably add (I stumbled on these issues):
To set the storage to use LVM, you need to install the tools first, otherwise it throws errors without notifying that something is missing:
`sudo apt-get install lvm2 thin-provisioning-tools`
After `sudo lxd init` you need to log out of the session and log back in, or use `newgrp lxd`, to be able to execute commands in the current terminal.
If you don’t do this, you get:
`error: Error calling ‘lxd forkstart’: exit status 1` etc…
To be able to start containers without “privileged” set to “true”, you need to add the “root” into subuid/subgid:
`echo "root:100000:65536" | sudo tee /etc/subuid /etc/subgid`
Otherwise you get this error message:
error: LXD doesn’t have a uid/gid allocation. In this mode, only privileged containers are supported.
After setting uidmap, you most probably also want to restart the daemon:
`systemctl restart lxd.service`
I’m trying the following without success:
$ sudo zpool create vmpool mirror /dev/sda2 /dev/sdb2
$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? no
Name of the existing ZFS pool or dataset: vmpool
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? no
LXD has been successfully configured.
$ lxc launch ubuntu:16.04 ubuntu01
Creating ubuntu01
Retrieving image: 100%
error: Failed to create ZFS filesystem: cannot create '/vmpool/images/d23ee1f4fd284aeaba6adeb67cccf7b871e96178d637fec96320aab7cc9634b1': leading slash in name
I’m stuck here…
Never mind I fixed it by reinstalling lxd:
sudo apt-get remove --purge lxd
sudo rm -r /var/lib/lxd /var/log/lxd
sudo apt install lxd
The problem probably originated because, when I installed lxd, I wasn’t using an appropriate kernel (long story with OVH servers).
Thanks a lot for the article. I would like to know the correct way of proceeding when I’m already running LXC containers on my Debian. Should I apt purge LXC before installing LXD? If not, will LXD be aware of my old LXC containers? Thanks
Hi,
Currently I am investigating LXD with the goal of running it on PPC64 architecture for an embedded solution.
1.) Is LXD supported on the PPC64 (big-endian) architecture?
2.) The following link hints that the Go compiler does not support the PPC64 architecture; has there been an update since?
LINK: https://github.com/golang/go/issues/13192
3.) I have been looking into GCCGO; are there steps on how to compile LXD with GCCGO?
Help with these topics will be greatly appreciated.
Thanks!
Hello !
I installed LXC on 16.04. I used a configuration with br0 as a bridge for accessing the outside.
I have IPv4 + DHCP on containers -> OK
but only link-local IPv6. The IPv6 from DHCP does not work.
Should I activate some options for Ipv6 ?
While trying the following (in the network configuration section):
lxc config set core.https_address [::]
I get:
zsh: no matches found: [::]
Any idea where I should look to correct this?
I think I did all the previous steps correctly.
How can the zfs storage backend be increased in size?
After selecting zfs during sudo lxd init configuration, a size for the zfs vdev/pool was specified. How can this size be altered?
The reason I ask is that I am almost out of space!
Having trouble getting an image from the server. Not sure if I’m doing this right, but I followed the setup and got to this point:
sudo lxc launch ubuntu:
Creating correct-owl
error: Get https://cloud-images.ubuntu.com/releases/streams/v1/index.json: http: error connecting to proxy http://squid01.internal:3128: Unable to connect to: squid01.internal:3128
Which is interesting, because I was able to download an image the other day. Any suggestions?
Oops! Never mind, guys - I discovered I was using the wrong syntax to launch a container. The following works:
lxc launch ubuntu:18215f8f4536 cont1
image in list:
lxc image list ubuntu: | grep 18215f8f4536
| 18215f8f4536 | yes | ubuntu 16.10 amd64 (release) (20170307) | x86_64 | 155.95MB | Mar 7, 2017 at 12:00am (UTC) |
Hello,
Firstly, thanks for your post; it’s really useful for a newcomer like me.
I want to report a dead link between “Storage backends” and “ZFS”.
Oops, we renamed that page upstream. Link fixed now. Thanks
I was following this guide + https://help.ubuntu.com/lts/serverguide/lxd.html which seems outdated
lxc config set storage.zfs_pool_name lxd
gives a deprecated message.
Basically I am trying to do some disaster recovery where the OS host or the LXD installation gets corrupted and has to be reinstalled (the host and/or LXD).
I have a ZFS pool and after reinstallation I can’t set the pool name to an existing pool. It seems that init requires a new, empty pool.
lxc config set storage.zfs_pool_name lxd
error: Setting the key "storage.zfs_pool_name" is deprecated in favor of storage pool configuration.
lxc storage create default zfs source=lxd
error: Provided ZFS pool (or dataset) isn’t empty
I somehow managed to overcome this by selecting an existing empty dataset during lxd init (e.g. when executing lxd init… entering the name of the pool dataset: default/lxdpool), so after a new installation I can rename lxdpool to something else and execute lxd init with default/lxdpool again (after removing /var/lib/lxd/, which is not deleted when I uninstall lxd).
Am I missing something? How could I easily reuse an existing storage pool? I used the ZFS pool for something else and I didn’t want to delete it.
Thank you
PS: the server guide on the Ubuntu site seems outdated for 17.04 / 2.12 (of course, it is LTS, no problem).
1) A possible typo:
a) in this post:
'lxc config set core.proxy_https http://squid01.internal:3128'
It probably should be:
'lxc config set core.proxy_https https://squid01.internal:3128'
b) in lxd V2.0.9 on my Ubuntu 16.04.2:
When I tried to autocomplete:
'lxc config set images.rem',
it gave me:
'lxc config set images.remot_cache_expiry'
I thought that it’s an alias for 'lxc config set images.remote_cache_expiry', so I tried to set it, but it gave me an error that this parameter doesn’t exist. When I manually typed:
'lxc config set images.remote_cache_expiry' (with a number, of course),
it accepted the value. Autocompletion doesn’t show the full 'remote' version, only 'remot'.
2) If an image that has containers based on it gets updated or auto-updated, will the containers based on this image become corrupted? Is that a reason to do 'lxc config set images.auto_update_cached false'?
I have tried to set up lxd on a btrfs subvolume linked to /var/lib/lxd and I never get a btrfs option when running 'lxd init'.
I am not sure if I am setting it up incorrectly. I was wondering if someone could point me in the right direction?
Thanks
How do I rename a zfs pool in the lxd configuration?
I used zpool to rename the actual pool from default to lxdpool, but then found the lxc/lxd configuration doesn’t work with the new name.
I tried the following command
# lxc config set storage.zfs_pool_name lxdpool
But received the following error:
error: Setting the key "storage.zfs_pool_name" is deprecated in favor of storage pool configuration.
Where and how do I make this storage pool configuration?
Any guidance would be appreciated.