LXD 2.0: Introduction to LXD [1/12]

This is the first blog post in this series about LXD 2.0.


A few common questions about LXD

What’s LXD?

At its simplest, LXD is a daemon which provides a REST API to drive LXC containers.

Its main goal is to provide a user experience that’s similar to that of virtual machines but using Linux containers rather than hardware virtualization.
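For example, once the daemon is running, driving containers through the bundled command line client looks like this (the container name and image alias here are just examples):

```shell
# Create and start a container from the stable Ubuntu image server
$ lxc launch ubuntu:16.04 c1

# Run a command inside it
$ lxc exec c1 -- hostname

# List containers and their state
$ lxc list
```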


How does LXD relate to Docker/Rkt?

This is by far the question we get the most, so let’s address it immediately!

LXD focuses on system containers, also called infrastructure containers. That is, an LXD container runs a full Linux system, exactly as it would when run on bare metal or in a VM.

Those containers will typically be long running and based on a clean distribution image. Traditional configuration management tools and deployment tools can be used with LXD containers exactly as you would use them for a VM, cloud instance or physical machine.

In contrast, Docker focuses on ephemeral, stateless, minimal containers that won’t typically get upgraded or re-configured but instead just be replaced entirely. That makes Docker and similar projects much closer to a software distribution mechanism than a machine management tool.

The two models aren’t mutually exclusive either. You can absolutely use LXD to provide full Linux systems to your users who can then install Docker inside their LXD container to run the software they want.
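A minimal sketch of that combination, assuming an Ubuntu image and the “docker” profile described later in this post:

```shell
# Launch a system container with the docker profile applied
$ lxc launch ubuntu:16.04 docker-host -p default -p docker

# Install and use Docker inside the LXD container
$ lxc exec docker-host -- apt-get install -y docker.io
$ lxc exec docker-host -- docker run -d nginx
```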

Why LXD?

We’ve been working on LXC for a number of years now. LXC is great at what it does: it provides a very good set of low-level tools and a library to create and manage containers.

However, those low-level tools aren’t necessarily very user friendly. They require a lot of initial knowledge to understand what they do and how they work. Keeping backward compatibility with older containers and deployment methods has also prevented LXC from enabling some security features by default, leading to more manual configuration for users.

We see LXD as an opportunity to address those shortcomings. Being a long-running daemon lets us address a lot of the LXC limitations, such as dynamic resource restrictions, container migration and efficient live migration. It also gave us the opportunity to come up with a new default experience that’s safe by default and much more user focused.

The main LXD components

There are a number of main components that make up LXD; they are typically visible in the LXD directory structure, in its command line client and in the API structure itself.


Containers

Containers in LXD are made of:

  • A filesystem (rootfs)
  • A list of configuration options, including resource limits, environment, security options and more
  • A bunch of devices like disks, character/block unix devices and network interfaces
  • A set of profiles the container inherits configuration from (see below)
  • Some properties (container architecture, ephemeral or persistent and the name)
  • Some runtime state (when using CRIU for checkpoint/restore)
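Most of those pieces can be inspected and changed through the lxc config subcommands; a quick sketch (names, keys and paths are illustrative):

```shell
# Show the container's configuration, devices and profiles
$ lxc config show c1

# Set a resource limit (a configuration option)
$ lxc config set c1 limits.cpu 2

# Attach a host directory to the container as a disk device
$ lxc config device add c1 shared disk source=/srv/data path=/mnt/data
```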


Snapshots

Container snapshots are identical to containers, except that they are immutable: they can be renamed, destroyed or restored, but cannot be modified in any way.

It is worth noting that because we allow storing the container runtime state, this effectively gives us the concept of “stateful” snapshots: the ability to roll back the container, including its CPU and memory state, to the time of the snapshot.
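Assuming CRIU is available on the host, a stateful snapshot and rollback can be sketched as:

```shell
# Snapshot the container, including its runtime (CPU/memory) state
$ lxc snapshot c1 snap0 --stateful

# Later, roll the container back, runtime state included
$ lxc restore c1 snap0
```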


Images

LXD is image based: all LXD containers come from an image. Images are typically clean Linux distribution images, similar to what you would use for a virtual machine or cloud instance.

It is possible to “publish” a container, making an image from it which can then be used by the local or remote LXD hosts.

Images are uniquely identified by their sha256 hash and can be referenced using their full or partial hash. Because typing long hashes isn’t particularly user friendly, images can also have any number of properties applied to them, allowing for an easy search through the image store. Aliases can also be set as a one-to-one mapping between a unique, user friendly string and an image hash.
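Publishing a container and aliasing the resulting image might look like this (names are examples):

```shell
# Turn a stopped container into a reusable image with an alias
$ lxc stop c1
$ lxc publish c1 --alias my-webserver

# New containers can then be created from the alias (or a partial hash)
$ lxc launch my-webserver c2
```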

LXD comes pre-configured with three remote image servers (see remotes below):

  • “ubuntu:” provides stable Ubuntu images
  • “ubuntu-daily:” provides daily builds of Ubuntu
  • “images:” is a community run image server providing images for a number of other Linux distributions using the upstream LXC templates

Remote images are automatically cached by the LXD daemon and kept for a number of days (10 by default) after their last use, before they expire.

Additionally, LXD automatically updates remote images (unless told otherwise) so that the freshest version of the image is always available locally.
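Both behaviours are driven by server configuration keys; for instance (values illustrative, key names as I recall them from LXD 2.0):

```shell
# Expire unused cached images after 5 days instead of the default 10
$ lxc config set images.remote_cache_expiry 5

# Check remote images for updates every 12 hours
$ lxc config set images.auto_update_interval 12
```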


Profiles

Profiles are a way to define container configuration and container devices in one place and then have them apply to any number of containers.

A container can have multiple profiles applied to it. When building the final container configuration (known as expanded configuration), the profiles will be applied in the order they were defined in, overriding each other when the same configuration key or device is found. Then the local container configuration is applied on top of that, overriding anything that came from a profile.

LXD ships with two pre-configured profiles:

  • “default” is automatically applied to all containers unless an alternative list of profiles is provided by the user. This profile currently does just one thing: define an “eth0” network device for the container.
  • “docker” is a profile you can apply to a container in which you want to run Docker containers. It requests that LXD load some required kernel modules, turns on container nesting and sets up a few device entries.
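As a sketch, defining a custom profile and stacking it on top of “default” might look like this (the profile name and key are illustrative):

```shell
# Create a profile and set a configuration key on it
$ lxc profile create small
$ lxc profile set small limits.memory 256MB

# Apply both profiles; "small" comes last, so its keys win on conflict
$ lxc launch ubuntu:16.04 c1 -p default -p small
```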


Remotes

As I mentioned earlier, LXD is a networked daemon. The command line client that comes with it can therefore talk to multiple remote LXD servers as well as image servers.

By default, our command line client comes with the following remotes defined:

  • local: (default remote, talks to the local LXD daemon over a unix socket)
  • ubuntu: (Ubuntu image server providing stable builds)
  • ubuntu-daily: (Ubuntu image server providing daily builds)
  • images: (images.linuxcontainers.org image server)

Any combination of those remotes can be used with the command line client.

You can also add any number of remote LXD hosts that were configured to listen on the network, either anonymously if they are public image servers, or after going through authentication when managing remote containers.

It’s that remote mechanism that makes it possible to interact with remote image servers as well as copy or move containers between hosts.
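Setting up such a remote might look like this sketch (the address and trust password are examples):

```shell
# On the remote host: listen on the network and set a trust password
$ lxc config set core.https_address "[::]:8443"
$ lxc config set core.trust_password some-password

# On the client: add the remote, then drive it like the local daemon
$ lxc remote add myserver 192.168.1.100
$ lxc launch ubuntu:16.04 myserver:c1
$ lxc copy myserver:c1 local:c1-copy
```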


Security

One aspect that was core to our design of LXD was to make it as safe as possible while allowing modern Linux distributions to run inside it unmodified.

The main security features used by LXD through its use of the LXC library are:

  • Kernel namespaces. Especially the user namespace as a way to keep everything the container does separate from the rest of the system. LXD uses the user namespace by default (contrary to LXC) and allows for the user to turn it off on a per-container basis (marking the container “privileged”) when absolutely needed.
  • Seccomp. To filter some potentially dangerous system calls.
  • AppArmor. To provide additional restrictions on mounts, sockets, ptrace and file access; specifically, restricting cross-container communication.
  • Capabilities. To prevent the container from loading kernel modules, altering the host system time, …
  • CGroups. To restrict resource usage and prevent DoS attacks against the host.

Rather than exposing those features directly to the user as LXC would, we’ve built a new configuration language which abstracts most of them into something that’s more user friendly. For example, one can tell LXD to pass any host device into the container without having to also look up its major/minor numbers and manually update the cgroup policy.
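For example, passing a host device into a container is a single command, with LXD updating the cgroup device policy behind the scenes (the device name and path are illustrative):

```shell
# Expose /dev/kvm to the container as a unix character device
$ lxc config device add c1 kvm unix-char path=/dev/kvm
```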

Communications with LXD itself are secured using TLS 1.2 with a very limited set of allowed ciphers. When dealing with hosts outside of the system certificate authority, LXD will prompt the user to validate the remote fingerprint (SSH style), then cache the certificate for future use.


The REST API

Everything that LXD does is done over its REST API. There is no other communication channel between the client and the daemon.

The REST API can be accessed over a local unix socket, requiring only group membership for authentication, or over an HTTPS socket using a client certificate for authentication.

The structure of the REST API matches the different components described above and is meant to be very simple and intuitive to use.

When a more complex communication mechanism is required, LXD will negotiate websockets and use those for the rest of the communication. This is used for interactive console sessions, container migration and event notification.
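As a sketch, the API can be poked at directly over the local unix socket (socket path as used by the Ubuntu packages; curl 7.40 or newer is needed for --unix-socket):

```shell
# Query the API root
$ curl --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0

# List containers
$ curl --unix-socket /var/lib/lxd/unix.socket http://lxd/1.0/containers
```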

With LXD 2.0 comes the /1.0 stable API. We will not break backward compatibility within the /1.0 API endpoint; however, we may add extra features to it, which we’ll signal by declaring additional API extensions that the client can look for.

Containers at scale

While LXD provides a good command line client, that client isn’t meant to manage thousands of containers across multiple hosts. For that kind of use case, we have nova-lxd, an OpenStack plugin that makes OpenStack treat LXD containers in the exact same way it would treat VMs.

This allows for very large deployments of LXD on a large number of hosts, using the OpenStack APIs to manage network, storage and load balancing.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you can’t wait until the next few posts to try LXD, you can take our guided tour online and try it for free right from your web browser!

About Stéphane Graber

Project leader of Linux Containers, Linux hacker, Ubuntu core developer, conference organizer and speaker.

63 Responses to LXD 2.0: Introduction to LXD [1/12]

  1. Marcelo Rezende Módolo says:


    I try to list images but get this error:

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 15.10
    Release: 15.10
    Codename: wily

    $ lxc version

    $ lxc image list images:
    error: Get https://images.linuxcontainers.org:8443/1.0/images?recursion=1: x509: certificate signed by unknown authority

    Very good introduction!

    Marcelo Módolo

    1. You may have an old certificate from a previously broken LXD client.

      rm ~/.lxc/config/servercerts/client.crt should sort it out for you and have LXD use your system CA again.

      1. Marcelo Rezende Módolo says:


        It’s working now!


  2. Jonathan Ballet says:

    Hi Stéphane,

    In the last “Extra information” section, you link towards https://github.com/lxd which actually contains nothing 🙂

  3. UNIX admin says:

    Sounds like SmartOS with Solaris zones is really putting the pressure on you.

    How do you compare with SmartOS and the Solaris zones technology, which it provides?

    1. It doesn’t really put pressure on us, as most folks want to run Linux rather than Solaris, and the Linux zones on SmartOS, while a nice technical hack, don’t provide good enough syscall coverage to run complex modern software.

      1. UNIX admin says:

        (My question to compare and contrast LXC and SmartOS zones remains unanswered; You have answered a question I have not asked you.)

        Theo Schlossnagle: “it works so well that the entire build environment for a previous company of ours builds all of its production RPM’s, the whole build cluster, all done on an lx-brand and
        they didn’t know”:

        Bryan Cantrill: “do you know that, um, of the four nuclear reactors in Poland, two of them are actually run on lx-brands?”

        The lx-branded zones work because the Linux kernel now has a committed ABI, thanks to Torvalds:

        “and the Linux zones on SmartOS while a nice technical hack, don’t provide good enough syscall coverage to run complex modern software.”
        That is a very interesting assertion, considering that, among other operating systems, Joyent provides an Ubuntu image for SmartOS:


        …so what you have implied is that Ubuntu is not complex, or not modern, or both?

        “And then most importantly we got 64-bit Ubuntu 14.04 booted in October, and coming down in 2015, it’s actually hard to find things that don’t work.”

        @36:48: http://www.joyent.com/developers/videos/docker-and-the-future-of-containers-in-production

        do you have any comment on that?

        1. There are also a bunch of syscalls which are almost impossible for them to ever really support, such as those that do namespace management. So right now, there is for example no way to run Docker, rkt or LXC INSIDE a lx zone whereas on Linux you can nest as many of those as you want.

          Since the lx zone in SmartOS is emulating the Linux kernel interface and that interface is expanded with every kernel release, they by definition always have to play catch up.
          Some syscalls are trivial to map to a Solaris equivalent, some not so much.

          Thankfully, the Linux C libraries usually do the job of falling back to something older/slower for you, so you don’t get outright breakage when upgrading to more recent software, but the fact remains that by its very design, SmartOS is bound to lag several months behind upstream Linux.

          Anyway, this is mostly a SmartOS lx zone vs Linux containers thing so not very relevant to LXD.

          LXD is a daemon and REST API to manage containers. It could absolutely be used to drive Solaris zones, FreeBSD jails or even Windows containers. Obviously our main focus is Linux containers, but when doing API design we keep the other container implementations in mind so we may one day support them.

          1. UNIX admin says:

            SmartOS actually has docker running across the entire datacenter. It’s called Triton.

            You appear to have missed the part where Linus mandated a stable ABI, so the syscalls won’t be changing any more. Also, they have all been mapped.

            But my question still stands: with the technologies in SmartOS at one’s disposal, which *technical* advantages would LXC substrate offer over SmartOS?

            What do LXC and LXD offer that SmartOS doesn’t, and do they do it any better than SmartOS and zones do?

          2. They don’t actually run Docker at all; they have a service which pretends to be the Docker API but runs zones behind the scenes.

            If you get a clean Ubuntu system on SmartOS and try to install docker, rkt or LXC, none of them will work.

            As for the stable ABI, you misunderstand what it means. The ABI guarantee in the Linux kernel is that no symbol will change after it is introduced, so a given syscall will not get extra arguments in later releases. It however doesn’t prevent the addition or removal of syscalls, and the average Linux kernel release usually comes with 2-3 new syscalls.

            The technical advantage over SmartOS is that this isn’t Solaris: you get all the existing Linux kernel drivers and everything which Linux provides that Solaris doesn’t.

            I expect SmartOS to be able to handle most normal cloud workloads and so for those kind of users who don’t really care about what’s the kernel behind it and won’t mind it when their software provider tells them to get a real Linux kernel when they hit a bug, it should be fine.

            If you however care about passthrough of some fancy devices that the Solaris kernel doesn’t know about, say, if you need GPU accelerated workloads or fancy DPDK network management or well, any of the thousands of drivers that Linux has, then using real Linux containers will get you all of that, plus you’ll be running a real Linux kernel which upstream software developers are used to and support.

  4. On the Github page you listed at the bottom of the article, this is what I see:
    “This organization has no public repositories.”

    Is this intentional, or a mistake?

    1. Not intentional, just a typo, fixed.


  5. Paolo Bonzini says:

    How does LXD differ from what libvirt was doing with Linux containers three or so years ago?

    1. It always felt like container support in libvirt was an afterthought with their main focus being virtual machines.

      That’s perfectly fine but it also means that they were mostly bound by what virtual machines can do and so didn’t have a good way to expose some things that are unique to containers.

      With LXD we focus solely on containers, which means that we can make a lot of assumptions that libvirt couldn’t. We can assume that we can extend and shrink any resource live, that we can inspect every process running inside the container, access the filesystem at will, inject any task we want inside the container, …

      We also wanted a good REST API that’s intuitive for folks to script and use. libvirt’s API has always felt a bit weird to me as it’s basically all about grabbing and sending big chunks of XML over SSH rather than something more friendly like REST.

      1. Paolo Bonzini says:

        Actually, libvirt has a lot of container-specific features, usable with both LXC and OpenVZ: network namespaces, specifying mounts (the filesystem element) for the container, block and character device passthrough, virtual networks and routes are all supported _only_ by LXC.

        Libvirt is a low-level interface; the idea is to have a layer above libvirt to provide things such as a REST API; for example, both oVirt and OpenStack provide one and they both use libvirt. You can take a look at https://lwn.net/Articles/657284/ which explains how the XML helped keep libvirt’s ABI and API backwards-compatible!

        On the other hand, libvirt indeed doesn’t provide the inspection features that you mention. Thanks for the answer!

  6. skunk says:

    is lxd somehow bound to systemd or does it work as meant with a traditional init system (eg. openrc) too?
    thank you

    1. LXD doesn’t care about what’s running in the container at all, so any init system should work fine.
      We frequently use it with upstart and sysvinit and it looks like openrc works fine too.

      stgraber@castiana:~$ lxc launch images:gentoo/current/amd64 gentoo
      Creating gentoo
      Retrieving image: 100%
      Starting gentoo
      stgraber@castiana:~$ lxc exec gentoo bash
      gentoo ~ # ps fauxww
      root 351 0.0 0.0 18180 2232 ? Ss 18:59 0:00 bash
      root 358 0.0 0.0 17448 1348 ? R+ 18:59 0:00 \_ ps fauxww
      root 1 0.0 0.0 4184 820 ? Ss+ 18:56 0:00 init [3]
      root 340 0.0 0.0 12604 1172 ? Ss 18:56 0:00 /sbin/agetty --noclear 115200 console linux
      gentoo ~ #

      1. skunk says:

        i meant on the host system not on the guests…
        i ask because i run lxc containers on gentoo hosts and want to keep them systemd free…
        if lxd will never hard depend on systemd for managing containers i’ll look into it to replace lxc
        thank you

        1. No, we won’t depend on systemd for LXD itself.

          We do integrate with it to some extent, like supporting socket activation and shipping systemd units in the Ubuntu package, but all of that is absolutely optional and LXD works great on non-systemd systems such as Ubuntu 14.04 (running upstart), where a lot of our users run it.

  7. nick says:

    Hi Stéphane,

    This series of Articles about LXD are a great work and I’m going back over it. However, I’m still struggling with what seems to be a complete retirement of App Containers and the abandonment of LXC as it was known before LXD. With LXD it seems like Canonical is leaving the App Container to Docker. I know of no other way to interpret the LXD message.

    With the release of LXD it seems that the original meaning of LXC has been changed (from LinuX Container, traditionally an App Container) to better align with LXD, so that it now reflects and describes the two aspects of the maturing platform: LXC = The Client, and LXD = The Daemon (as seen in the MAN page name for each). That would be all fine except that LXD is marketed as both a peer to LXC (new container type – the full OS Container) and as a parent (new implementation and wrapper of LXC).

    So, with LXC (App Containers) remaining un-addressed in the branding of LXD as OS Container, and the technical model of LXD as Daemon vs LXC Client, the message of LXD appears confused.

    If we look at the new technical model of Client (LXC) and Daemon (LXD), it would appear logical that with LXC (and using lxc client commands against the LXD Daemon) we should be able to:
    1. create a thinner “App Container”, either Privileged or Unprivileged and, with or without an App Image; or
    2. create a fatter “OS Container”, either Privileged or Unprivileged, and, with or without an OS Image.

    Creating either Container type without an Image gives the Administrator the option of installing into it as one would install an app into a traditional LXC Container, or an OS into a traditional Virtual Machine.

    Unfortunately, none of the high level documentation about LXD presents the LXC/LXD model and certainly makes no mention about how to approach the creation of App and OS Containers as PEERS in the new model. All I can glean from the official LXD documentation is that it’s not possible to use LXD for any other type – you must use an OS image to create an OS Container.

    So, it would be fantastic to see this addressed directly, so that users can better understand LXD and move forward to create either App or OS Containers, starting with or without an Image.

    I think that there is still a great need for a non-Docker App Container technology. I’m sure that there are many users who don’t need or want to install an OS into every LXD container.

    I hope this makes sense.


    1. Yes, Canonical has no current interest in application containers themselves and we’re quite happy to support Docker for that. That’s why we’ve made it possible for users to run Docker inside LXD.

      LXC was primarily focused on full system containers right from the start; it was meant as an alternative to OpenVZ using only mainline kernel features. The fact that people could use it for application containers was more of a side effect than a design decision.

      We are not retiring the old lxc-* tools and folks are still more than welcome to use the LXC library to build application container technology. But the fact is, application containers just aren’t interesting to us as a Linux distribution, and we believe there already are plenty of good implementations out there, so there’s no need for us to spend time competing with them.

      1. As for alternatives to Docker, I thought Rkt was doing a pretty good job at that. Not that I’ve really used either much outside of running their hello-world equivalent to make sure they work 🙂

  8. nick says:

    Thanks, Stéphane – that clears things up pretty well.

    It seems that Snappy and LXD might be the way to go. I took a quick look at Rkt and CoreOS but figured that the super-lightweight Ubuntu Core OS running in LXD would give the best of both worlds: OS Containers that are almost as light as App Containers, yet as secure as Virtual Machines.

    Is Snappy Core being officially released in 16.04?


  9. Anandkumar says:

    Hi Stéphane,

    Thanks for all your write up about lxd.
    I am looking at LXD containers recently.
    My idea is to create a hadoop cluster using LXD containers.

    A month ago, I launched a centos container on ubuntu host.
    That time i was able to see that the container had an ip address associated with it like this:

    ubuntu@xx.xx.xx.xx:~$ lxc list
    | my-centos | RUNNING | (eth0) | | PERSISTENT | 0 |

    But now last couple of days whenever i create a container I do not see any ip address associated with it like this:

    ubuntu@xx.xx.xx.xx:~$ lxc list
    | container1 | RUNNING | | | PERSISTENT | 0 |

    Entering into a container:

    ubuntu@xx.xx.xx.xx:~$ lxc exec container1 /bin/bash
    [root@container1 ~]#

    I do not see any ip address in ifconfig as well.

    [root@container1 ~]# ifconfig -a
    eth0 Link encap:Ethernet HWaddr 00:16:3E:90:21:89
    inet6 addr: fe80::216:3eff:fe90:2189/64 Scope:Link
    RX packets:8 errors:0 dropped:0 overruns:0 frame:0
    TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:648 (648.0 b) TX bytes:2358 (2.3 KiB)

    lo Link encap:Local Loopback
    inet addr: Mask:
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

    [root@container1 ~]#

    My questions:

    1. Why container has no ip address and how to set an ip address to it or solution to this.
    2. How to ssh into the container from the host and from outside.
    3. Solution to this:
    I am unable to install anything into the container. why?

    [root@container1 ~]# yum install sudo
    Loaded plugins: fastestmirror
    Setting up Install Process
    Determining fastest mirrors
    Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os&infra=stock error was
    14: PYCURL ERROR 5 – “Couldn’t resolve proxy ‘fe80::1%eth0]'”
    Error: Cannot find a valid baseurl for repo: base
    [root@container1 ~]#

    Please help me to resolve my problem.

    1. A package update pushed a bit over a week ago moved everyone from the old lxcbr0 bridge to a new lxdbr0 bridge.

      The same update would have shown a big warning saying that the lxdbr0 bridge comes with no configured subnet by default and that you should run “sudo dpkg-reconfigure -p medium lxd” to answer a few networking questions and get network working again.

      So I’d recommend you run “sudo dpkg-reconfigure -p medium lxd” and then restart your containers so they can get an IP from the new bridge.

      This change was required to be able to decouple LXD from the old LXC tools, moving it onto its own bridge. The warning message was highlighting the reasons for that, the effect it would have on existing containers and how to configure it so things would work as usual.

      1. Anandkumar says:

        Hi Stephene,

        Great. Really Thanks for the help.
        Thanks for your time.
        It worked well. Now i get the ip-address associated with my container.

        But unable to ssh into it. please have a look at my issue and provide me a solution.

        My LXD host is ubuntu14.04
        My container is Centos6.5
        My workstation is Centos6.5

        ubuntu@ip-187-22-33-55:~$ lxc list
        | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
        | container1 | RUNNING | (eth0) | | PERSISTENT | 0 |

        ubuntu@ip-187-22-33-55:~$ lxc exec container1 /bin/bash

        [root@container1 ~]# ifconfig -a
        eth0 Link encap:Ethernet HWaddr 00:16:3E:64:75:EF
        inet addr: Bcast:

        I created a user named “me” inside a container.
        [root@container1 ~]# adduser me
        [root@container1 ~]# passwd me

        Trying to ssh:
        ubuntu@ip-187-22-33-55:~$ ssh -v me@
        OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to [] port 22.
        debug1: connect to address port 22: Connection refused
        ssh: connect to host port 22: Connection refused

        Then also I tried putting ~/.ssh/id_rsa.pub into container’s ~/.ssh/authorized_keys. But it didn’t work.

        Can you please explain me the steps to ssh into my container from my lxd host ?
        Also my lxd host is running on amazon ec2, so how can i ssh to the container from my local workstation(Centos)?


        1. The error suggests that sshd isn’t running in the container. It may need to be installed or enabled.

          As for remote access, you could setup a NAT rule with something like:
          iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to <container-IP>:22

          Which would then forward traffic arriving on port 2222 of your instance to the container. You’ll also need to update any firewall in the way (or security groups) to allow port 2222 into your instance.

          1. Anandkumar says:


            It worked.. I am able to SSH into container from lxd host.

            Thanks a lot Stephane…You are awesome man..

            By the way now,

            I have two lxd hosts lh1 and lh2.
            Container1 is running inside lh1 and container2 running inside lh2.

            My local host name is ahost.

            So i use the prerouting rule which you mentioned to login from ahost to container1 and also to container2.

            I am going to make a hadoop cluster on this two containers.

            How can I make a connection between this two containers to configure hadoop cluster?

            How to SSH from container1 to container2 when they are running in two different lxd hosts?

            Can you please explain this ?


  10. Anand says:


    It worked.. I am able to SSH into container from lxd host.

    Thanks a lot Stephane…You are awesome man..

    By the way now,

    I have two lxd hosts lh1 and lh2.
    Container1 is running inside lh1 and container2 running inside lh2.

    My local host name is ahost.

    So i use the prerouting rule which you mentioned to login from ahost to container1 and also to container2.

    I am going to make a hadoop cluster on this two containers.

    How can I make a connection between this two containers to configure hadoop cluster?

    How to SSH from container1 to container2 when they are running in two different lxd hosts?

    Can you please explain this ?


  11. Anandkumar says:

    Can i install lxd on centos6.5 and how?


  12. Karl says:

    I have some ARMHF containers created and working well with LXC. I cannot find how to create a foreign-architecture container using LXD. I tried the various online approaches to convert an LXC container to an LXD package, but LXD refuses to run them.

    Is this supported? And if not, any ideas when it will be?

    Thanks for any info!

  13. Dakshi Kumar says:

    There are many articles saying LXD is more secure, but none of them explains how.

    Can you please elaborate on it.

  14. Stephane how can I put VLAN tagging info into my LXD config file? Here is the config file I use to connect my LXC container to my OpenvSwitch:


    oracle@g70:~$ lxc config show --expanded lxdora7a
    name: lxdora7a
    profiles:
    - default
    config:
      volatile.base_image: ad1d975af5bee4ef947ecca36084dbe2934277ed62e6a02c1fa60f1c902d2280
      volatile.eth0.hwaddr: 00:16:3e:da:03:3e
      volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":231072,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":231072,"Nsid":0,"Maprange":65536}]'
    devices:
      eth0:
        host_name: lxdora7a
        name: eth0
        nictype: bridged
        parent: sw1
        type: nic
      root:
        path: /
        type: disk
    ephemeral: false


    This config automatically creates a port on my SW1 and connects the container but DHCP to my isc-dhcp-server fails because my SW1 is vlan TAG=10.

    I tried various added parameters but nothing worked and I looked on the internet for clues. In the end I got an IP address handed to it by starting it, then after it was started running this ovs-vsctl command: sudo ovs-vsctl set port lxdora7a tag=10

    Then connecting to the LXD container using: lxc exec lxdora7a /bin/bash
    and then running “ifup eth0” to finally get a DHCP IP handed out.

    What can I put in my LXD config file to put the TAG=10 on it at boot time?

    1. Ryan Holt says:

      Did you ever get this sorted? Running into the same thing and would prefer to let OVS / LXD handle this ‘natively’ rather than resort to fake bridges…

  15. Harry says:

    Question of difference between LxD and Docker:

    I understand that Docker is an application container, and LxD a system/infrastructure container that runs a full Linux distribution inside of it, much like it would run in a VM.

    But what if I decide to run a Docker container corresponding to the *full* Ubuntu OS distribution? How would such a container compare with LxD? What all ‘features’, if anything, will this Docker container for Ubuntu OS be missing relative to an Ubuntu LxD container?

    Thanks in advance.

  16. rabia gharssa says:

    hi, i wish you could make a manual for LXD with the command lines and configuration.
    personally i want to build an infrastructure based on LXD for a study project and i didn’t find any documentation, except for that on ubuntu’s website.
    so i hope you will fix that. thx

    1. Look in the “doc” directory at https://github.com/lxc/lxd

      1. rabia gharssa says:

        thanks stéphane for the reply, i have one more question :
        are the containers meant to be used for cloud computing, or can they also be used for standard virtualization in an enterprise environment?
        thanks in advance.

  17. Stephane, you are a legend; would love to chat, mIRC, Slack?
    I have tried to engage Canonical in the UK but they don’t do Australian timezone and I’m already speaking to France after hours…..

    Anyway msged nsec.io as well again.
    But I will just get my guys to follow your blog setup I think and use the technologies you have specified here.

  18. dwayneclarke says:

    Thanks for your work and notes, they are a student’s handbook

  19. aventrax says:

    I have a question regarding the init system. You wrote that LXD doesn’t care about the init system, but I converted a full KVM virtual machine to LXD (changing all the uids/gids using a bash script), and the system works except that all the services seem to be down and I have to start them manually, beginning with /etc/init.d/network.
    Yes, the converted VM is not based on systemd but on the old init. Could that be the issue? How can I view the container “boot” logs?

