Network management with LXD (2.3+)


Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use the “lxdbr0” bridge that “lxd init” would have you configure, provide your own bridge, or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.150.19.1/24
 ipv4.nat: "true"
 ipv6.address: fd42:474b:622d:259d::1/64
 ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.0.3.1/24
 ipv4.nat: "true"
 ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container eth0

Now, let’s say you have Open vSwitch installed on that machine and want to convert the bridge to an OVS bridge. Just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.
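If you’d rather script those bulk changes than go through an interactive editor, the “edit” commands also accept YAML on standard input, so you can dump, tweak and re-apply a network configuration non-interactively. A rough sketch (the sed tweak is just an example change):

```shell
# Dump the current network configuration to a file.
lxc network show testbr0 > testbr0.yaml

# Tweak it however you like, e.g. turn off IPv4 NAT:
sed -i 's/ipv4.nat: "true"/ipv4.nat: "false"/' testbr0.yaml

# Feed the whole configuration back in one go.
lxc network edit testbr0 < testbr0.yaml
```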

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
|  c1  | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And the same goes for IPv6, using the “ipv6.address” property instead.
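For example, assuming the auto-generated fd42:474b:622d:259d::/64 subnet from earlier, a static IPv6 lease might look like this (the host part of the address is just a placeholder):

```shell
# The address must sit inside the bridge's IPv6 subnet.
lxc config device set c1 eth0 ipv6.address fd42:474b:622d:259d::123

# The lease is handed out on the next DHCPv6 negotiation, so
# restarting the container is the simplest way to pick it up.
lxc restart c1
```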

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (such as when nesting containers), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
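As a quick illustration of those properties (“lxd.example” is just a placeholder domain), you could do:

```shell
# Give the bridge a custom DNS domain and keep the safe default mode.
lxc network set testbr0 dns.domain lxd.example
lxc network set testbr0 dns.mode managed

# Containers on the bridge can then resolve each other by name, e.g.:
#   host c1.lxd.example
```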

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

An LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is with a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS and so on; the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.edfu.id 10
root@edfu:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.
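Once both ends are configured, a simple way to confirm the tunnel works is to launch a container on each host and ping across. The container names and the target address below are placeholders; check the real addresses with “lxc list”:

```shell
# On edfu:
lxc launch ubuntu:16.04 c-edfu

# On nuturo:
lxc launch ubuntu:16.04 c-nuturo

# Both containers should get addresses from edfu's DHCP range.
# From edfu, ping the address that c-nuturo got:
lxc exec c-edfu -- ping -c3 10.0.3.50
```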

And that’s how easy it is to set up cross-host networking with recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

This entry was posted in Canonical voices, LXD and Planet Ubuntu.

24 Responses to Network management with LXD (2.3+)

  1. Nix666 says:

    Hi Stephane, thanks for the post. I’m struggling with macvlan mode. Could you please guide me in the right direction to configure macvlan mode, so that my container is reachable by other hosts on the LAN in the same IP range?

    Thanks in advance.

    • lxc network attach-profile eth0 default eth0

      That will change your default profile to use macvlan on the host’s eth0 device. Containers should then be getting IPs from your network’s DHCP.

      Note that this won’t work if you’re using VMWare (as it filters MAC addresses) and it will also prevent communication from your host and the containers (macvlan design limitation).

      • Nix666 says:

        Thanks, will have a look.

      • Feng Yu says:

        Hi Stéphane Graber. I find your command is not working for wlan.

        $ lxc network attach-profile wlp2s0 default eth0

        $ lxc profile show default
        name: default
        config: {}
        description: Default LXD profile
        devices:
          eth0:
            nictype: macvlan
            parent: wlp2s0
            type: nic
        usedby:
        - /1.0/containers/ubuntu-xenial

        But the container would not get an IP from DHCP:
        $ lxc list
        +---------------+---------+------+------+------------+-----------+
        |     NAME      |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
        +---------------+---------+------+------+------------+-----------+
        | ubuntu-xenial | RUNNING |      |      | PERSISTENT | 0         |
        +---------------+---------+------+------+------------+-----------+

  2. wtayyeb says:

    Hi,
    Could I use this for two hosts on different subnets?
    And as for security, is it secure on its own, or does it need something like IPsec?

  3. Awesome stuff! Very useful for my project https://exploit.courses to provide users with SSH access to the container inside a VM. This was really a missing piece of functionality for LXD.

  4. Maximus says:

    When using GRE, the “client” host nuturo doesn’t run the DHCP service (I think because of ipv4.address=none and ipv6.address=none). That leads to failures when setting static leases and port security for containers on the nuturo host (on edfu there’s no problem, of course). DHCP traffic will be broadcast to the edfu host.

    How can we solve it?

      Right, static assignment won’t work for the other hosts. Port security should be fine though, as right now it only prevents MAC spoofing, which will keep working. It doesn’t prevent you from keeping your own MAC and stealing another container’s IP right now. This should probably be changed to also cover IP spoofing, which will then indeed be a problem as the remote host(s) won’t know what the expected IP address is and so won’t be able to lock it down…

      As I mentioned, tunnelling is a bit of a nice-to-have which is cool for demos and test environments. In production, you’d probably want to use a full-fledged SDN which would have proper cross-host configuration, providing you with a bridge that can be used with LXD and letting the SDN take care of giving only a single fixed IP per bridge port.

      • Maximus says:

        Thanks for you reply.

        For “true port security”, I’m using ebtables with 2 rules: one for binding the MAC address to the interface and one for binding the IP address with MAC address.

        OpenDaylight and Open vSwitch sound good, but the OpenDaylight policies (flows) are so complicated, at least for me 😀

  5. Mateusz Korniak says:

    How do I set a static IP and ipv4.gateway=auto under LXD 2.6? I get a failure:

    # lxc config device set ubuntu-1604-macvlan-test virt ipv4.address 10.30.3.205
    error: The device doesn’t exist

    My config:
    # lxc profile show default
    name: default
    config: {}
    description: Default LXD profile
    devices:
      virt:
        nictype: macvlan
        parent: eth0
        type: nic
    usedby:
    - /1.0/containers/ubuntu-1604-macvlan-test

    # lxc config show --expanded ubuntu-1604-macvlan-test
    name: ubuntu-1604-macvlan-test
    profiles:
    - default
    config:
      image.architecture: amd64
      image.description: ubuntu 16.04 LTS amd64 (release) (20161130)
      image.label: release
      image.os: ubuntu
      image.release: xenial
      image.serial: "20161130"
      image.version: "16.04"
      volatile.base_image: fc6d723a6e662a5a4fe213eae6b7f4c79ee7dd566c99856d96a5ca677a99b15d
      volatile.idmap.base: "0"
      volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.power: RUNNING
      volatile.virt.hwaddr: 00:16:3e:e3:22:75
      volatile.virt.name: eth0
    devices:
      root:
        path: /
        type: disk
      virt:
        nictype: macvlan
        parent: eth0
        type: nic
    ephemeral: false

    Strange, I am unable to see the virt device in the container’s device list:
    # lxc config device list ubuntu-1604-macvlan-test
    root: disk

  6. Maximus says:

    Switching to Open vSwitch and using an SDN controller can make LXD networking more flexible.
    Does anyone have an idea how to make a “floating IP” work while keeping the container running with one NIC? Something like the DigitalOcean floating IP feature.

  7. Frans Meulenbroeks says:

    The “network” subcommand does not seem to exist any more in 2.0.8 (the version I got when installing LXD on 16.04).

    Also, I was hoping to find some info on how to use a second network interface.

    • The “network” subcommand is available in LXD 2.3 and higher, LXD 2.0.x is lower than 2.3 so 2.0.8 doesn’t have this feature.

      You can install LXD 2.8 (current feature release) on Ubuntu 16.04 by doing “apt install -t xenial-backports lxd”. Do note that it’s not possible to downgrade versions, so if you decide to upgrade to the latest feature release, you will be jumping to new releases every month and won’t be able to get back to 2.0.x without losing all your containers and images.

      As for adding a second network interface, you can do that with “lxc config device add <container> <device name> nic [nic options]…”. Or if you are on a LXD version which has the new network API, “lxc network attach” will do that for you.

  8. LdxLover says:

    server ubuntu 16.04
    lxd 2.8
    (my server )ens3 Link encap:Ethernet
    inet addr:69.12.89.124
    (bridge)lxdbr0 Link encap:Ethernet
    inet addr:10.253.210.1
    (lxd testvm)eth0 Link encap:Ethernet
    inet addr:10.253.210.198

    I want to bind 10.253.210.198:22 with 69.12.89.124:2222

    So that I can get access to testvm directly via ssh by login 69.12.89.124:2222

    I am admiring your hard work on lxd and looking forward to your reply. Thank you.

  9. ramane says:

    In a static address environment, in the default profile I use:

    eth0:
      name: eth0
      nictype: bridged
      parent: br0
      type: nic
    eth1:
      name: eth1
      nictype: bridged
      parent: br1
      type: nic

    On the host I need the device names for firewalling based on ports.
    The generated vethxxxxx names are difficult to handle; I would prefer names like veth0c1 and veth1c1.
    How can I use my own name/naming scheme for the generated veth devices on the host’s side?

    • We can’t use predictable names for those devices since a network interface name is limited to 15 characters and a container’s name is limited to 64 characters.

      You can however tell LXD what name to use on the host by adding the device to the container directly rather than through a profile and setting the host_name property.

      lxc config device add c1 eth0 nic nictype=bridged parent=br0 name=eth0 host_name=veth0c1
      lxc config device add c1 eth1 nic nictype=bridged parent=br1 name=eth1 host_name=veth1c1

      The obvious downside to this approach is that you have to do it for every one of the containers where you care about the host device name.

  10. lxc network show br0
    error: cannot open ‘lxd’: dataset does not exist

    • That error means that LXD couldn’t find the ZFS storage pool called “lxd” on your system.
      This shows up because as part of the “lxc network show” command, LXD has to check what containers are using the network.

      It sounds like there’s something badly broken with storage on your machine if that ZFS pool isn’t available anymore.

  11. mg says:

    Hi,
    I’m trying to set up nested LXD containers (c2 inside c1 inside VM). The lxd bridge inside c1 seems to support only IPv6.
    My “outer” container c1 is running fine (Ubuntu 16.04.2 LTS, almost unmodified image, except for a static eth0 setup). Inside c1 I install a current LXD (2.9.2) and execute the following commands:

    # lxd init (default settings, new network bridge = no)

    # lxc network create lxdbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
    -> error: Failed to list ipv4 rules for lxdbr0 (table )

    # service lxd restart

    # lxc network attach-profile lxdbr0 default eth0

    # lxc image copy ubuntu:x local: --copy-aliases --auto-update

    # lxc launch x c2

    Afterwards, container c2 has an IPv6 address:
    # lxc info c2

    Ips:
    eth0: inet6 fe80::216:3eff:fe9f:d731 vethIW5TNY

    Is there a relation to the error message mentioned above? Is this an intended behaviour?

    That error suggests that iptables isn’t working, which in turn prevents most of that bridge setup from completing.

      If you run “iptables -L -n” in your container you may be getting a clearer error.

      • mg says:

        # iptables -L -n
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:53 /* generated for LXD network lxdbr0 */
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:53 /* generated for LXD network lxdbr0 */
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:67 /* generated for LXD network lxdbr0 */

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* generated for LXD network lxdbr0 */
        ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* generated for LXD network lxdbr0 */

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp spt:53 /* generated for LXD network lxdbr0 */
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp spt:53 /* generated for LXD network lxdbr0 */
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp spt:67 /* generated for LXD network lxdbr0 */

        Is this configuration correct?
