Network management with LXD (2.3+)


When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use the “lxdbr0” bridge that “lxd init” would have you configure, provide your own bridge, or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That changed with LXD 2.3, when LXD finally grew its own network management API and command-line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.nat: "true"
  ipv6.address: fd42:474b:622d:259d::1/64
  ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address= ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.nat: "true"
  ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container default eth0

Now, let’s say you have openvswitch installed on that machine and want to convert the bridge to an OVS bridge. Just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.
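If you prefer to script such batch changes, “lxc network edit” also accepts YAML on standard input, so you can round-trip the configuration. A sketch (the sed expression is just an example tweak):

```shell
# Dump the current network config, flip a key, and feed it back in.
lxc network show testbr0 > testbr0.yaml
sed -i 's/ipv4.nat: "true"/ipv4.nat: "false"/' testbr0.yaml
lxc network edit testbr0 < testbr0.yaml
```

Anything you could change interactively in the editor can be changed this way from a script.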

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address
root@yak:~# lxc start c1
root@yak:~# lxc list c1
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
|  c1  | RUNNING | (eth0) |      | PERSISTENT | 0         |

The same goes for IPv6, using the “ipv6.address” property instead.

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (as nesting would require), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true


LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports three different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
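As a small example, here is how you could pin down the domain and mode on the “testbr0” bridge from earlier (the “lxd.example” domain is a placeholder, not from the original post):

```shell
# Containers on testbr0 become resolvable as <container>.lxd.example,
# and only LXD-managed records are served (no DHCP self-registration).
lxc network set testbr0 dns.domain lxd.example
lxc network set testbr0 dns.mode managed
```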

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

A LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is with a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS and so on; the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local= tunnel.edfu.remote=
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.id 10
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol vxlan


root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.

And that’s how easy cross-host networking is with recent LXD!


Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here:

Extra information

The main LXD website is at:
Development happens on Github at:
Mailing-list support happens on:
IRC support happens in: #lxcontainers on
Try LXD online:

About Stéphane Graber

Project leader of Linux Containers, Linux hacker, Ubuntu core developer, conference organizer and speaker.
This entry was posted in Canonical voices, LXD and Planet Ubuntu.

44 Responses to Network management with LXD (2.3+)

  1. Nix666 says:

Hi Stephane, thanks for the post. I’m struggling with macvlan mode. Could you please guide me in the right direction to configure macvlan mode, so that my container is reachable by other hosts on the LAN within the same IP range?

    Thanks in advance.

    1. lxc network attach-profile eth0 default eth0

      That will change your default profile to use macvlan on the host’s eth0 device. Containers should then be getting IPs from your network’s DHCP.

      Note that this won’t work if you’re using VMWare (as it filters MAC addresses) and it will also prevent communication from your host and the containers (macvlan design limitation).
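One common workaround for that macvlan limitation, not covered in the reply above, is to give the host its own macvlan interface on the same parent; the interface name and address below are made-up examples:

```shell
# The host can't reach macvlan containers through eth0 directly,
# but it can through a macvlan interface of its own on the same parent.
ip link add mvlan0 link eth0 type macvlan mode bridge
ip addr add dev mvlan0
ip link set mvlan0 up
```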

      1. Nix666 says:

        Thanks, will have a look.

        1. Nix666 says:

          Got it working. Thank you again!

      2. Feng Yu says:

        Hi Stéphane Graber. I find your command is not working for wlan.

        $ lxc network attach-profile wlp2s0 default eth0

        $ lxc profile show default
        name: default
        config: {}
        description: Default LXD profile
        devices:
          eth0:
            nictype: macvlan
            parent: wlp2s0
            type: nic
        usedby:
        - /1.0/containers/ubuntu-xenial

        But the container does not get an IP from DHCP:
        $ lxc list
        | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
        | ubuntu-xenial | RUNNING | | | PERSISTENT | 0 |

  2. wtayyeb says:

    Could I use it for two hosts on different subnets?
    And in terms of security, is it secure on its own, or does it need other modules like IPsec?

  3. Awesome stuff! Very useful for my project to provide users with SSH access to the container inside a VM. This was really a missing piece of functionality for LXD.

  4. Maximus says:

    In the case of using GRE, the “client” host nuturo doesn’t run the DHCP service (I think because of ipv4.address=none and ipv6.address=none). That leads to failures when setting static leases and port security for containers on the nuturo host (edfu has no problem, of course). DHCP traffic will be broadcast to the edfu host.

    How can we solve it?

    Right, static assignment won’t work for the other hosts. Port security should be fine though, as right now it only prevents MAC spoofing, which will keep working. It doesn’t prevent you from keeping your own MAC and stealing another container’s IP right now. This should probably be changed to also cover IP spoofing, which will then indeed be a problem, as the remote host(s) won’t know what the expected IP address is and so won’t be able to lock it down…

      As I mentioned, tunnelling is a bit of a nice-to-have which is cool for demos and test environments. In production, you’d probably want to use a full-fledged SDN which would have proper cross-host configuration, providing you with a bridge that can be used with LXD and letting the SDN take care of giving only a single fixed IP per bridge port.

      1. Maximus says:

        Thanks for your reply.

        For “true port security”, I’m using ebtables with 2 rules: one for binding the MAC address to the interface and one for binding the IP address with MAC address.

        OpenDaylight and Open vSwitch sound good, but the OpenDaylight (flow) policy is so complicated, at least for me 😀

  5. Mateusz Korniak says:

    How do I set a static IP and ipv4.gateway=auto under LXD 2.6? I get a failure:

    # lxc config device set ubuntu-1604-macvlan-test virt ipv4.address
    error: The device doesn’t exist

    My config:
    # lxc profile show default
    name: default
    config: {}
    description: Default LXD profile
    devices:
      virt:
        nictype: macvlan
        parent: eth0
        type: nic
    usedby:
    - /1.0/containers/ubuntu-1604-macvlan-test

    # lxc config show --expanded ubuntu-1604-macvlan-test
    name: ubuntu-1604-macvlan-test
    profiles:
    - default
    config:
      image.architecture: amd64
      image.description: ubuntu 16.04 LTS amd64 (release) (20161130)
      image.label: release
      image.os: ubuntu
      image.release: xenial
      image.serial: "20161130"
      image.version: "16.04"
      volatile.base_image: fc6d723a6e662a5a4fe213eae6b7f4c79ee7dd566c99856d96a5ca677a99b15d
      volatile.idmap.base: "0"
      volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.power: RUNNING
      volatile.virt.hwaddr: 00:16:3e:e3:22:75
    devices:
      root:
        path: /
        type: disk
      virt:
        nictype: macvlan
        parent: eth0
        type: nic
    ephemeral: false

    Strange, I am unable to see the virt device in the container device list:
    # lxc config device list ubuntu-1604-macvlan-test
    root: disk

  6. Maximus says:

    Switch to Open vSwitch and using a SDN controller can make the LXD networking more flexible.
    Anyone have idea to make “floating IP” works while keeping the container running with 1 NIC. This is likely Digital Ocean floating IP feature.

  7. Frans Meulenbroeks says:

    The “network” subcommand does not seem to exist any more in 2.0.8 (the version I got when installing LXD on 16.04).

    Also I was hoping to find some info on how to use a 2nd network interface

    1. The “network” subcommand is available in LXD 2.3 and higher, LXD 2.0.x is lower than 2.3 so 2.0.8 doesn’t have this feature.

      You can install LXD 2.8 (current feature release) on Ubuntu 16.04 by doing “apt install -t xenial-backports lxd”. Do note that it’s not possible to downgrade versions, so if you decide to upgrade to the latest feature release, you will be jumping to new releases every month and won’t be able to get back to 2.0.x without losing all your containers and images.

      As for adding a second network interface, you can do that with “lxc config device add nic [nic options]…”. Or if you are on a LXD version which has the new network API, “lxc network attach” will do that for you.

      1. RM says:

        I tried adding a second interface with a different bridge; the second interface is seen after starting the container, but it is not getting an IP address.

        [root@pb04 ~]# lxc network show vxlan0
        config:
          ipv4.nat: "true"
          raw.dnsmasq: dhcp-option=option:domain-search,"lala"
          tunnel.lan.protocol: vxlan
        description: ""
        name: vxlan0
        type: bridge
        used_by: []
        managed: true
        status: Created
        locations:
        - none
        [root@pb04 ~]# lxc network show testbr0
        config:
          ipv4.nat: "true"
          raw.dnsmasq: dhcp-option=option:domain-search,lala
          tunnel.test.id: "10"
          tunnel.test.protocol: vxlan
        description: ""
        name: testbr0
        type: bridge
        used_by: []
        managed: true
        status: Created
        locations:
        - none
        [root@pb04 ~]# lxc profile show default
        config:
          limits.cpu: "4"
          limits.memory: 16GB
        description: Default LXD profile
        devices:
          eth0:
            nictype: bridged
            parent: vxlan0
            type: nic
          eth1:
            nictype: bridged
            parent: testbr0
            type: nic
          root:
            path: /
            pool: cloudian
            type: disk
        name: default
        used_by: []

        [root@pb04 ~]# lxc exec cloudian-pb04-1 -- ifconfig
        eth0: flags=4163 mtu 1400
        inet netmask broadcast
        inet6 fe80::216:3eff:fe3b:9c88 prefixlen 64 scopeid 0x20
        ether 00:16:3e:3b:9c:88 txqueuelen 1000 (Ethernet)
        RX packets 7 bytes 1052 (1.0 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 20 bytes 1970 (1.9 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

        eth1: flags=4163 mtu 1400
        inet6 fe80::216:3eff:fe7d:7859 prefixlen 64 scopeid 0x20
        ether 00:16:3e:7d:78:59 txqueuelen 1000 (Ethernet)
        RX packets 2 bytes 84 (84.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 8 bytes 656 (656.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

        lo: flags=73 mtu 65536
        inet netmask
        inet6 ::1 prefixlen 128 scopeid 0x10
        loop txqueuelen 1000 (Local Loopback)
        RX packets 2 bytes 100 (100.0 B)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 2 bytes 100 (100.0 B)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

        [root@pb04 ~]#

        Also, why is eth0 getting an IP address from a bridge it is not part of?

  8. LdxLover says:

    server ubuntu 16.04
    lxd 2.8
    (my server )ens3 Link encap:Ethernet
    inet addr:
    (bridge)lxdbr0 Link encap:Ethernet
    inet addr:
    (ldx testvm)eth0 Link encap:Ethernet
    inet addr:

    I want to bind with

    So that I can get access to testvm directly via ssh by login

    I am admiring your hard work on lxd and looking forward to your reply. Thank you.

    1. You’d do that the same way you would for a VM, with a good old iptables NAT rule on the host.
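For example, to forward SSH connections arriving on the host to the container, a DNAT rule along these lines would do; the interface name matches the ens3 shown above, while the port and container address are hypothetical placeholders (the original addresses aren’t shown):

```shell
# Redirect TCP port 2222 on the host to port 22 in the container,
# then let the usual LXD masquerading handle the return path.
iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 2222 \
    -j DNAT --to-destination
```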

  9. ramane says:

    In a static address environment in default profile i use

    name: eth0
    nictype: bridged
    parent: br0
    type: nic
    name: eth1
    nictype: bridged
    parent: br1
    type: nic

    On the host I need the device for firewalling based on ports.
    The generated name vethxxxxx is difficult to handle; I would prefer to give it a name like veth0c1 and veth1c1.
    How can I use my own name/naming scheme for the generated veth devices on the host’s side?

    1. We can’t use predictable names for those devices since a network interface name is limited to 15 characters and a container’s name is limited to 64 characters.

      You can however tell LXD what name to use on the host by adding the device to the container directly rather than through a profile and setting the host_name property.

      lxc config device add c1 eth0 nic nictype=bridged parent=br0 name=eth0 host_name=veth0c1
      lxc config device add c1 eth1 nic nictype=bridged parent=br1 name=eth1 host_name=veth1c1

      The obvious downside to this approach is that you have to do it for every one of the containers where you care about the host device name.

      1. ramane says:

        work perfectly! Thnx

  10. lxc network show br0
    error: cannot open 'lxd': dataset does not exist

    1. That error means that LXD couldn’t find the ZFS storage pool called “lxd” on your system.
      This shows up because as part of the “lxc network show” command, LXD has to check what containers are using the network.

      It sounds like there’s something badly broken with storage on your machine if that ZFS pool isn’t available anymore.

  11. mg says:

    I’m trying to set up nested LXD containers (c2 inside c1 inside VM). The lxd bridge inside c1 seems to support only IPv6.
    My “outer” container c1 is running fine (Ubuntu 16.04.2 LTS, almost unmodified image, except for static eth0 setup). Inside c1 I install a current lxd (2.9.2) and execute the following commands:

    # lxd init (default settings, new network bridge = no)

    # lxc network create lxdbr0 ipv6.address=none ipv4.address= ipv4.nat=true
    -> error: Failed to list ipv4 rules for lxdbr0 (table )

    # service lxd restart

    # lxc network attach-profile lxdbr0 default eth0

    # lxc image copy ubuntu:x local: --copy-aliases --auto-update

    # lxc launch x c2

    Afterwards, container c2 has an IPv6 address:
    # lxc info c2

    eth0: inet6 fe80::216:3eff:fe9f:d731 vethIW5TNY

    Is there a relation to the error message mentioned above? Is this intended behaviour?

    That error suggests that iptables isn’t working, which in turn prevents most of that bridge from starting.

      If you run “iptables -L -n” in your container you may be getting a clearer error.

      1. mg says:

        # iptables -L -n
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        ACCEPT tcp -- tcp dpt:53 /* generated for LXD network lxdbr0 */
        ACCEPT udp -- udp dpt:53 /* generated for LXD network lxdbr0 */
        ACCEPT udp -- udp dpt:67 /* generated for LXD network lxdbr0 */

        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        ACCEPT all -- /* generated for LXD network lxdbr0 */
        ACCEPT all -- /* generated for LXD network lxdbr0 */

        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
        ACCEPT tcp -- tcp spt:53 /* generated for LXD network lxdbr0 */
        ACCEPT udp -- udp spt:53 /* generated for LXD network lxdbr0 */
        ACCEPT udp -- udp spt:67 /* generated for LXD network lxdbr0 */

        Is this configuration correct?

  12. ssahlender says:


    I’m struggling a little bit to get bridged networking working. My containers don’t get any IP. I want the containers to get their IP from my network-wide DHCP server.

    I think the basic problem is that if I run:

    ip a

    I see that eth0 has an IP but not lxdbr0 … is this normal? Because when I check other tutorials, it’s always the bridge interface that has the IP, not eth0.

    And the question is how to change it … I don’t see lxdbr0 in /etc/network/interfaces


    1. Patrick Goetz says:

      lxdbr0 is created by lxc, so you won’t find it in /etc/network/interfaces.

      You can have this created for you using the `lxd init` command, but it sounds like you already have a bridge for which you can edit the configuration. When you type:

      # lxc network show lxdbr0

      You should see something like this:

      config:
        ipv4.nat: "true"
        ipv6.address: none
      description: ""
      name: lxdbr0
      type: bridge
      used_by:
      - /1.0/containers/archon
      - /1.0/containers/archon2
      managed: true
      status: Created
      locations:
      - none

      You can then edit this using this command:

      # lxc network edit lxdbr0

  13. vikrant says:

    I was trying to find a setup which uses macsec with unencrypted gre tunnels between two hosts on cloud providers. Do you have any recommendation? I didn’t see any examples yet with ubuntu and lxd.

  14. Tim says:

    I think the attach-profile command you’ve given is wrong, it should be:
    lxc network attach-profile local:lxdbr0 default eth0
    (you didn’t specify local:)

    Also, when doing this on an existing configuration I get:
    $ sudo lxc network attach-profile local:lxdbr0 default eth0
    error: device already exists

    Is there a way around this?

    Finally, does the /etc/default/lxd-bridge config file still have any effect in LXD 2.13? The lxd-bridge service has gone away, so I was wondering is there another config file I can use to configure bridges?

  15. del says:

    I have almost same situation like ssahlender.
    I would like to know if there is an LXD mechanism that allows the created bridge to take its IP address from an external DHCP server, and likewise for containers with interfaces attached to that bridge to get their addresses via external DHCP.
    This setup works but with manual intervention (not with LXD instruments).

    1. If you want to connect your containers to the outside network, macvlan is usually the easiest way to do so.
      Simply “lxc network attach-profile eth0 default eth0 eth0”

      The main downside of this is that your containers will not be able to talk to the host as that’s the main limitation of macvlan.
      If you want to avoid that limitation, you indeed need to use bridged networking.

      LXD can absolutely create a bridge without DHCP or anything on the LXD side and just bridge to a physical device, for example:
      lxc network create testbr0 ipv4.address=none ipv6.address=none bridge.external_interfaces=eth0

      The main issue with this approach is that it can only work if the external interface is unconfigured.

      If you have configuration on that interface, then you need to set up the bridge at the system level and just tell LXD to use it.
      We don’t want LXD to start deconfiguring and reconfiguring system interfaces.

  16. tlc says:

    To put containers on the local LAN, do we still need to define a bridge manually in /etc/network/interfaces ? Or can ‘lxc network’ do that too?

  17. Leo Arias says:

    Trusty doesn’t have the lxc network command. I’ve asked in askubuntu how to achieve basic networking in trusty:

  18. Andy says:


    Thank you for the great tutorial! I want to set up some LXD containers with 2 NICs. So I copied the default profile and added a device (eth1 in this case) with a different bridge as parent.

    When I now create a new container with this profile, the 2 NICs are visible in the new container. But only eth0 has an IPv4 address. To get an IPv4 address assigned to eth1, I have to add

    auto eth1
    iface eth1 inet dhcp

    to /etc/network/interfaces.d/50-cloud-init.cfg. After a restart of the container, lxc list shows me that both devices got IPv4 addresses assigned.

    How can I get LXD to automatically assign IPv4 addresses to all NICs via DHCP?

    Best regards!


    1. ag says:

      Did you find a solution to let LXD start the second NIC? I’m using LXD 2.18 now, still the same issue; only the first NIC is started in the container. Thanks

  19. John says:

    Why does it have to be that when a product evolves it becomes much harder to use to do simple things?
    For example, it looks like it used to be so simple and easy to launch a container and have it request a DHCP address from your local LAN DHCP server, so that it can see the internet and be seen by all the systems on my local LAN.
    I have been trying to glean this from all the different iterations of documentation for LXC/LXD (what is the difference again, and how can you tell which one you are using when it seems that you use both lxd and lxc commands?) but I fail to understand how to accomplish this most basic of tasks in the latest stable version.
    I understand that there is great power presented in the new network capabilities and I hope to eventually learn how to use this great power but I am a firm believer in learning how to walk before learning how to run. The first rung of this ladder seems to be just out of my grasp. If I just continue to stumble I am afraid I will be forced to continue to use the virtual machine solutions and never realize the power of containers.

  20. del says:

    I’m running LXD 2.0.10 and my question is: is it possible to add a second bridge to LXD?
    I would like to attach different networks to different containers.

    1. Robonoob says:

      Hi Del,

      Do you have a solution on that now?
      I am also trying to do the same thing, but no success yet.

      I am using netplan on Ubuntu 18.04 with LXD 3 to set up 2 NICs connected to 2 networks, and I created 2 network bridges, one for each NIC, using the lxc network command. However, I can get containers using one bridge connected to one network, but I can’t get containers using the other bridge to connect to the other network.

  21. Stefan Sahlender says:

    @Stéphane Graber
    Thanks for your reply. With macvlan it is working fine … the containers get their IP from my local DHCP.
    But just for understanding … what is the reason that it is not working with the bridged interface?
    If I want to get around the disadvantages of macvlan (can’t access the parent host) … and if I understand it right … the containers need to have static IPs, right? But how do I set them up (the static IPs)?

  22. Gabrie says:

    Hi, I’m in trouble: when I type lxc network show lxdbr3, for example, the system shows me a config file for this bridge, but where is it located? And can I have more than one bridge in LXD, and how can I assign one or two bridges to a container?

    Thank you!

  23. R Hudumula says:

    I am trying to connect the containers to the outside network using bridged networking. The issue I have with this config is that the containers can’t talk to other VMs in the same subnet, as the replies to ARP requests originating from the containers never arrive.

    Here is my config:

    auto eth0
    iface eth0 inet manual

    auto br0
    iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 20

    I modified the default lxd profile to use this br0 instead of lxdbr0.

  24. Julio S says:

    For the case of VXLAN, why do the hosts need to be on the same physical segment? Isn’t one of the benefits of VXLAN to allow you to create (i.e. tunnel) an L2 network across L3 networks?

  25. Cyrille says:

    Hello Stéphane,

    Firstly, thanks for your work on LXC/LXD !

    Then I would like to share a problem I have, if someone could help.
    I made 2 LXCs on 2 different servers (the servers are on the same physical network). No problem to ping between them: the LXCs are bridged in order to reach the external network, and GRE tunnels are set up to allow communication between the 2 LXCs.

    My problem comes when an SCTP connection is in the process of being established:
    – 1st LXC is sending a SCTP INIT
    – 2nd LXC is answering a SCTP INIT ACK
    – 1st LXC is not sending a SCTP COOKIE ECHO
    – of course, then, 2nd LXC is not answering a SCTP COOKIE ACK.

    When implementing the 2 LXCs on the same server (with a bridge and no GRE tunnel, because they can reach each other directly), the full SCTP procedure completes without problem.

    Is it simply a problem of the GRE tunnel preventing SCTP?
    Is there some other reason linked to the LXC configuration?
    What kind of test should I run to find out?

    If somebody would like to help a newbie network tester…

    Thank you,


  26. Dan says:

    I am trying to configure an Ubuntu 16.04 container to have the same IP address as the host. The reason is we want to use criu to migrate an application from the host to the container and criu requires the container to have the same IP address as the host for TCP re-connection to work. Can this be done with LXD using a VLAN?
