Introduction
When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could use the “lxdbr0” bridge that “lxd init” would have you configure, provide your own bridge, or just use an existing physical interface for your containers.
While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.
That all changed with LXD 2.3, when LXD finally grew its own network management API and command line tools to match. This post is an overview of those new capabilities.
Basic networking
Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.
To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:
stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created
You can then look at its config with:
stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.address: 10.150.19.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:474b:622d:259d::1/64
  ipv6.nat: "true"
managed: true
type: bridge
usedby: []
If you don’t want those auto-configured subnets, you can go with:
stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created
Which will result in:
stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.address: 10.0.3.1/24
  ipv4.nat: "true"
  ipv6.address: none
managed: true
type: bridge
usedby: []
Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:
stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0
To attach a network to a single existing container, you can do:
stgraber@castiana:~$ lxc network attach testbr0 my-container default eth0
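Either way, it's worth checking that the nic device actually landed where you expect. A quick sketch (exact output shape may vary between LXD versions):

```shell
# Inspect the devices carried by the default profile
lxc profile device show default
# Or the devices of a single container
lxc config device show my-container
```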
Now, let's say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge. Just change the driver property:
stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch
If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.
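“lxc network edit” also accepts YAML on standard input, which is handy for scripting. A sketch (the sed expression is purely illustrative):

```shell
# Non-interactive edit: flip NAT off by piping modified YAML back in
lxc network show testbr0 | sed 's/ipv4.nat: "true"/ipv4.nat: "false"/' | lxc network edit testbr0
```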
Static leases and port security
One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.
root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME | STATE   | IPV4              | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
| c1   | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+
And same goes for IPv6 but with the “ipv6.address” property instead.
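For example, a hedged sketch (the address is illustrative, and this assumes your bridge has an IPv6 subnet configured, unlike the ipv6.address=none example above):

```shell
# Static IPv6 lease for c1 (example address from a fd42::/64 subnet)
lxc config device set c1 eth0 ipv6.address fd42:474b:622d:259d::123
```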
Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (as a nested container would), you can enable port security with:
root@yak:~# lxc config device set c1 eth0 security.mac_filtering true
DNS
LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (the “dns.domain” network property), it also supports three different operating modes (“dns.mode”):
- “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
- “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
- “none” is for a simple recursive DNS server without any kind of local DNS records.
The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
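Both knobs are plain network properties, so they are set like any other. A quick sketch (the domain name is illustrative):

```shell
# Give the bridge its own DNS domain and let containers self-register over DHCP
lxc network set testbr0 dns.domain lxd.example
lxc network set testbr0 dns.mode dynamic
```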
Using tunnels
On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.
An LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test, and demo uses, with production environments usually preferring VLANs for that kind of segmentation.
So say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is with a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.
root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0
This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, “edfu” acts as the router for that network, providing DHCP, DNS, etc.; the other hosts just forward traffic over the tunnel.
root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0
Now you can start containers on either host and see them get IPs from the same address pool and communicate directly with each other through the tunnel.
As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.
To join another host using GRE, first configure the main host with:
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9
And then the “client” host with:
root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0
If you’d rather use vxlan, just do:
root@edfu:~# lxc network set testbr0 tunnel.nuturo.id 10
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol vxlan
And:
root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan
The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.
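At this point the bridge carries both tunnels side by side, each under its own name, and any individual key can be read back with “lxc network get”, for example:

```shell
# Read back a single tunnel property (adjust the tunnel name to yours)
lxc network get testbr0 tunnel.lan.protocol
```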
And that’s how easy cross-host networking is with recent LXD!
Conclusion
LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.
While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md
Extra information
The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it
Hi Stephane, thanks for the post. I’m struggling with macvlan mode. Could you please point me in the right direction to configure macvlan mode, so that my container is reachable by other hosts on the LAN within the same IP range?
Thanks in advance.
lxc network attach-profile eth0 default eth0
That will change your default profile to use macvlan on the host’s eth0 device. Containers should then be getting IPs from your network’s DHCP.
Note that this won’t work if you’re using VMWare (as it filters MAC addresses) and it will also prevent communication between your host and the containers (a macvlan design limitation).
Thanks, will have a look.
Got it working. Thank you again!
Hi Stéphane Graber. I find your command does not work for wlan.
$ lxc network attach-profile wlp2s0 default eth0
$ lxc profile show default
name: default
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: wlp2s0
    type: nic
usedby:
- /1.0/containers/ubuntu-xenial
But the container would not get an IP from DHCP:
$ lxc list
+---------------+---------+------+------+------------+-----------+
| NAME          | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+---------------+---------+------+------+------------+-----------+
| ubuntu-xenial | RUNNING |      |      | PERSISTENT | 0         |
+---------------+---------+------+------+------------+-----------+
Hi,
Could this be used for two hosts on different subnets?
And is it secure on its own, or does it need something like IPsec on top?
Awesome stuff! Very useful for my project https://exploit.courses to provide users with SSH access to the container inside a VM. This was really a missing piece of functionality for LXD.
When using GRE, the “client” host nuturo doesn’t run the DHCP service (I think because of ipv4.address=none and ipv6.address=none). That leads to failures when setting static leases and port security for containers on the nuturo host (edfu has no problem, of course). DHCP traffic is broadcast to the edfu host.
How can we solve it?
Right, static assignment won’t work for the other hosts. Port security should be fine though, as right now it only prevents MAC spoofing, which will keep working. It doesn’t prevent you from keeping your own MAC and stealing another container’s IP right now. This should probably be changed to also cover IP spoofing, which will then indeed be a problem as the remote host(s) won’t know what the expected IP address is and so won’t be able to lock it down…
As I mentioned, tunnelling is a bit of a nice to have which is cool for demos and test environments. In production, you’d probably want to use a full fledged SDN which would have proper cross-host configuration, providing you with a bridge that can be used with LXD and letting the SDN take care of giving only a single fixed IP per bridge port.
Thanks for your reply.
For “true port security”, I’m using ebtables with two rules: one binding the MAC address to the interface and one binding the IP address to the MAC address.
Open Daylight and Open vSwitch sound good, but OpenDaylight’s flow policies are so complicated, at least for me 😀
How do I set a static IP and ipv4.gateway=auto under LXD 2.6? I get a failure:
# lxc config device set ubuntu-1604-macvlan-test virt ipv4.address 10.30.3.205
error: The device doesn’t exist
My config:
# lxc profile show default
name: default
config: {}
description: Default LXD profile
devices:
  virt:
    nictype: macvlan
    parent: eth0
    type: nic
usedby:
- /1.0/containers/ubuntu-1604-macvlan-test
# lxc config show --expanded ubuntu-1604-macvlan-test
name: ubuntu-1604-macvlan-test
profiles:
- default
config:
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20161130)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20161130"
  image.version: "16.04"
  volatile.base_image: fc6d723a6e662a5a4fe213eae6b7f4c79ee7dd566c99856d96a5ca677a99b15d
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.virt.hwaddr: 00:16:3e:e3:22:75
  volatile.virt.name: eth0
devices:
  root:
    path: /
    type: disk
  virt:
    nictype: macvlan
    parent: eth0
    type: nic
ephemeral: false
Strange, I am unable to see the virt device in the container’s device list:
# lxc config device list ubuntu-1604-macvlan-test
root: disk
Switching to Open vSwitch and using an SDN controller can make LXD networking more flexible.
Anyone have an idea how to make a “floating IP” work while keeping the container running with one NIC? Something like DigitalOcean’s floating IP feature.
The “network” subcommand does not seem to exist in 2.0.8 (the version I got when installing LXD on 16.04).
Also, I was hoping to find some info on how to use a second network interface.
The “network” subcommand is available in LXD 2.3 and higher, LXD 2.0.x is lower than 2.3 so 2.0.8 doesn’t have this feature.
You can install LXD 2.8 (current feature release) on Ubuntu 16.04 with “apt install -t xenial-backports lxd”. Do note that it’s not possible to downgrade versions, so if you decide to upgrade to the latest feature release, you will be jumping to new releases every month and won’t be able to get back to 2.0.x without losing all your containers and images.
As for adding a second network interface, you can do that with “lxc config device add <container> <device> nic [nic options]…”. Or, if you are on a LXD version which has the new network API, “lxc network attach” will do that for you.
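As a concrete sketch of the first form (the container and bridge names here are assumptions for illustration):

```shell
# Add a second interface eth1 to container c1, bridged to an existing br1
lxc config device add c1 eth1 nic nictype=bridged parent=br1
```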
I tried adding a second interface with a different bridge; however, while the second interface is visible after starting the container, it is not getting an IP address.
[root@pb04 ~]# lxc network show vxlan0
config:
  ipv4.address: 10.146.2.1/24
  ipv4.nat: "true"
  raw.dnsmasq: dhcp-option=option:domain-search,"lala"
  tunnel.lan.protocol: vxlan
description: ""
name: vxlan0
type: bridge
used_by: []
managed: true
status: Created
locations:
- none
[root@pb04 ~]# lxc network show testbr0
config:
  ipv4.address: 10.37.183.1/24
  ipv4.nat: "true"
  raw.dnsmasq: dhcp-option=option:domain-search,lala
  tunnel.test.id: "10"
  tunnel.test.protocol: vxlan
description: ""
name: testbr0
type: bridge
used_by: []
managed: true
status: Created
locations:
- none
[root@pb04 ~]# lxc profile show default
config:
  limits.cpu: "4"
  limits.memory: 16GB
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: vxlan0
    type: nic
  eth1:
    nictype: bridged
    parent: testbr0
    type: nic
  root:
    path: /
    pool: cloudian
    type: disk
name: default
used_by: []
[root@pb04 ~]# lxc exec cloudian-pb04-1 -- ifconfig
eth0: flags=4163  mtu 1400
        inet 10.37.183.163  netmask 255.255.255.0  broadcast 10.37.183.255
        inet6 fe80::216:3eff:fe3b:9c88  prefixlen 64  scopeid 0x20
        ether 00:16:3e:3b:9c:88  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 1052 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1970 (1.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
eth1: flags=4163  mtu 1400
        inet6 fe80::216:3eff:fe7d:7859  prefixlen 64  scopeid 0x20
        ether 00:16:3e:7d:78:59  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 84 (84.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2  bytes 100 (100.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 100 (100.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@pb04 ~]#
Also, why is eth0 getting an IP address from a bridge it is not part of?
server ubuntu 16.04
lxd 2.8
(my server) ens3   Link encap:Ethernet
            inet addr:69.12.89.124
(bridge) lxdbr0    Link encap:Ethernet
            inet addr:10.253.210.1
(lxd testvm) eth0  Link encap:Ethernet
            inet addr:10.253.210.198
I want to bind 10.253.210.198:22 to 69.12.89.124:2222,
so that I can SSH directly into testvm by connecting to 69.12.89.124:2222.
I am admiring your hard work on lxd and looking forward to your reply. Thank you.
You’d do that the same way you would for a VM, with a good old iptables NAT rule on the host.
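A sketch of such a rule, using the addresses from the question above (interface names, addresses, and ports are taken from that setup, so adjust them to yours):

```shell
# DNAT: forward host port 2222 to the container's SSH port.
# Outbound replies are handled by the NAT lxdbr0 already sets up.
iptables -t nat -A PREROUTING -p tcp -d 69.12.89.124 --dport 2222 \
    -j DNAT --to-destination 10.253.210.198:22
```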
In a static address environment in default profile i use
eth0:
name: eth0
nictype: bridged
parent: br0
type: nic
eth1:
name: eth1
nictype: bridged
parent: br1
type: nic
On the host I need the device for firewalling based on ports.
The generated name vethxxxxx is difficult to handle; I would prefer to give it a name like veth0c1 and veth1c1.
How can I use my own name/naming scheme for the generated veth devices on the host side?
We can’t use predictable names for those devices since a network interface name is limited to 15 characters and a container’s name is limited to 64 characters.
You can however tell LXD what name to use on the host by adding the device to the container directly rather than through a profile and setting the host_name property.
lxc config device add c1 eth0 nic nictype=bridged parent=br0 name=eth0 host_name=veth0c1
lxc config device add c1 eth1 nic nictype=bridged parent=br0 name=eth1 host_name=veth1c1
The obvious downside to this approach is that you have to do it for every one of the containers where you care about the host device name.
Works perfectly! Thanks!
lxc network show br0
error: cannot open ‘lxd’: dataset does not exist
That error means that LXD couldn’t find the ZFS storage pool called “lxd” on your system.
This shows up because as part of the “lxc network show” command, LXD has to check what containers are using the network.
It sounds like there’s something badly broken with storage on your machine if that ZFS pool isn’t available anymore.
Hi,
I’m trying to set up nested LXD containers (c2 inside c1 inside VM). The lxd bridge inside c1 seems to support only IPv6.
My “outer” container c1 is running fine (Ubuntu 16.04.2 LTS, almost unmodified image, except for static eth0 setup). Inside c1 I install a current lxd (2.9.2) and execute the following commands:
# lxd init (default settings, new network bridge = no)
# lxc network create lxdbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
-> error: Failed to list ipv4 rules for lxdbr0 (table )
# service lxd restart
# lxc network attach-profile lxdbr0 default eth0
# lxc image copy ubuntu:x local: --copy-aliases --auto-update
# lxc launch x c2
Afterwards, container c2 has an IPv6 address:
# lxc info c2
…
Ips:
eth0: inet6 fe80::216:3eff:fe9f:d731 vethIW5TNY
…
Is this related to the error message mentioned above? Is this intended behaviour?
That error suggests that iptables isn’t working, which in turn prevents most of that bridge from starting.
If you run “iptables -L -n” in your container you may be getting a clearer error.
# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:53 /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  0.0.0.0/0    0.0.0.0/0    udp dpt:53 /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  0.0.0.0/0    0.0.0.0/0    udp dpt:67 /* generated for LXD network lxdbr0 */
Chain FORWARD (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0    /* generated for LXD network lxdbr0 */
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0    /* generated for LXD network lxdbr0 */
Chain OUTPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp spt:53 /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  0.0.0.0/0    0.0.0.0/0    udp spt:53 /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  0.0.0.0/0    0.0.0.0/0    udp spt:67 /* generated for LXD network lxdbr0 */
Is this configuration correct?
Hello,
I’m struggling a little bit to get the bridged network working. My containers don’t get any IP. I want the containers to get their IPs from my network-wide DHCP server.
I think the basic problem is that if I run:
ip a
I see that eth0 has an IP but lxdbr0 does not… is this normal? In the other tutorials I checked, the bridge interface always has the IP, not eth0.
And the question is how to change it… I don’t see lxdbr0 in /etc/network/interfaces.
Regards
lxdbr0 is created by LXD, so you won’t find it in /etc/network/interfaces.
You can have it created for you using the `lxd init` command, but it sounds like you already have a bridge whose configuration you can edit. When you type:
# lxc network show lxdbr0
You should see something like this:
config:
  ipv4.address: 10.229.28.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/containers/archon
- /1.0/containers/archon2
managed: true
status: Created
locations:
- none
You can then edit this using this command:
# lxc network edit lxdbr0
I was trying to see a setup which uses MACsec with unencrypted GRE tunnels between two hosts on cloud providers. Do you have any recommendations? I haven’t seen any examples yet with Ubuntu and LXD.
I think the attach-profile command you’ve given is wrong, it should be:
lxc network attach-profile local:lxdbr0 default eth0
(you didn’t specify local:)
Also, when doing this on an existing configuration I get:
$ sudo lxc network attach-profile local:lxdbr0 default eth0
error: device already exists
Is there a way around this?
Finally, does the /etc/default/lxd-bridge config file still have any effect in LXD 2.13? The lxd-bridge service has gone away, so I was wondering is there another config file I can use to configure bridges?
I have almost the same situation as ssahlender.
I would like to know if there is an LXD mechanism that allows the created bridge to take its IP address from an external DHCP server, and likewise allows containers with interfaces attached to that bridge to get their addresses via external DHCP.
This setup works, but only with manual intervention (not with LXD instruments).
If you want to connect your containers to the outside network, macvlan is usually the easiest way to do so.
Simply “lxc network attach-profile eth0 default eth0 eth0”
The main downside of this is that your containers will not be able to talk to the host as that’s the main limitation of macvlan.
If you want to avoid that limitation, you indeed need to use bridged networking.
LXD can absolutely create a bridge without DHCP or anything on the LXD side and just bridge to a physical device, for example:
lxc network create testbr0 ipv4.address=none ipv6.address=none bridge.external_interfaces=eth0
The main issue with this approach is that this can only work if the external interface is unconfigured.
If you have configuration on that interface, then you need to setup the bridge at the system level and just tell LXD to use it.
We don’t want to have LXD start deconfiguring and reconfiguring system interfaces.
To put containers on the local LAN, do we still need to define a bridge manually in /etc/network/interfaces? Or can ‘lxc network’ do that too?
Trusty doesn’t have the lxc network command. I’ve asked in askubuntu how to achieve basic networking in trusty:
https://askubuntu.com/questions/931704/how-to-configure-the-lxd-bridge-in-trusty
Hi!
Thank you for the great tutorial! I want to set up some LXD containers with 2 NICs. So I copied the default profile and added a device (eth1 in this case) with a different bridge as parent.
When I now create a new container with this profile, the 2 NICs are visible in the new container. But only eth0 has an IPv4 address. To get an IPv4 address assigned to eth1 I have to add
auto eth1
iface eth1 inet dhcp
to /etc/network/interfaces.d/50-cloud-init.cfg. After a restart of the container, lxc list shows me that both interfaces got IPv4 addresses assigned.
How can I get LXD to automatically assign IPv4 addresses to all NICs via DHCP?
Best regards!
Andy
Did you find a solution to let LXD start the second NIC? I’m using LXD 2.18 now, still the same issue: only the first NIC is started in the container. Thanks.
Why does it have to be that when a product evolves it becomes much harder to use to do simple things?
For example, it looks like it used to be simple and easy to launch a container and have it request a DHCP address from your local LAN’s DHCP server, so that it can see the internet and be seen by all the systems on my local LAN.
I have been trying to glean this from all the different iterations of documentation for LXC/LXD (what is the difference again, and how can you tell which one you are using when it seems that you use both lxd and lxc commands?), but I fail to understand how to accomplish this most basic of tasks in the latest stable version.
I understand that there is great power presented in the new network capabilities and I hope to eventually learn how to use this great power but I am a firm believer in learning how to walk before learning how to run. The first rung of this ladder seems to be just out of my grasp. If I just continue to stumble I am afraid I will be forced to continue to use the virtual machine solutions and never realize the power of containers.
Hello,
I’m running LXD 2.0.10 and my question is: is it possible to add a second bridge to LXD?
I would like to attach different networks to different containers.
Hi Del,
Do you have a solution on that now?
I am also trying to do the same thing, but no success yet.
I am using netplan on Ubuntu 18.04 with LXD 3 to set up 2 NICs connected to 2 networks, and I created 2 network bridges, one for each NIC, using the lxc network command. However, I can only get containers on one bridge connected to its network; I can’t get containers using the other bridge to connect to the other network.
@Stéphane Graber
Thanks for your reply. With macvlan it is working fine … the containers get the ip from my local dhcp.
But just for understanding… what is the reason that it is not working with the bridged interface?
If I want to get around the disadvantages of macvlan (can’t access the parent host), and if I understand it right, the containers need to have static IPs, right? But how do I set them up (the static IPs)?
Regards
Hi, I’m in trouble… when I type lxc network show lxdbr3, for example, the system shows me a config file for this bridge, but where is it located? And can I have more than one bridge in LXD, and how can I assign one or two bridges to a container?
Thank you!
I am trying to connect the containers to the outside network using bridged networking. The issue I have with this config is that the containers can’t talk to other VMs in the same subnet, as the replies to ARP requests originating from the containers never arrive.
Here is my config:
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 20
I modified the default lxd profile to use this br0 instead of lxdbr0.
For the case of VXLAN, why do the hosts need to be on the same physical segment? Isn’t one of the benefits of VXLAN to allow you to create (i.e. tunnel) an L2 network across L3 networks?
Bonjour Stéphane,
Firstly, thanks for your work on LXC/LXD !
Then I would like to share a problem I have, if someone could help.
I made 2 LXCs on 2 different servers (the servers are on the same physical network). No problem pinging between them: the LXCs are bridged in order to reach the external network, and GRE tunnels are set up to allow communication between the 2 LXCs.
My problem comes when an SCTP connection is being established:
- 1st LXC is sending a SCTP INIT
- 2nd LXC is answering a SCTP INIT ACK
- 1st LXC is not sending a SCTP COOKIE ECHO
- of course, then, 2nd LXC is not answering a SCTP COOKIE ACK.
When running the 2 LXCs on the same server (with a bridge and no GRE tunnel, because they can reach each other directly), the full SCTP procedure completes without problem.
Is it simply a problem of the GRE tunnel preventing SCTP?
Is there some other reason linked to the LXC configuration?
What kind of test should I make to find out?
If somebody would like to help a newbie network tester…
Thank you,
Cyrille.
I am trying to configure an Ubuntu 16.04 container to have the same IP address as the host. The reason is we want to use criu to migrate an application from the host to the container and criu requires the container to have the same IP address as the host for TCP re-connection to work. Can this be done with LXD using a VLAN?