Networking in Ubuntu 12.04 LTS

One of my focus areas for this cycle is to get Ubuntu’s support for complex networking working in a predictable way. The idea was to review exactly what happens at boot time, draw up a list of scenarios commonly used on servers in corporate environments, and make sure these always work.

Bonding

Bonding basically means aggregating multiple physical links into one virtual link for high availability and load balancing. There are different ways of setting up such a link, though the industry standard is 802.3ad (LACP, the Link Aggregation Control Protocol). In that mode your server negotiates with your switch to establish an aggregate link, then sends monitoring packets to detect failures. LACP also does load balancing (MAC-, IP- and protocol-based, depending on hardware support).
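
For illustration, a minimal 12.04-style LACP bond in /etc/network/interfaces looks something like this (the interface names and the use of DHCP are examples, not a prescription):

    # Slaves are listed first and declare their bond via bond-master
    auto eth0
    iface eth0 inet manual
        bond-master bond0

    auto eth1
    iface eth1 inet manual
        bond-master bond0

    # The bond itself; its slaves are declared in the stanzas above
    auto bond0
    iface bond0 inet dhcp
        bond-slaves none
        bond-mode 802.3ad
        bond-miimon 100

Once the bond is up, the negotiated aggregator and per-slave status can be inspected in /proc/net/bonding/bond0.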

One problem we have had since at least Ubuntu 10.04 LTS is that Ubuntu’s boot sequence is event-based, including the bringing up of network interfaces. The good old “ifup -a” is only run at the end of the boot sequence, to catch anything that wasn’t brought up through events.

Unfortunately, that meant that if your server took a long time to detect its hardware, the bond would be initialised before the network cards had been detected, giving you a bond0 without a MAC address, making DHCP queries fail in pretty weird ways and making bridging or tagging fail with “Operation not permitted”.
As all of that depends on hardware detection timing, it was racy, giving you random results at boot time.

Thankfully, that should all be fixed in 12.04: the new ifenslave-2.6 I uploaded a few weeks ago initialises the bond whenever its first slave appears. If no slave has appeared by the time we get to the catch-all “ifup -a”, it will simply wait up to an additional minute for a slave to appear before giving up and continuing the boot sequence.
To avoid another common race condition, where a bridge is brought up with a bond as one of its members before the bond is ready, ifenslave now detects that a bond is part of a bridge and adds it only once it’s ready.

Tagging

Another pretty common thing on corporate networks is the use of VLANs (802.1q), letting you create up to 4094 virtual networks on one link (the 12-bit VLAN ID allows 4096 values, two of which are reserved).
In the past, Ubuntu relied on the catch-all “ifup -a” to create any required VLAN interface; once again, that’s a problem when an interface that depends on a VLAN interface is initialised before that VLAN interface is created.

To fix that, Ubuntu 12.04’s vlan package now ships with a udev rule that triggers the creation of the vlan interface whenever its parent interface is created.
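
In practice that means a VLAN interface just needs its own stanza; the eth0.100-style name encodes the parent, so the udev rule can create it as soon as eth0 appears (the names and addresses below are purely illustrative):

    auto eth0.100
    iface eth0.100 inet static
        address 192.168.100.1
        netmask 255.255.255.0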

Bridging

Bridging on Linux can be seen as creating a virtual switch on your system (including STP support).

Bridges have been working pretty well on Ubuntu for a while, as we’ve been shipping a udev rule similar to the vlan one for a few releases already. Members are simply added to the bridge as they appear on the system. The changes to ifenslave and the vlan package make sure that even bond interfaces carrying VLANs get properly added to bridges.
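
A typical bridge stanza, here bridging a VLAN on top of a bond (illustrative names and address, not taken from my setup):

    auto br0
    iface br0 inet static
        bridge_ports bond0.100
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        address 192.168.100.1
        netmask 255.255.255.0

With the udev rule, bond0.100 is added to br0 whenever it appears, so the ordering of the stanzas matters much less than it used to.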

Complex network configuration example

My current test setup for networking on Ubuntu 12.04 is actually something I’ve been using on my network for years.

As you may know, I’m also working on LXC (Linux Containers), so my servers usually run somewhere between 15 and 80 containers, each of which has a virtual ethernet interface that’s bridged.
I have one bridge per network zone, each network zone being a separate VLAN. These VLANs are created on top of a bond of two gigabit links.

At boot time, the following happens (roughly):

  1. One of the two network interfaces appears
  2. The bond is initialised and the first interface is enslaved
  3. This triggers the creation of all the VLAN interfaces
  4. Creating the VLAN interfaces triggers the creation of all the bridges
  5. All the VLAN interfaces are added to their respective bridge
  6. The other network interface appears and gets added to the bond

My /etc/network/interfaces can be found here:
http://www.stgraber.org/download/complex-interfaces

This contains the bare minimum needed for LACP to work. One thing worth noting is that the two physical interfaces are listed before bond0; this ensures that even if the events don’t work and we have to rely on the fallback “ifup -a”, the interfaces are initialised in the right order, avoiding the 60 second delay.

Please note that this example will only work reliably on Ubuntu Precise (to become 12.04 LTS). It’s still a correct configuration for previous releases, but race conditions may give you random results.

I’ll be trying to push these changes to Ubuntu 11.10 as they are pretty easy to backport there; however, it would be much harder, and very likely dangerous, to backport them to even older releases.
For those, the only recommendation I can give is to add something like “pre-up sleep 5” to your bridge and VLAN interfaces to make sure whatever interface they depend on exists and is ready by the time the “ifup -a” call is reached.
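
For example, on those older releases a bridge stanza could be padded like this (the 5 second value is just a guess that’s long enough for most hardware; names and addresses are illustrative):

    auto br0
    iface br0 inet static
        pre-up sleep 5
        bridge_ports bond0.100
        address 192.168.100.1
        netmask 255.255.255.0

It’s an ugly workaround, trading boot time for reliability, which is exactly what the event-based handling in 12.04 avoids.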

IPv6

Another interesting topic for 12.04 is IPv6. For a release that’ll be supported for 5 years on both servers and desktops, it’s important to get IPv6 right.

Ubuntu 12.04 LTS will be the first Ubuntu release shipping with IPv6 privacy extensions turned on by default. Ubuntu 11.10 already brought most of what’s needed for IPv6 support on the desktop and in the installer, supporting SLAAC (stateless address autoconfiguration), stateless DHCPv6 and stateful DHCPv6.
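
For reference, the privacy extensions map to the use_tempaddr sysctl; on releases where they aren’t enabled by default, something like this in /etc/sysctl.conf turns them on (the value 2 means generate temporary addresses and prefer them for outgoing connections):

    net.ipv6.conf.all.use_tempaddr = 2
    net.ipv6.conf.default.use_tempaddr = 2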

Once we get a new ifupdown into Ubuntu Precise, we’ll also have full IPv6 support for people who aren’t using Network Manager (mostly servers), which should at that point cover any IPv6 setup you may find.

Userspace has been working pretty well with IPv6 for years. I recently made my whole network dual-stack and now have all my servers and services defaulting to IPv6, for a total of 40% of my network traffic (roughly 1.5TB a month of IPv6 traffic). The only userspace problem I noticed is the lack of IPv6 support in Nagios’ nrpe plugin, meaning I can’t start converting servers to single-stack IPv6 as I’d lose monitoring.

I also wrote a Python script using pypcap that reports the percentage of IPv6 and IPv4 traffic going through a given interface. The script can be found here: http://www.stgraber.org/download/v6stats.py (start it with: python v6stats.py eth0)

What now?

At this point, I think Ubuntu Precise is pretty much ready as far as networking is concerned. The only remaining change is the new ifupdown and the small installer change that comes with it for DHCPv6 support.

If you have a spare machine or VM, now is the time to grab a recent build of Ubuntu Precise and make sure that whatever network configuration you use in production works reliably, and that any hacks you needed in the past can now be dropped.
If your configuration doesn’t work at all or fails randomly, please file a bug so we can have a look at your configuration and fix any bugs (or documentation).

About Stéphane Graber

Project leader of Linux Containers, Linux hacker, Ubuntu core developer, conference organizer and speaker.

45 Responses to Networking in Ubuntu 12.04 LTS

  1. Roger says:

    What about fixing mac addresses? I updated a machine with a Realtek and an Intel nic that was using bonding from Maverick to Natty. Linux then helpfully reprogrammed the Realtek nic to have the same Mac address as the Intel nic. (Turning off and unplugging the machine for 2 hours did not fix it.)

    Bonding refused to work with both nics having the same mac address and there is no easy way of fixing this!

    It is nice that you are fixing the dhcp issue. I had to have this:

    auto bond0
    iface bond0 inet dhcp
    pre-up ifconfig bond0 up ; /sbin/ifenslave bond0 eth0 eth1

    1. In most modes, bonding requires all network cards to share the same MAC address.
      The only case where you can have different MAC addresses on your interfaces is when using failover alone, without load balancing.

      If you only want failover and want to keep your mac addresses unchanged, you’ll need to set fail_over_mac to 1 and use the active-backup mode.
      You can learn more at: http://www.kernel.org/doc/Documentation/networking/bonding.txt
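
      As a sketch, a failover-only bond along those lines would look like this in /etc/network/interfaces (assuming ifenslave’s bond-fail-over-mac option maps to the kernel’s fail_over_mac parameter; interface names and DHCP are just examples):

          auto bond0
          iface bond0 inet dhcp
              bond-mode active-backup
              bond-miimon 100
              bond-primary eth0
              bond-fail-over-mac active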

      Having the MAC address persist after a reboot definitely sounds like a kernel bug, and a pretty bad one at that. Usually the system can temporarily override a MAC address at runtime, but that’s never saved and so shouldn’t be able to override/replace the one stored in the network card’s ROM.

      As for your current pre-up line, this one is indeed no longer needed with what’s in Precise. The bond is always brought up at the end of the initialization and the NICs are now enslaved when they appear.

      1. Roger says:

        Once the bond is up and running the mac addresses are adjusted as necessary. But the bond won’t come up if they are already the same.

        The bug is a combination of the realtek hardware and the linux kernel module clashing such that the temporary mac override becomes somewhat permanent. It was code changes between maverick and natty that did it.

  2. Gabriel says:

    Regarding IPv6 support for NRPE, I’ve found this Debian bug report:

    http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=484575

    Some people suggested patches, and then someone suggests using nrpe-core from the icinga project, but someone from icinga says that it’s not production-ready yet 🙁

    Sucks that we’d have to fork NRPE to have IPv6 support.

  3. jbartley says:

    bond1 supported?

    I’m running 11.10 Server (upgraded from 11.04), with:

    deb http://ppa.launchpad.net/stgraber/stgraber.net/ubuntu oneiric main
    deb-src http://ppa.launchpad.net/stgraber/stgraber.net/ubuntu oneiric main

    and I have to manually poke the system after boot with:

    ifdown bond1
    ifup bond1 &
    ifup eth3
    (note the “&” to put the “ifup bond1” into the background).

    At this point, things do work as they should.

    I *do* have a bond0 and bond1 in /proc/net/bonding after boot, and they do reflect the working (bond0) and non-working (bond1) status until I do this manual-poking routine.

    Is it me? Or do these scripts need a bit more tweaking?

    1. The PPA doesn’t contain all the updated packages for 11.10, it only contains what I needed for my personal network.
      I have pushed updates to 11.10; they are now waiting for approval in the queue and should be available in the coming days or weeks.

      The fact that you need to put the ifup of a bond device in the background and then bring up the slave is indeed the expected behaviour, and is documented in ifenslave-2.6’s documentation.
      You should ensure your physical interfaces are listed in /etc/network/interfaces before your bond devices; likewise, when bringing them up manually, always bring up the slaves before the bond device.

      Now as to why your setup didn’t just work and you had to take manual action, I’m not too sure. It would be helpful if you could try Ubuntu Precise (on a test machine) to see whether the other changes I made to vlan, bridge-utils and ifenslave-2.6 since the upload to my PPA fix the situation for you.

      Also, please post your /etc/network/interfaces so I can have an idea of your setup.

      Thanks

  4. jbartley says:

    The problem with testing Precise is coming up with a test machine that’s in the right place with the right connections … when it goes beta then I’ll just test on the QA firewall itself (I can usually afford to blow it off the air for a few hours on a weekend … like now).

    /etc/network/interfaces (with some repetitive parts edited out) … I should add that if I simply swap strings “bond0” and “bond1” that the result is the same, in that the resulting QA-side bond0 works and corp-side bond1 needs poking … I tried that to see if at least the rest of the config was good, and it does seem to be.

    I’ve tried various “max_bonds” things but no attempt made any difference.

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # —- General config logic:
    #
    # – eth1, eth2, bond0, and br0 are corp-side (10/16, default route on this side)
    #
    # – eth3, eth4, bond1, and br1 are QA-side (10.192/10, broken down into /22’s)
    #
    # – VLAN 3072 on each side is bridged to untagged traffic on other side
    #
    # – VLANs 3073-and-up are (will be) bridged straight through

    # —- Step #1 of 6: local loopback

    auto lo
    iface lo inet loopback

    # —- Step #2 of 6: single-link physical interfaces

    # eth1 & eth2 will be aggregated together as bond0 (corp side)

    auto eth1
    iface eth1 inet manual
    bond-master bond0
    bond-primary eth1 eth2

    auto eth2
    iface eth2 inet manual
    bond-master bond0
    bond-primary eth1 eth2

    # eth3 & eth4 will be aggregated together as bond1 (QA side)

    auto eth3
    iface eth3 inet manual
    bond-master bond1
    bond-primary eth3 eth4

    auto eth4
    iface eth4 inet manual
    bond-master bond1
    bond-primary eth3 eth4

    # —- Step #3 of 6: aggregate single-link physical interfaces into multi-link physical interfaces

    # Corp side

    auto bond0
    iface bond0 inet manual
    bond-slaves none
    bond-mode 802.3ad
    bond-miimon 100

    # QA side

    auto bond1
    iface bond1 inet manual
    bond-slaves none
    bond-mode 802.3ad
    bond-miimon 100

    # —- Step #4 of 6: assign virtual interfaces to (single- and multi-link) physical interfaces
    # (and assign primary IPv4 and/or IPv6 addresses, if/as needed)

    # Corp-side VLANs

    auto bond0.3072
    iface bond0.3072 inet manual

    # QA-side VLANs

    auto bond1.3072
    iface bond1.3072 inet manual

    # —- Step #5 of 6: bridge various physical and/or virtual interfaces, if/as needed
    # (and assign primary IPv4 and/or IPv6 addresses, if/as needed)

    # Corp side

    auto br0
    iface br0 inet static
    bridge_ports bond0 bond1.3072
    address 10.0.1.123
    netmask 255.255.0.0
    gateway 10.0.1.2
    iface br0 inet6 static
    address fd96:3833:4a56:0000::0a00:017b
    netmask 64

    # QA side

    auto br1
    iface br1 inet static
    bridge_ports bond0.3072 bond1
    address 10.192.0.1
    netmask 255.255.252.0
    iface br1 inet6 static
    address fd96:3833:4a56:0001::0ac0:0001
    netmask 64

    # —- Step #6 of 6: assign secondary IPv4 and/or IPv6 addresses, if/as needed

    # Corp side

    # (none on corp side yet)

    # QA side

    auto br1:1
    iface br1:1 inet static
    address 10.192.0.255
    netmask 255.255.252.0

    # —- End of /etc/network/interfaces

    1. Thanks for your config. I did a quick run on a clean 12.04 install.

      The config I’m using is: http://paste.ubuntu.com/814443/
      That’s essentially your config without the comments (limited screen space in the VM) and without the bond-primary calls (as these shouldn’t be needed with 802.3ad).

      As I’m running that in a virtual machine (I sadly don’t have a physical machine with 4 NICs around), I won’t be able to test 802.3ad in detail, but I can at least make sure the rest of the initialisation is done properly.

      ifconfig: http://paste.ubuntu.com/814474
      ip route show: http://paste.ubuntu.com/814475
      ip -6 route show: http://paste.ubuntu.com/814475

      During these tests I actually noticed a small bug in ifenslave where there was a possible race condition between the bonding kernel module loading and ifenslave trying to create a bond interface.
      I fixed this in Precise in 1.1.0-19ubuntu5 and may consider pushing it to -updates for 11.10 once the current ifenslave has been approved, should someone else see that issue. The bug has always been around; it probably only became visible for me because I’m running my VM entirely from RAM.

      So it looks like your config should work fine on 12.04, and possibly on 11.10 once the new vlan, bridge-utils and ifenslave-2.6 find their way into -updates.

      1. jbartley says:

        Thanks. I’ll see what I can do about setting up an isolated VM that won’t be exactly the same but will hopefully reproduce the issue starting with straight 11.10, and then see what happens as I add your PPA to that, and then also what upgrading to Precise does.

  5. Scott says:

    Hi!

    I found this blog entry in trying to resolve VLAN tagging problems with maverick and beyond. It appears there was a change in kernel 2.6.32-33 or so that broke VLAN tagging and it has not yet been fixed.

    I configure the vlan using “vconfig add eth0 201”, then in interfaces as eth0.201. Tagged packets appear never to be processed – they appear to pass directly to eth0. Checking /proc/net/vlan/eth0.201 indicates 0 packets received, and tcpdump on eth0 shows those packets whilst tcpdump on eth0.201 does not. The reorder_hdr flag has no effect.

    It is very frustrating – I know I’m not the only one trying to use tagged VLANs with Ubuntu, but it appears no one has resolved or fixed this problem.

    I am surprised you have not run into it with the configuration you have described here.

    I’ve tested with maverick, natty, oneiric, and a precise build from 2012-03-12.

  6. Scott says:

    Hi, ignore my previous comment – I do have it working properly in precise now. I started over from fresh and double checked each step.

    Thanks!

  7. Andrea says:

    I’ve read your interfaces file and I have the following doubt:

    – What is the difference between declaring the following:
    “iface eth[0-1] inet manual
    bond-master bond0
    [..]
    iface bond0 inet manual
    bond-slaves none”

    VS

    “iface eth[0-1] inet manual
    {nothing after}
    [..]
    iface bond0 inet manual
    bond-slaves eth0 eth1”

    ?!

    As an aside, I’ve also tried to increase the MTU @ 9000, but it doesn’t seem to work in this exotic scenarios…

    1. Short answer: we support the first, not the second.

      Ubuntu uses event-based network initialization, which means an interface gets initialized when the kernel says it has appeared. Obviously we only get such events for physical devices, so what triggers the bond initialization is a slave appearing.
      The best way to get things right from our side was to have everything we need to figure out the bond in the slave’s entry, which we have with bond-master.

      If we wanted to support the second case, using bond-slaves, then for any interface showing up we’d need to iterate through all the /etc/network/interfaces entries, try to figure out whether it’s a bond member, and then set up only part of that bond, as you can’t be sure the other interfaces are ready to be bonded too.

      From the README file:
      “””
      Previous versions of the package supported specifying the slaves all in the
      stanza for the bonding interface, using the “bond_slaves” option. However,
      in such a configuration there is a race condition between bringing up the
      hardware driver for the ethernet devices and attempting to bring up the
      bonded interface; the bonding interface needs to be initiated from the slave
      interfaces instead.
      “””

      /usr/share/doc/ifenslave-2.6/README.Debian.gz and /usr/share/doc/ifenslave-2.6/examples/ are a must-read for anyone trying to set up a complex bond, as there are a lot of options that don’t always do the same thing depending on the bonding mode. http://www.kernel.org/doc/Documentation/networking/bonding.txt covers the kernel side of these options.

      1. Lioh says:

        Is this really still valid? From my observations it’s working fine if the hardware interfaces are not configured at all and no bond-master is set, but bond-slaves is configured in the bonding device configuration.

  8. Andrea says:

    Hi Stéphane,

    fantastic reading material indeed.

    Here’s my /etc/network/interfaces:

    auto eth0
    iface eth0 inet manual
    bond-master bond0
    pre-up ifconfig eth0 mtu 9000

    #auto eth1
    #iface eth1 inet manual
    # bond-master bond0
    # pre-up ifconfig eth1 mtu 9000

    iface bond0 inet manual
    bond-mode 802.3ad
    bond-miimon 100
    bond_slaves none
    pre-up ifconfig bond0 mtu 9000

    auto br10
    iface br10 inet manual
    bridge_ports bond0.10
    bridge_stp off

    auto bond0.10
    iface bond0.10 inet static
    address 192.168.10.10
    netmask 255.255.255.0

    – During boot I get the dreaded 60 secs. delay.

    – After boot, my ifconfig looks like so:

    bond0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
    UP BROADCAST MASTER MULTICAST MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

    br10 Link encap:Ethernet HWaddr 02:54:AD:22:C1:E7
    inet6 addr: fe80::907b:e7ff:fe4c:f7b3/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:0 (0.0 B) TX bytes:468 (468.0 B)

    So, basically, no MTU 9000 and eth0 seems down.

    P.S.: my eth0 uses e1000e.

  9. andy says:

    thank you for putting this page together, it helped me to sanity test a configuration that was proving problematic. it appears that creating a bridge on a bonded interface fails when using dhcp. i logged Bug #1003656 with a workaround.

  10. peter says:

    Are double tagged VLAN interfaces, aka q-in-q, supported on ubuntu 12.04?

  11. Dave says:

    Hi Stéphane,

    I am trying to setup VLANs on a bonded interface on precise but networking fails to come up when booting. I can’t see what’s wrong and I was hoping you might be able to point out what the problem is. This is my interfaces:


    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    bond-master bond0

    auto eth1
    iface eth1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    bond-master bond0

    auto bond0
    iface bond0 inet static
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves none

    auto vlan10
    iface vlan10 inet static
    vlan_raw_device bond0
    address 192.168.1.1
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.5
    dns-nameservers 2001:abcd:abcd:abcd::1 192.168.1.1
    dns-search daveoxley.co.uk

    iface vlan10 inet6 static
    vlan_raw_device bond0
    address 2001:abcd:abcd:abcd::1
    netmask 64
    up ip -6 route add 2001:abcd:abcd:abcd/56 dev vlan10
    down ip -6 route del 2001:abcd:abcd:abcd/56 dev vlan10

    auto vlan4
    iface vlan4 inet static
    vlan_raw_device bond0
    address 192.168.2.1
    netmask 255.255.255.252
    network 192.168.2.0
    broadcast 192.168.2.3

    auto vlan5
    iface vlan5 inet static
    vlan_raw_device bond0
    address 192.168.0.1
    netmask 255.255.255.252
    network 192.168.0.0
    broadcast 192.168.0.3

    auto vlan6
    iface vlan6 inet manual
    vlan_raw_device bond0
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

    auto ppp0
    iface ppp0 inet ppp
    unit 0
    pre-up /sbin/ifconfig vlan5 up # line maintained by pppoeconf
    provider internode-provider
    up ip -6 route add 2000::/3 dev ppp0
    down ip -6 route del 2000::/3

    auto ppp1
    iface ppp1 inet ppp
    unit 1
    pre-up /sbin/ifconfig vlan6 up # line maintained by pppoeconf
    provider dcsi-provider


    Jul 25 21:50:02 baldrick kernel: [ 12.009768] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
    Jul 25 21:50:02 baldrick kernel: [ 12.011794] 8021q: adding VLAN 0 to HW filter on device eth0
    Jul 25 21:50:02 baldrick kernel: [ 12.093844] ADDRCONF(NETDEV_UP): eth1: link is not ready
    Jul 25 21:50:02 baldrick kernel: [ 12.093847] 8021q: adding VLAN 0 to HW filter on device eth1
    Jul 25 21:50:02 baldrick kernel: [ 12.094348] igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
    Jul 25 21:50:02 baldrick kernel: [ 12.097506] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Jul 25 21:50:02 baldrick kernel: [ 12.222062] bonding: Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
    Jul 25 21:50:02 baldrick kernel: [ 12.239492] ADDRCONF(NETDEV_UP): bond0: link is not ready
    Jul 25 21:50:02 baldrick kernel: [ 12.239495] 8021q: adding VLAN 0 to HW filter on device bond0
    Jul 25 21:50:02 baldrick kernel: [ 12.239984] 8021q: VLANs not supported on bond0
    Jul 25 21:50:02 baldrick kernel: [ 12.245991] 8021q: VLANs not supported on bond0
    Jul 25 21:50:02 baldrick kernel: [ 12.252015] 8021q: VLANs not supported on bond0
    Jul 25 21:50:02 baldrick kernel: [ 12.258194] 8021q: VLANs not supported on bond0

    1. Nothing obviously wrong there, though a few suggestions:
      – The pre-up/post-down shouldn’t be needed, I believe we have updated the scripts to bring the interface up for you
      – You really should avoid calling your ppp interfaces pppX in /etc/network/interfaces, as it causes side effects when pppX disappears from the system but ifupdown still believes it’s up. You could simply rename ppp0 to internode-provider and ppp1 to dcsi-provider; that should work fine (and the interfaces created by pon will still be ppp0 and ppp1).
      – Did you try using bond0.VLAN instead of using vlan_raw_device? I’m wondering if that wouldn’t avoid any potential race condition in that area.

      What’s the result of doing “ifdown -a” and then “ifup -a”? Does that bring everything online properly?

      1. Dave says:

        Hi Stéphane,

        Thanks for the help. I’ve made the recommended changes to my interfaces 😉

        Unfortunately using bond0.VLAN for my interfaces has the same effect during boot. It just hangs and eventually boots the machine with no networking.

        ifup -a however did bring up the interfaces as expected. I was using bond0.VLAN naming for this test.

        Here’s the interfaces that I tried:


        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet manual

        auto eth1
        iface eth1 inet manual
        bond-master bond0

        auto eth2
        iface eth2 inet manual
        bond-master bond0

        auto bond0
        iface bond0 inet manual
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves none

        auto bond0.10
        iface bond0.10 inet static
        address 192.168.1.1
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        dns-nameservers 2001:abcd:abcd:abcd::1 192.168.1.1
        dns-search daveoxley.co.uk

        iface bond0.10 inet6 static
        address 2001:abcd:abcd:abcd::1
        netmask 64
        up ip -6 route add 2001:abcd:abcd:abcd/56 dev vlan10
        down ip -6 route del 2001:abcd:abcd:abcd/56 dev vlan10

        auto bond0.4
        iface bond0.4 inet static
        address 192.168.2.1
        netmask 255.255.255.252
        network 192.168.2.0
        broadcast 192.168.2.3

        auto bond0.5
        iface bond0.5 inet static
        address 192.168.0.1
        netmask 255.255.255.252
        network 192.168.0.0
        broadcast 192.168.0.3

        auto bond0.6
        iface bond0.6 inet manual

        auto internode-ppp
        iface internode-ppp inet ppp
        unit 0
        pre-up /sbin/ifconfig bond0.5 up # line maintained by pppoeconf
        provider internode-provider
        up ip -6 route add 2000::/3 dev ppp0
        down ip -6 route del 2000::/3

        auto dcsi-ppp
        iface dcsi-ppp inet ppp
        unit 1
        pre-up /sbin/ifconfig bond0.6 up # line maintained by pppoeconf
        provider dcsi-provider

        Cheers,
        Dave.

  12. Scott says:

    Stephane!
    Great reading, thank you for all your work.

    I wonder if you can sanity check the configuration below…

    I’m trying to get several boxes with Precise server configured for hosting KVM virtual machines.
    Two or four NICS bonded with tagged vlan’s and three bridges.
    using the interfaces file below I can manually start and stop networking and get the interfaces up but they do not survive a reboot and I get the dreaded “waiting 60 more seconds for network…”

    Also KVM guests have intermittent connectivity when hosted on this box.

    I get “received packet with own address” errors in the host logs as well.

    ##################interfaces###################### #
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet manual
    bond-master bond0

    auto eth1
    iface eth1 inet manual
    bond-master bond0

    auto eth2
    iface eth2 inet manual
    bond-master bond0

    auto eth3
    iface eth3 inet manual
    bond-master bond0

    auto bond0
    iface bond0 inet manual
    bond-slaves none
    bond-mode 2
    bond-miimon 100

    # The primary network interface
    auto vlan168
    iface vlan168 inet manual
    vlan_raw_device bond0

    # The private net
    auto vlan169
    iface vlan169 inet manual
    vlan_raw_device bond0

    # Storage network
    auto vlan340
    iface vlan340 inet manual
    vlan_raw_device bond

    auto br168
    iface br168 inet static
    address xxxx.xxxx.xxxx.6
    netmask 255.255.255.128
    network xxxx.xxxx.xxxx.0
    gateway xxxx.xxxx.xxxx.1
    # dns-* options are provided by the resolvconf package if installed
    dns-nameservers xxxx.xxxx.xxxx.xxxx
    dns-search search.domain
    bridge_ports vlan168
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp on

    auto br169
    iface br169 inet static
    address xxxx.xxxx.xxxx.134
    netmask 255.255.255.128
    gateway xxxx.xxxx.xxxx.129
    bridge_ports vlan169
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp on

    auto br340
    iface br340 inet static
    address xxxx.xxxx.xxxx.6
    netmask 255.255.255.128
    gateway xxxx.xxxx.xxxx.1
    bridge_ports vlan340
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp on
    ######################interfaces################## ######

    Like I said I can get the interfaces up by consoling in and issuing:

    service networking stop/start
    or
    /etc/init.d/networking stop/start

    In order to get the interfaces up.

    lscpi:

    03:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    03:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    04:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
    04:00.1 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)

    cat /proc/net/vlan/config:

    VLAN Dev name | VLAN ID
    Name-Type: VLAN_NAME_TYPE_PLUS_VID_NO_PAD
    vlan168 | 168 | bond0
    vlan169 | 169 | bond0
    vlan340 | 340 | bond0

    sudo lsmod | grep 8021:

    8021q 24084 0
    garp 14602 1 8021q

    some output from dmesg:

    bonding: bond0: Warning: clearing HW address of bond0 while it still has VLANs.
    [ 5645.900362] bonding: bond0: When re-adding slaves, make sure the bond’s HW address matches its VLANs’.
    [ 5649.398476] 8021q: adding VLAN 0 to HW filter on device bond0
    [ 5649.399284] 8021q: VLANs not supported on bond0
    [ 5649.407286] 8021q: VLANs not supported on bond0
    [ 5649.415180] 8021q: VLANs not supported on bond0
    [ 5649.442451] 8021q: adding VLAN 0 to HW filter on device bond0

    Any help would be greatly appreciated!

    1. Scott says:

      Hey Stephane – turns out if I use bond-mode 1 the interfaces come up as expected – they’ve been stable for 12 hours or so, we’ll see – sorry for the noise on your blog!
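
    For reference, the bond + VLAN + bridge layering being debugged in this thread can be sketched as a 12.04-style /etc/network/interfaces fragment. All names and addresses below are placeholders, and the bond uses the bond-master / bond-slaves none pattern so that ifenslave can enslave the NICs as they appear:

```
# Hypothetical sketch -- placeholder names and addresses
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet manual
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves none

# VLAN 168 carried on top of the bond
auto vlan168
iface vlan168 inet manual
    vlan-raw-device bond0

# Bridge holding the VLAN interface
auto br168
iface br168 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bridge_ports vlan168
    bridge_fd 0
    bridge_maxwait 0
```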

  13. Nicola V says:

    Hi Stéphane, nice writeup, thanks!
    We are using a similar networking file on our 10.04 machines, but we’re naming the bridges “vlan” instead of “br”, like this:

    auto vlan160
    iface vlan160 inet manual
    bridge_ports bond0.160
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp off

    This works perfectly on 10.04 but it started to behave oddly after the upgrade to 12.04. It seems like the bridges don’t come up completely, some of them are left behind (we’re using 6 of them).

    Changing the interface name from “vlan” to “bridge” fixed the problem. Is the parsing of the “vlan” name deprecated in 12.04?
    I’m asking since we have a completely automated setup with puppet and the name change would imply a massive name change in our infrastructure, with huge consequences.

    Thanks 🙂
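
    Nicola’s workaround amounts to the rename sketched below (hypothetical interface names). A plausible cause, though unconfirmed here, is that the vlan package’s if-pre-up hook matches interface names of the form vlanNNN and tries to handle the bridge as a VLAN device:

```
# Before (worked on 10.04): the name "vlan160" presumably matches
# the vlan package's if-pre-up hook, clashing with the bridge setup
auto vlan160
iface vlan160 inet manual
    bridge_ports bond0.160
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp off

# After (12.04): a neutral bridge name avoids the clash
auto br160
iface br160 inet manual
    bridge_ports bond0.160
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp off
```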

  14. Mark says:

    We ran head-first into many of the problems mentioned in this post when upgrading from 9.10 to 10.04. Had that falling feeling every time we upgraded a server and realized that bandwidth would once again be an issue.

    Just revisited the issue with a new server with four Intel NICs in it. Was pleased as punch to see everything working as expected…and then I rebooted. None of the slaves came up. Restarted networking and everything is peachy. Rebooted again. This time, just one of the interfaces comes up.

    I would be happy to help in any way to see this issue resolved.

    If it is any consolation, I’ve had no luck going upstream to get around this. And on a machine with only two NICs, I’ve had no issues. But with four NICs…no dice.

    1. Can you file a bug at https://launchpad.net/ubuntu/+source/ifenslave-2.6/+filebug ?
      Be sure to attach /etc/network/interfaces, a tarball of /var/log/upstart/ and /var/log/syslog

      That should give me enough of a clue as to what’s going on on your system because the work I did was specifically made to avoid such situations…

      1. Mark says:

        Sure can. Since it’s been a while since I went upstream on this problem, I’m also installing the current version of Debian on an identical machine.

        1. Mark says:

          The bug is at https://bugs.launchpad.net/ubuntu/+source/ifenslave-2.6/+bug/1078387

          I’ve been all over the map with this one. Considered dumping Ubuntu for Debian (but Debian’s packages are ancient) and then Arch (but that would be painful for production servers).

          Sense prevailed and I’ll probably just restart networking in rc.local. But I’d much rather contribute to a fix.

          1. Mark says:

            So I set up an Ubuntu 12.04 LTS from scratch (toasting an Arch install), installed emacs, installed ifenslave, set up bonding, confirmed that it worked, and rebooted. All interfaces in the bond came up. Rebooted again. They came up again. Moved to a different set of four ports configured for 802.3ad on the switch. It worked. Rebooted. It worked.

            I’ll be trying to repeat the process on an identical box plugged into the same switch. Will also be comparing the configuration that works with the one I uploaded here. As I don’t think I did anything differently.

            Tentative hopefulness.

          2. Hmm, that’s odd that reinstalling magically fixed it…
            Anyway, sorry for not looking at your bug report yet but I’ve been busy with other things this week. I hope to have time to go through the new network related bug reports next week.

    2. Mark says:

      No worries on the not looking into that. It’s not like I’m paying Canonical for service…yet. (But I am expecting a call-back regarding that tomorrow.)

      Had a close look at the machine where bonding worked. The configuration files were exactly as they were when it didn’t work.

      Also set up another machine in a similar manner and confirmed that bonding works on it.

      The difference is that I let the Ubuntu installer automatically set up partitions and LVM volumes. Normally, I manually set up separate /boot, /usr, /var, /tmp, and /home partitions (LVM volumes, actually). I can’t be sure yet, but I think that is the issue, or at least related to it.

      I’ll be breaking out /var, /tmp, and /home to separate partitions/volumes shortly. (Trying to do so manually during install has caused havoc when UEFI is thrown into the mix.) My guess is that won’t cause any issues. However, my bet is that if I break out /usr, things will go wrong.

      1. Hmm, that indeed could be a problem.

        In theory everything that’s needed for networking should be outside of /usr, but I don’t remember testing that theory with something other than a “standard” network configuration.

        If that’s indeed the issue, then we just need to figure out which binaries need to be moved out of /usr, should be pretty easy to fix then.

  15. Stéphane, would you mind posting some of the LXC configuration you use with the /etc/network/interfaces you posted? I’d like to be able to relate the two as I’m trying to get to grips with how to setup LXC networking for my setup.

  16. Tero Kilkanen says:

    I set up bonding for active-backup failover on Ubuntu 12.04.1 LTS.

    However, with upstart bringing the interfaces up in random order, I wasn’t able to get all of them up properly during boot.

    I tried several ways of putting the settings in /etc/network/interfaces. Finally I had to resort to this config:

    /etc/network/interfaces:

    ----
    auto bond0
    iface bond0 inet static
    bond-slaves none
    bond-mode active-backup
    address x.x.x.x
    netmask x.x.x.x
    broadcast x.x.x.x
    gateway x.x.x.x

    iface eth0 inet manual
    bond-master bond0
    bond-primary eth0

    iface eth1 inet manual
    bond-master bond0
    bond-primary eth0
    ----

    And then bring up the eth0 / eth1 in /etc/rc.local:

    ----
    ifup eth0
    ifup eth1
    ----

    With this setup, eth0 and eth1 join the bond properly, and the primary interface for the bond is set up correctly.

    Something should be done in the upstart network configuration so that the slave interfaces are configured only after the bond itself has been started. That is, setting up the bond interface would generate a “Master ready” event, and that event would be required before the slave interfaces can start.

    – Tero
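
    Tero’s “Master ready” idea can be approximated with a small upstart job keyed on the udev event for the bond device. A hypothetical sketch, assuming the upstart-udev-bridge emits net-device-added when bond0 appears:

```
# /etc/init/bond0-slaves.conf -- hypothetical sketch
# Bring the slaves up only once the bond device itself exists
start on net-device-added INTERFACE=bond0
task
exec ifup eth0 eth1
```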

  17. Sumit Kumar says:

    Hi Stéphane

    I set up IPv6 on eth0 in my Ubuntu 12.04 using the command

    ifconfig eth0 inet6 add 2001:db8:fedc:cdef::1/64

    but when I try to ping eth0 itself using

    ping6 2001:db8:fedc:cdef::1

    it always gives

    PING 2001:db8:fedc:cdef:0:0:0:1(2001:db8:fedc:cdef::1) 56 data bytes
    From ::1 icmp_seq=1 Destination unreachable: Address unreachable
    From ::1 icmp_seq=2 Destination unreachable: Address unreachable
    From ::1 icmp_seq=3 Destination unreachable: Address unreachable

    I think it is automatically pinging from ::1 to 2001:db8:fedc:cdef::1

    Command

    ip addr show dev eth0

    it gives

    2: eth0: mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:1b:38:a1:a2:50 brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:fedc:cdef::1/64 scope global tentative
    valid_lft forever preferred_lft forever

    Command

    ip -6 route

    it gives

    2001:db8:fedc:cdef::/64 dev eth0 proto kernel metric 256
    fe80::/64 dev eth0 proto kernel metric 256

    Command

    ip6tables -L

    it gives

    Chain INPUT (policy ACCEPT)
    target prot opt source destination

    Chain FORWARD (policy ACCEPT)
    target prot opt source destination

    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination

    Command

    ip6tables -F
    it gives nothing.

    Please help me solve this.
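
    The `ip addr` output above shows eth0 in state DOWN with the address still “tentative” (duplicate address detection has not completed), which is likely why ping6 answers from ::1. Bringing the link up first, e.g. with `ip link set eth0 up`, should let the address leave the tentative state. For a persistent setup, a sketch of the equivalent stanza (same address as above):

```
# /etc/network/interfaces -- hypothetical static IPv6 stanza
auto eth0
iface eth0 inet6 static
    address 2001:db8:fedc:cdef::1
    netmask 64
```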

  18. Tsaavik says:

    Ubuntu 12.04.2 LTS. No matter what I try, I still need the:
    post-up ifenslave bond0 eth0 eth1

    Or my bond0 never comes up 🙁

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth0
    iface eth0 inet manual
    bond-master bond0

    # The secondary network interface
    auto eth1
    iface eth1 inet manual
    bond-master bond0

    auto bond0
    iface bond0 inet static
    bond-slaves none
    bond-mode 5
    bond-miimon 100
    address 10.64.49.27
    gateway 10.64.49.1
    netmask 255.255.255.0
    dns-nameservers 10.64.49.12
    post-up ifenslave bond0 eth0 eth1

  19. Tsaavik says:

    Turns out that the config above would bring the bond0 interface up as mode 0!
    The following config actually works and the interfaces come up with no external calls!

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The mode 5 bonded interface
    auto bond0
    iface bond0 inet static
    bond-slaves none
    bond-mode balance-tlb
    bond-miimon 100
    address 10.64.49.120
    netmask 255.255.255.0
    gateway 10.64.49.1
    dns-nameservers 10.64.49.12
    dns-search example.com

    # The primary network interface
    auto eth0
    iface eth0 inet manual
    bond-master bond0

    # The secondary network interface
    auto eth1
    iface eth1 inet manual
    bond-master bond0

  20. Nicola V says:

    Hi Stéphane, shouldn’t this nice guide be merged with the ubuntu server documentation? I’m sure a full example with bridging, bonding and tagging would be welcome in the official docs. That would be the first place a sysadmin would look for information. Having info scattered across blogs is not really convenient.

    Thanks

  21. Alfonso says:

    I need some help. Here is the situation: I’ve got two PCs running Ubuntu 12.04, each with 4 NICs.

    PC1 (local site)           PC2 (remote site)
    eth0 ---- LAN              eth0 ---- LAN
    eth1 --\                       /-- eth1
            bond0 ------ bond0
    eth2 --/                       \-- eth2
    eth3 (not used)            eth3 (not used)

    I created bond0 (balance-rr) with eth1 and eth2, and a single-stream iperf test between PC1 and PC2 gives 1.89 Gbps, so the aggregation works.
    Then I bridged bond0 and eth0, and this time when I tried iperf from LAN to LAN I got timeouts. I resolved the issue by creating bond1 with eth0 and eth3 and then bridging bond0 and bond1. The connection is stable, but the data rate maxes out around 400 Mbps with a single iperf stream LAN to LAN. To get near 800 Mbps I need to run iperf with 7 parallel streams, which somewhat defeats the purpose of the aggregation. This is the interfaces file:
    # The loopback network interface
    auto lo
    iface lo inet loopback

    # Bond0
    auto eth0
    iface eth0 inet manual
    bond-master bond0
    auto eth3
    iface eth3 inet manual
    bond-master bond0
    auto bond0
    iface bond0 inet manual
    bond-mode balance-rr
    bond-miimon 100
    bond-slaves none

    # Bond1
    auto eth1
    iface eth1 inet manual
    bond-master bond1
    auto eth2
    iface eth2 inet manual
    bond-master bond1
    auto bond1
    iface bond1 inet manual
    bond-mode balance-rr
    bond-miimon 100
    bond-slaves none
    # Bridge
    auto br0
    iface br0 inet static
    address 192.168.169.10
    netmask 255.255.255.0
    bridge_ports bond0 bond1

    Why is the bridge not performing?
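
    One likely factor is that balance-rr delivers packets out of order across the slaves, and TCP treats heavy reordering as loss, which caps single-stream throughput. The kernel’s bonding documentation suggests raising the reordering threshold for round-robin bonds, e.g.:

```
# /etc/sysctl.conf -- illustrative value; the kernel default is 3
net.ipv4.tcp_reordering = 127
```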

  22. Alfonso says:

    I also noted, by arranging several connections, that the data rate drops by ~200 Mbps per NIC.
    Let’s say: Test 1 (no bonding – bridge only)
    PC1 (local) ................................ PC2 (remote)
    LAN -- eth0 - bridge - eth1 ------- eth1 - bridge - eth0 -- LAN
    iperf test LAN-LAN, single stream: 200 Mbps
    iperf test LAN-LAN, 4 parallel streams: 890 Mbps

    Any idea?

  23. Alfonso says:

    Let’s say: Test 2
    PC1
    LAN -- eth0 - bridge - eth1 -- LAN
    iperf test LAN-LAN, single stream: ~400 Mbps
    iperf test LAN-LAN, 2 parallel streams: 890 Mbps

  24. Didik Susilo says:

    Bonding works great on 64-bit Ubuntu Server 12.04.3.
    My configuration looks like:
    # eth0 as slave
    auto eth0
    iface eth0 inet manual
    bond-master bond0

    # eth1 as slave
    auto eth1
    iface eth1 inet manual
    bond-master bond0

    # The primary network interface
    auto bond0
    iface bond0 inet static
    address 192.168.2.80
    netmask 255.255.255.0
    network 192.168.2.0
    broadcast 192.168.2.255
    gateway 192.168.2.108
    slaves eth0 eth1
    # mtu 7000
    bond-mode balance-rr
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    dns-nameservers 192.168.2.108

    I’m using AoE. Configuration in /etc/vblade.conf:
    bond0 1 1 /drive-f.vdi

    The problem appears on reboot; there are messages like this:
    unregister_netdevice: waiting for bond0 to become free. Usage count = 1
    These messages keep appearing and the reboot process gets stuck.

    I tried killing the vblade process before rebooting,
    but the problem is still the same.
    Any solutions please...
    Thanks
