Bonding - teaming 802.3ad LACP on Debian Server

Introduction

NIC teaming presents an interesting solution for redundancy and high availability in the server/workstation computing realms. With multiple network interface cards, an administrator can be creative in how a particular server is accessed, or create a larger pipe for traffic to flow through to that server.

This guide will walk through teaming two network interface cards on a Debian 11 system. We will use the ifenslave package to attach and detach NICs from a bonded device.

The first thing to do, before any configuration, is to determine the type of bonding the system actually needs. There are seven bonding modes (0 through 6) supported by the Linux kernel as of this writing. Some of these bond modes are simple to set up, while others require special configuration on the switches to which the links connect.

Mode | Policy | How it works | Fault tolerance | Load balancing
0 | Round Robin (balance-rr) | Packets are transmitted/received sequentially through each interface, one by one. | Yes | Yes
1 | Active Backup | One NIC is active while the other NIC is asleep; if the active NIC goes down, the other NIC becomes active. | Yes | No
2 | XOR (exclusive OR) | The MAC address of the slave NIC is matched against the incoming request's MAC; once that connection is established, the same NIC is used to transmit/receive for that destination MAC. | Yes | Yes
3 | Broadcast | All transmissions are sent on all slaves. | Yes | No
4 | Dynamic Link Aggregation (802.3ad) | Aggregated NICs act as one NIC for higher throughput, while also providing failover should a NIC fail. Requires a switch that supports IEEE 802.3ad. | Yes | Yes
5 | Transmit Load Balancing (TLB) | Outgoing traffic is distributed according to the current load on each slave interface; incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over its MAC address. | Yes | Yes
6 | Adaptive Load Balancing (ALB) | Unlike Dynamic Link Aggregation, requires no particular switch configuration. Received traffic is load balanced through ARP negotiation. | Yes | Yes
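The per-destination interface choice in mode 2 (and the default layer2 transmit hash used by mode 4) can be illustrated with a XOR of the MAC addresses. A deliberately simplified sketch, using only the last byte of each MAC; the exact hash in current kernels depends on the configured xmit_hash_policy:

```shell
#!/bin/sh
# Simplified sketch of the balance-xor layer2 transmit hash: XOR the
# last byte of the source and destination MAC addresses, then take the
# result modulo the number of slaves to pick an interface.
src_last=0xae   # last byte of source MAC (e.g. 7c:c2:55:99:67:ae)
dst_last=0x76   # last byte of destination MAC (e.g. 04:d5:90:77:69:76)
nslaves=2

slave_index=$(( (src_last ^ dst_last) % nslaves ))
echo "traffic for this MAC pair uses slave $slave_index"   # prints: slave 0
```

Because the hash is a function of the MAC pair, a single flow always stays on one slave; only different MAC pairs spread across the links.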

Switch Configuration Settings Depending on the Bonding Modes

Bonding Mode Configuration on the Switch
0 - balance-rr Requires static Etherchannel enabled (not LACP-negotiated)
1 - active-backup Requires autonomous ports
2 - balance-xor Requires static Etherchannel enabled (not LACP-negotiated)
3 - broadcast Requires static Etherchannel enabled (not LACP-negotiated)
4 - 802.3ad Requires LACP-negotiated Etherchannel enabled
5 - balance-tlb Requires autonomous ports
6 - balance-alb Requires autonomous ports

Install packages and load modules

ifenslave Debian package - This is a tool to attach and detach slave network interfaces to a bonding device. A bonding device will act like a normal Ethernet network device to the kernel, but will send out the packets via the slave devices using a simple round-robin scheduler. This allows for simple load balancing, identical to the port-channel bonding or trunking techniques used in switches.

~] apt-get install ifenslave

Once the software is installed, the kernel will need to be told to load the bonding module, both for the current session and on future reboots.

~] echo 'bonding' >> /etc/modules
~] modprobe bonding

# if you want vlans:
~] echo '8021q' >> /etc/modules
~] modprobe 8021q
~] apt-get install vlan
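As an alternative to setting every option per interface, driver-level defaults can be passed to the bonding module when it loads. A sketch, assuming a file name of our own choosing under /etc/modprobe.d (any *.conf name works); per-interface bond-* settings in /etc/network/interfaces still take precedence for interfaces configured there:

```shell
# /etc/modprobe.d/bonding.conf  (hypothetical file name)
# Optional driver-level defaults for the bonding module.
options bonding mode=802.3ad miimon=100 lacp_rate=1
```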

Create the LACP 802.3ad bonded interface

Now that the kernel has been made aware of the necessary modules for NIC bonding, it is time to create the actual bonded interface. This is done through the interfaces file, located at /etc/network/interfaces.

This file contains the network interface settings for all of the network devices connected to the system. This example has two network cards (enp34s0f0 and enp34s0f1) attached to the bond0 interface:

/etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

auto bond0
iface bond0 inet manual
        address 10.254.3.1/24
        bond-mode 802.3ad
        bond-slaves enp34s0f0 enp34s0f1
        bond-miimon 100
        bond-downdelay 200
        bond-updelay 400
        bond-lacp-rate 1
        up ifconfig bond0 10.254.3.1/24 up

The bond-mode 802.3ad line (bond-mode 4 may also be used) determines which bonding mode this particular bonded interface uses. In this instance, bond-mode 802.3ad indicates that this bond is an 802.3ad (LACP) link aggregation.

miimon is one of two options available for monitoring the status of bond links, the other being ARP requests; this guide uses miimon. bond-miimon 100 tells the kernel to inspect the link every 100 ms. bond-downdelay 200 means that the system will wait 200 ms before concluding that the currently active interface is indeed down. bond-updelay 400 tells the system to wait 400 ms after a link is brought up before using the new active interface. Most importantly, updelay and downdelay must both be multiples of the miimon value; otherwise the kernel will round them down.
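The rounding rule can be sketched with shell arithmetic. The miimon value is the one used in this guide; the updelay of 450 is deliberately chosen as a non-multiple to show the effect:

```shell
#!/bin/sh
# Sketch of how the kernel rounds updelay down to a multiple of miimon.
miimon=100
updelay=450    # deliberately not a multiple of miimon

# Integer division drops the remainder, so 450 ms becomes 400 ms.
effective=$(( (updelay / miimon) * miimon ))
echo "requested ${updelay} ms, effective ${effective} ms"
```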

Check the bonded interface status

~] cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v5.10.0-9-amd64

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 400
Down Delay (ms): 200
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:1b:21:79:fe:8b
Active Aggregator Info:
        Aggregator ID: 3
        Number of ports: 1
        Actor Key: 9
        Partner Key: 17
        Partner Mac Address: 04:d5:90:77:69:76

Slave Interface: enp34s0f0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:79:fe:8b
Slave queue ID: 0
Aggregator ID: 3
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:1b:21:79:fe:8b
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 65535
    system mac address: 04:d5:90:77:69:76
    oper key: 17
    port priority: 255
    port number: 2
    port state: 61

Slave Interface: enp34s0f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:79:fe:8a
Slave queue ID: 0
Aggregator ID: 4
Actor Churn State: churned
Partner Churn State: none
Actor Churned Count: 1
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 00:1b:21:79:fe:8b
    port key: 9
    port priority: 255
    port number: 2
    port state: 7
details partner lacp pdu:
    system priority: 65535
    system mac address: 04:d5:90:77:70:12
    oper key: 17
    port priority: 255
    port number: 2
    port state: 13
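For a quick per-slave summary of output like the above, /proc/net/bonding/bond0 is easy to script around. A minimal sketch; the here-document stands in for the real file so the snippet is self-contained:

```shell
#!/bin/sh
# Print "interface: status" for each slave section of the bonding status
# file. In real use, feed awk /proc/net/bonding/bond0 instead of the
# here-document below.
awk '/^Slave Interface:/ { iface = $3 }
     /^MII Status:/ && iface { print iface ": " $3; iface = "" }' <<'EOF'
Slave Interface: enp34s0f0
MII Status: up
Slave Interface: enp34s0f1
MII Status: up
EOF
```

The `iface` guard skips the bond-level "MII Status" line, which appears before any slave section, so only per-slave states are printed.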

Create a Linux bond with the ip command from the iproute2 package

First, load the appropriate modules:

~] modprobe 8021q
~] modprobe bonding
[root@sysrescue ~]# ip link show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0f0np0:  mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff
3: enp1s0f1np1:  mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 7c:c2:55:99:67:af brd ff:ff:ff:ff:ff:ff
4: eno1:  mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 3c:ec:ef:d5:66:66 brd ff:ff:ff:ff:ff:ff
    altname enp198s0f0
5: eno2:  mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 3c:ec:ef:d5:66:67 brd ff:ff:ff:ff:ff:ff
    altname enp198s0f1

Create a bond/teaming interface named bond0:

[root@sysrescue ~]# ip link add bond0 type bond
[root@sysrescue ~]# ip link set bond0 type bond miimon 100 mode active-backup

Check that the bond0 interface appears in the /proc filesystem:

[root@sysrescue ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v6.6.14-1-lts

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: None
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
[root@sysrescue ~]# ip link set dev enp1s0f0np0 down master bond0
[root@sysrescue ~]# ip link set dev enp1s0f1np1 down master bond0
[root@sysrescue ~]# ip link set dev bond0 up       # first bring up the bond0 device!
[root@sysrescue ~]# ip link set dev enp1s0f0np0 up
[root@sysrescue ~]# ip link set dev enp1s0f1np1 up
[root@sysrescue ~]# ip link show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0f0np0:  mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff
3: enp1s0f1np1:  mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff permaddr 7c:c2:55:99:67:af
4: eno1:  mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 3c:ec:ef:d5:66:66 brd ff:ff:ff:ff:ff:ff
    altname enp198s0f0
5: eno2:  mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
    link/ether 3c:ec:ef:d5:66:67 brd ff:ff:ff:ff:ff:ff
    altname enp198s0f1
6: bond0:  mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff
[root@sysrescue ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v6.6.14-1-lts

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp1s0f0np0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: enp1s0f0np0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 7c:c2:55:99:67:ae
Slave queue ID: 0

Slave Interface: enp1s0f1np1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 7c:c2:55:99:67:af
Slave queue ID: 0
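Collected together, the ip(8) steps above look like this - a sketch to be run as root, with the interface names from this example (adjust them to match your hardware):

```shell
#!/bin/sh
# Build an active-backup bond from two NICs with iproute2 (run as root).
set -e
modprobe bonding
ip link add bond0 type bond
ip link set bond0 type bond miimon 100 mode active-backup
ip link set dev enp1s0f0np0 down master bond0
ip link set dev enp1s0f1np1 down master bond0
ip link set dev bond0 up          # bring the bond up before the slaves
ip link set dev enp1s0f0np0 up
ip link set dev enp1s0f1np1 up
```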

Create a VLAN interface with VLAN ID 88:

[root@sysrescue ~]# ip link add link bond0 name bond0.88 type vlan id 88
[root@sysrescue ~]# ip addr add 192.168.88.14/24 dev bond0.88 
[root@sysrescue ~]# ip link set dev bond0.88 up
[root@sysrescue ~]# ip addr show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: enp1s0f0np0:  mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9340:1fee:e459:bb7c/64 scope link tentative noprefixroute 
       valid_lft forever preferred_lft forever
3: enp1s0f1np1:  mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff permaddr 7c:c2:55:99:67:af
    inet6 fe80::7e42:3007:9269:3305/64 scope link tentative noprefixroute 
       valid_lft forever preferred_lft forever
4: eno1:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:ec:ef:d5:66:66 brd ff:ff:ff:ff:ff:ff
    altname enp198s0f0
5: eno2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:ec:ef:d5:66:67 brd ff:ff:ff:ff:ff:ff
    altname enp198s0f1
6: bond0:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff
    inet6 fe80::54b6:a1ff:feb5:23b/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
7: bond0.88@bond0:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:c2:55:99:67:ae brd ff:ff:ff:ff:ff:ff
    inet 192.168.88.14/24 scope global bond0.88
       valid_lft forever preferred_lft forever
    inet6 fe80::7ec2:55ff:fe99:67ae/64 scope link proto kernel_ll 
       valid_lft forever preferred_lft forever
[root@sysrescue ~]# ping 192.168.88.1
PING 192.168.88.1 (192.168.88.1) 56(84) bytes of data.
64 bytes from 192.168.88.1: icmp_seq=1 ttl=255 time=0.157 ms
64 bytes from 192.168.88.1: icmp_seq=2 ttl=255 time=0.106 ms
64 bytes from 192.168.88.1: icmp_seq=3 ttl=255 time=0.106 ms
64 bytes from 192.168.88.1: icmp_seq=4 ttl=255 time=0.100 ms
64 bytes from 192.168.88.1: icmp_seq=5 ttl=255 time=0.098 ms
^C
--- 192.168.88.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4057ms
rtt min/avg/max/mdev = 0.098/0.113/0.157/0.022 ms
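Note that configuration made with ip(8) does not survive a reboot. On a Debian system using ifupdown, a rough persistent equivalent of the VLAN interface above could be added to /etc/network/interfaces - a sketch, assuming the vlan package is installed and using the names and address from this example:

```shell
# Sketch of a persistent stanza for the VLAN interface used above.
auto bond0.88
iface bond0.88 inet static
        address 192.168.88.14/24
        vlan-raw-device bond0
```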

Sources

  • https://unixcop.com/configure-bonding-and-teaming-on-debian-11/
  • https://wiki.debian.org/Bonding
  • https://www.claudiokuenzler.com/blog/1121/debian-11-bullseye-problem-bond-bonding-lacp-interfaces
  • https://serverfault.com/questions/894828/linux-bond-mode-802-3ad-not-activated
