Appliance - Bonding the NIC

SUMMARY

This article explains how to bond NICs on Recovery Series appliances.  This is an advanced process intended to be used only under the approval and direction of Unitrends senior support.  

ISSUE

If you are seeking to run backups faster, 10G solutions will in theory allow faster aggregate speeds, but note that individual backup performance is limited by client performance and switch overhead, and real speeds in excess of 10 Gbit are rarely seen outside of theoretical testing.  Most Windows systems cannot saturate 1 Gbit, let alone 10.  Aggregate speeds beyond 10 Gbit are attained only by running multiple concurrent jobs, and it typically takes protection of several concurrent high-performance systems or Virtual Nodes to even exceed a single physical connection.  The intent of bonding is therefore not to increase performance, but only to add network resiliency.

Please note, connecting multiple NICs in the same VLAN on different IPs is not a bond.  Never connect multiple NICs to the same routable VLAN unless using bonding.  A bond is a software PLUS switch configuration that presents a single IP and a single virtual MAC address across multiple adapters as a single object.  Connecting multiple NICs to the same VLAN without bonding creates a TCP/IP configuration that most switches will not support and that Linux itself does not support.  Improperly connecting several adapters in one subnet can lead to ARP broadcast storms, severe performance degradation of your entire network, connection instability, MAC confusion in switch infrastructure, and disconnection of services.  If Unitrends staff identify more than one NIC in the same VLAN without bonding, they will ask that the redundant NICs be disabled unless a true bonding configuration is possible in your environment.  When using multiple adapters properly, it is important that each be connected to an independent non-routable VLAN, with a gateway configured on only one VLAN.

RESOLUTION

UEB virtual systems do not support bonding; bonding should be done at the host level, not the guest level.

IMPORTANT: Before attempting to set up bonding, patch the cmc_bonding script using the following commands:

mv -f /usr/bp/bin/cmc_bonding /usr/bp/bin/orig.cmc_bonding
wget -q https://sftp.kaseya.com/utilities/cmc_bonding -O /usr/bp/bin/cmc_bonding
chmod +x /usr/bp/bin/cmc_bonding

Unitrends supports bonding NICs of the same type only: either two onboard NICs or two SFP ports, and all NICs in a bond must run at the same speed.  Never bond a mix of onboard and SFP ports, or of 1G and 10G NICs, in a single bond.

IMPORTANT: Bonding is not only a software feature of the NIC; it requires switch-side support and manual switch configuration to function.  DO NOT configure bonding on the appliance unless you have verified your switch can support bonding.  Unitrends cannot assist in verifying this.  Most common switches DO NOT support bonding, or, if they do, require additional licensing purchases for failover to function.

NOTE: Disconnect network connections from the appliance before executing the bonding script.

With the above understood, if a bond is still appropriate to ensure network reliability in the event of a single link disruption, configure bonding on a Unitrends backup appliance using /usr/bp/bin/cmc_bonding:

Usage: cmc_bonding action [args]

action           args
----------       --------------------------------------------------------------
create           bond-name mode miimon ipaddress gateway netmask slavesX...slavesY
                 - Creates bonding device bond-name.  Minimum of 2 slaves, maximum of 5
destroy          bond-name
                 - Destroys bonding device bond-name
add              bond-name slavesX...slavesY
                 - Adds slaves to existing bond-name.  Minimum of 1 slave, maximum of 3
remove           bond-name slavesX...slavesY
                 - Removes slaves from existing bond-name.  Minimum of 1 slave, maximum of 3
mode             bond-name mode
                 - Changes bonding mode of device bond-name
view_config      bond-name
                 - Prints bonding device bond-name's configuration
list_slaves      bond-name
                 - Shows list of registered slaves of bond-name
list_bonds
                 - Shows list of created bonding devices

An example of creating an active/active bond without LACP (mode 0, balance-rr) using the cmc_bonding script is as follows:

/usr/bp/bin/cmc_bonding create bond0 0 100 192.168.101.35 192.168.101.3 255.255.255.0 eth1 eth2 eth3
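After creating a bond, its runtime state can be checked from the kernel bonding driver's status file at /proc/net/bonding/bond0. The snippet below is a minimal sketch that parses a captured sample of that file; the sample data and the up_count variable are illustrative, not output from a real appliance:

```shell
# On the appliance, view the live bond state with:  cat /proc/net/bonding/bond0
# Here we parse a captured sample of that file (illustrative data only)
# to count how many slaves the driver currently reports as link-up.
sample='Bonding Mode: load balancing (round-robin)
Slave Interface: eth1
MII Status: up
Slave Interface: eth2
MII Status: up
Slave Interface: eth3
MII Status: down'

# Each slave entry carries its own "MII Status" line; count the ones that are up.
up_count=$(printf '%s\n' "$sample" | grep -c '^MII Status: up')
echo "slaves up: $up_count"
```

A healthy bond shows every slave with MII Status: up; a slave reported down indicates a failed link or cable.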

 

Bonding Attributes

Miimon:

Specifies the MII link monitoring frequency in milliseconds. This determines how often the link state of each slave is inspected for link failures. A value of zero disables MII link monitoring. A value of 100 is a good starting point.  Discuss other optional values for this setting with your switch vendor.  

Mode:

Specifies the kind of protocol used by the bonding driver for its slaves.

The following information on bonding modes is provided for reference only:

  • Mode 0 (balance-rr):  This mode transmits packets in a sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface the first will be transmitted on the first slave and the second frame will be transmitted on the second slave. The third packet will be sent on the first and so on. This provides load balancing and fault tolerance.
  • Mode 1 (active-backup):  This mode places one of the interfaces into a backup state and will only make it active if the link is lost by the active interface. Only one slave in the bond is active at a time; a different slave becomes active only when the active slave fails. This mode provides fault tolerance.
  • Mode 2 (balance-xor): Transmits based on the selected transmit hash policy, which can be altered via the xmit_hash_policy option. This mode provides load balancing and fault tolerance.
  • Mode 3 (broadcast): Transmits everything on all slave interfaces. This mode provides fault tolerance.
  • Mode 4 (802.3ad): IEEE 802.3ad Dynamic link aggregation policy (LACP) [Port-channel]. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
  • Mode 5 (balance-tlb): Adaptive transmit load balancing. Establishes channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
  • Mode 6 (balance-alb): Adaptive load balancing. Includes balance-tlb transmit load balancing plus receive load balancing for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond. Thus, different peers use different hardware addresses for the server.

When utilizing Mode 4 (LACP):

Add lacp_rate=1 to BONDING_OPTS in the /etc/sysconfig/network-scripts/ifcfg-bond0 NIC configuration file to prevent ping drops during NIC outages.
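For reference, the resulting ifcfg-bond0 for an LACP bond might then resemble the fragment below. This is a hedged sketch only: the exact contents are written by the cmc_bonding script, and the IP values shown are simply the ones from the earlier example command.

```
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.101.35
NETMASK=255.255.255.0
GATEWAY=192.168.101.3
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
```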
