SUMMARY
By using tools such as iperf and iptraf, we can test the overall throughput of a backup copy (replication) connection.
ISSUE
How to test replication bandwidth throughput.
RESOLUTION
On the target, run iperf in server mode:
[root@targetVM ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
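By default this runs in the foreground. If you need the listener to keep running after you close your session, iperf (version 2) can also be started as a background daemon with the -D flag; a minimal example:

[root@targetVM ~]# iperf -s -D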
First, test a source that connects across a VPN connection. The following iperf command options will be used:
-P = number of parallel client threads to run
-r = do a bidirectional test individually
-i = pause n seconds between periodic bandwidth reports
-t = time in seconds to transmit (default 10 seconds)
To test the source, run the following command:
[root@backup ~]# iperf -c targetVM -P 4 -r -i 10 -t 120
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to targetVM, TCP port 5001
TCP window size: 96.7 KByte (default)
------------------------------------------------------------
[  6] local 192.168.0.251 port 55062 connected with 192.168.0.11 port 5001
[  7] local 192.168.0.251 port 55064 connected with 192.168.0.11 port 5001
[  8] local 192.168.0.251 port 55063 connected with 192.168.0.11 port 5001
[  5] local 192.168.0.251 port 55061 connected with 192.168.0.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec   267 MBytes   224 Mbits/sec
[  8]  0.0-10.0 sec   267 MBytes   224 Mbits/sec
[  5]  0.0-10.0 sec   268 MBytes   225 Mbits/sec
[  7]  0.0-10.0 sec   268 MBytes   225 Mbits/sec
[SUM]  0.0-10.0 sec  1.04 GBytes   897 Mbits/sec
[  6] 10.0-20.0 sec   265 MBytes   223 Mbits/sec
[  7] 10.0-20.0 sec   266 MBytes   223 Mbits/sec
[  8] 10.0-20.0 sec   266 MBytes   223 Mbits/sec
[  5] 10.0-20.0 sec   265 MBytes   223 Mbits/sec
[SUM] 10.0-20.0 sec  1.04 GBytes   891 Mbits/sec
This output shows an overall throughput of roughly 1 Gbit/sec. Since this is a 1Gb NIC, that is the expected transfer rate. With OpenVPN, you will often see overall throughput significantly lower than what the NIC is capable of; this is an indication that OpenVPN tuning may be required. If you are replicating on the same network as your source (for example, you plan to move the target to a new location and want to send the initial seed over the faster local connection), we suggest disabling OpenVPN. The step below tests UDP packets across the non-VPN connection to better determine whether the issue is OpenVPN requiring tuning or a network-related problem.
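If tuning does prove necessary, the directives below illustrate the kind of settings commonly adjusted in an OpenVPN configuration. The values shown are examples only, not recommendations for every environment, and any fragment, mssfix, or MTU change must match on both ends of the tunnel:

# Illustrative OpenVPN tuning directives (example values; adjust for your link)
tun-mtu 1500
fragment 1300
mssfix 1300
sndbuf 393216
rcvbuf 393216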
To test a source connected via a non-VPN connection (i.e., the local network), the following iperf command option will be added:
-b = set target bandwidth to n bits/sec (default 1 Mbit/sec)
This setting requires UDP, but the -u flag is implied by -b and does not need to be added to your command.
[root@backup ~]# iperf -c targetVM -P 4 -r -i 10 -t 120 -b 1000M
WARNING: option -b implies udp testing
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to targetVM, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 122 KByte (default)
------------------------------------------------------------
[  5] local 192.168.0.251 port 43892 connected with 192.168.0.11 port 5001
[  3] local 192.168.0.251 port 55576 connected with 192.168.0.11 port 5001
[  8] local 192.168.0.251 port 51076 connected with 192.168.0.11 port 5001
[  4] local 192.168.0.251 port 35868 connected with 192.168.0.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   266 MBytes   224 Mbits/sec
[  3]  0.0-10.0 sec   268 MBytes   224 Mbits/sec
[  8]  0.0-10.0 sec   267 MBytes   224 Mbits/sec
[  4]  0.0-10.0 sec   263 MBytes   221 Mbits/sec
[SUM]  0.0-10.0 sec  1.04 GBytes   893 Mbits/sec
[  5] 10.0-20.0 sec   267 MBytes   224 Mbits/sec
[  3] 10.0-20.0 sec   267 MBytes   224 Mbits/sec
[  8] 10.0-20.0 sec   266 MBytes   223 Mbits/sec
[  4] 10.0-20.0 sec   266 MBytes   223 Mbits/sec
[SUM] 10.0-20.0 sec  1.04 GBytes   893 Mbits/sec
Again, this output shows an overall throughput of roughly 1 Gbit/sec, which is the expected transfer rate for a 1Gb NIC. If throughput is below this, the issue may be on the network and not the Unitrends appliance. It is somewhat common for a firewall to have UDP flood protection; we suggest turning this feature off, since the majority of replication traffic is sent over UDP.
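To confirm whether the UDP datagrams are actually arriving (rather than being dropped by a firewall in the path), you can watch for the test traffic on the target while the client test runs. A simple check, assuming the default iperf port and an eth0 interface:

[root@targetVM ~]# tcpdump -ni eth0 udp port 5001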
On the source, you can use iptraf to monitor the actual throughput on eth0. If your system does not have iptraf, it may be installed by running:
yum install iptraf-ng
Once installed, run it:
[root@backup ~]# iptraf-ng
Select "Detailed interface statistics" > eth0
Monitor your outgoing rates. Because this reflects all outbound traffic on eth0, you will notice fluctuations in speed. If this is the first time replication is being sent to the target, keep in mind that the target must hash the new backups, so total throughput may appear lower while you wait for the target to complete the hash. Once the initial seed has been sent, only the changed blocks are hashed, increasing replication throughput.
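If iptraf is not available, outbound throughput on eth0 can be approximated from the kernel's interface counters. A minimal sketch (the 10-second sample window and the eth0 interface name are assumptions; adjust for your environment):

# Sample eth0 transmit bytes twice, 10 seconds apart, then print Mbits/sec
TX1=$(cat /sys/class/net/eth0/statistics/tx_bytes)
sleep 10
TX2=$(cat /sys/class/net/eth0/statistics/tx_bytes)
echo "$(( (TX2 - TX1) * 8 / 10 / 1000000 )) Mbits/sec"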
NOTES
iPerf Guide - TCP, UDP and SCTP speed test tool
Default Port: The default port that iPerf uses is 5001. If this is a problem, you can change the port by using the -p option. This change must be made on both the client and the server, using the same value.
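For example, to run the same test on port 5002 (an arbitrary alternate port chosen for illustration):

[root@targetVM ~]# iperf -s -p 5002
[root@backup ~]# iperf -c targetVM -p 5002 -P 4 -r -i 10 -t 120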