Given:
- server srv00, located in one datacenter, connected to the network at 1 Gbit/s;
- server srv01, located in another datacenter, connected at 10 Gbit/s, with bonding configured.
The problem: abnormally low throughput:
[avz@srv00 ~]$ iperf3 -c srv01.my.zone
Connecting to host srv01.my.zone, port 5201
[  4] local xx.xxx.xx.xx port 27108 connected to yyy.yy.yy.yyy port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  5.21 MBytes  43.7 Mbits/sec   40    130 KBytes
[  4]   1.00-2.00   sec  2.45 MBytes  20.6 Mbits/sec    5   79.8 KBytes
[  4]   2.00-3.00   sec  2.02 MBytes  17.0 Mbits/sec    3   52.8 KBytes
[  4]   3.00-4.00   sec  1.96 MBytes  16.4 Mbits/sec    8   51.3 KBytes
[  4]   4.00-5.00   sec  1.96 MBytes  16.4 Mbits/sec    0   68.4 KBytes
[  4]   5.00-6.00   sec  1.53 MBytes  12.8 Mbits/sec    1   65.6 KBytes
[  4]   6.00-7.00   sec  1.96 MBytes  16.4 Mbits/sec    3   67.0 KBytes
[  4]   7.00-8.00   sec  1.53 MBytes  12.8 Mbits/sec    2   47.1 KBytes
[  4]   8.00-9.00   sec  1.47 MBytes  12.3 Mbits/sec    3   51.3 KBytes
[  4]   9.00-10.00  sec  1004 KBytes  8.22 Mbits/sec   14   31.4 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  21.1 MBytes  17.7 Mbits/sec   79             sender
[  4]   0.00-10.00  sec  20.1 MBytes  16.9 Mbits/sec                  receiver
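For context: the run above assumes iperf3 is already listening on srv01, and a reverse-direction run is often useful to see whether the problem depends on which side sends. Neither command below is from the original session, just a sketch using standard iperf3 options:

[root@srv01 ~]# iperf3 -s                   # server side, listens on the default port 5201
[avz@srv00 ~]$ iperf3 -c srv01.my.zone -R   # -R reverses the test: srv01 sends, srv00 receives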
Since tests against another server sitting right next to srv01 (in the same rack) showed no speed problems at all, the first question was: what do the two machines differ in? The most conspicuous difference was that srv01 has bonding configured, while its neighbour (the one with no speed issues) does not.
Check the current bonding mode:
[root@srv01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp2s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:01:02:a3:78:c0
Slave queue ID: 0

Slave Interface: enp2s0f1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:01:02:a3:78:c4
Slave queue ID: 0

[root@srv01 ~]# ethtool bond0 | grep Speed
        Speed: 20000Mb/s
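The same information is exposed through sysfs, which is handier for scripting; a small sketch using the bonding driver's standard sysfs files (sample outputs are illustrative):

[root@srv01 ~]# cat /sys/class/net/bond0/bonding/mode     # e.g. "balance-rr 0"
[root@srv01 ~]# cat /sys/class/net/bond0/bonding/slaves   # e.g. "enp2s0f0 enp2s0f1"
[root@srv01 ~]# cat /sys/class/net/bond0/bonding/miimon   # e.g. "1"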
And in the config file:
[root@srv01 ~]# grep BONDING /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=balance-rr"
BONDING_MASTER="yes"
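Only the BONDING_* lines are shown above. For reference, a typical RHEL/CentOS layout for such a bond looks roughly like the sketch below; the addressing and the slave files on srv01 were not shown in the original, so treat this as an illustration rather than the actual configuration:

# master: /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=balance-rr"
BOOTPROTO=none
ONBOOT=yes

# one of the two slaves: /etc/sysconfig/network-scripts/ifcfg-enp2s0f0
DEVICE=enp2s0f0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes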
Time to experiment: change the mode from balance-rr to active-backup:
[root@srv01 ~]# grep BONDING /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=active-backup"
BONDING_MASTER="yes"
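Since the file is NetworkManager-managed (judging by the Bond_connection_1 naming), the same change can also be made with nmcli instead of editing the ifcfg file by hand; the connection name below is a guess derived from the file name:

[root@srv01 ~]# nmcli connection modify "Bond connection 1" \
      bond.options "miimon=1,updelay=0,downdelay=0,mode=active-backup"
[root@srv01 ~]# nmcli connection up "Bond connection 1"    # re-activate the bond to apply the change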
Restart the network and check:
[root@srv01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp2s0f0
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp2s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:01:02:a3:78:c0
Slave queue ID: 0

Slave Interface: enp2s0f1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:01:02:a3:78:c4
Slave queue ID: 0
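In active-backup mode it is also useful to know which slave is actually carrying traffic at the moment; the bonding driver reports it via sysfs (sample output is illustrative):

[root@srv01 ~]# cat /sys/class/net/bond0/bonding/active_slave   # e.g. "enp2s0f0"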
Notably, the speed reported by ethtool changed from 20 to 10 Gbit/s:
[root@srv01 ~]# ethtool bond0 | grep Speed
        Speed: 10000Mb/s
which is only logical: in this mode exactly one interface is active at any given time, while the second one sits in standby.
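To confirm that the standby slave really takes over, a quick failover check could look like the sketch below. It briefly takes a slave down, so it only belongs in a maintenance window; the commands are not from the original session:

[root@srv01 ~]# ip link set enp2s0f0 down
[root@srv01 ~]# cat /sys/class/net/bond0/bonding/active_slave   # should now report enp2s0f1
[root@srv01 ~]# ip link set enp2s0f0 up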
Test the throughput again:
[avz@srv00 ~]$ iperf3 -c srv01.my.zone
Connecting to host srv01.my.zone, port 5201
[  4] local xx.xxx.xx.xx port 27108 connected to yyy.yy.yy.yyy port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  94.5 MBytes   793 Mbits/sec    0   4.94 MBytes
[  4]   1.00-2.00   sec   114 MBytes   954 Mbits/sec    0   4.94 MBytes
[  4]   2.00-3.00   sec   112 MBytes   944 Mbits/sec    0   4.94 MBytes
[  4]   3.00-4.00   sec   110 MBytes   923 Mbits/sec    8   2.56 MBytes
[  4]   4.00-5.00   sec  81.2 MBytes   682 Mbits/sec    0   2.71 MBytes
[  4]   5.00-6.00   sec  86.2 MBytes   724 Mbits/sec    0   2.83 MBytes
[  4]   6.00-7.00   sec  88.8 MBytes   744 Mbits/sec    0   2.92 MBytes
[  4]   7.00-8.00   sec  92.5 MBytes   776 Mbits/sec    0   2.99 MBytes
[  4]   8.00-9.00   sec  92.5 MBytes   776 Mbits/sec    0   3.04 MBytes
[  4]   9.00-10.00  sec  95.0 MBytes   797 Mbits/sec    0   3.07 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   967 MBytes   811 Mbits/sec    8             sender
[  4]   0.00-10.00  sec   959 MBytes   804 Mbits/sec                  receiver

iperf Done.
Beautiful, tens of times faster.
The conclusion: the aggregation on the switch side was configured incorrectly (it did not match the bonding configuration on the server side). A likely mechanism: in balance-rr the bond transmits frames with the same MAC address alternately out of both slaves, so if the switch ports are not grouped into a single logical channel, the switch's MAC table keeps flapping between the two ports and traffic gets dropped or reordered; TCP treats that as loss, which is exactly what the Retr column and the collapsing congestion window in the first test show.
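Symptoms like this can be confirmed from the sending side with standard tools while a test is running; both commands below are generic diagnostics, not taken from the original session:

[avz@srv00 ~]$ ss -ti dst srv01.my.zone        # per-connection TCP info, including retransmit counters
[avz@srv00 ~]$ netstat -s | grep -i retrans    # system-wide TCP retransmission statistics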
From the official Linux bonding documentation:
The active-backup, balance-tlb and balance-alb modes do not require any specific configuration of the switch.
The balance-rr, balance-xor and broadcast modes generally require that the switch have the appropriate ports grouped together. The nomenclature for such a group differs between switches, it may be called an "etherchannel" (as in the Cisco example, above), a "trunk group" or some other similar variation. For these modes, each switch will also have its own configuration options for the switch's transmit policy to the bond. Typical choices include XOR of either the MAC or IP addresses. The transmit policy of the two peers does not need to match. For these three modes, the bonding mode really selects a transmit policy for an EtherChannel group; all three will interoperate with another EtherChannel group.
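If the goal on srv01 is actual link aggregation rather than plain failover, the usual clean option is LACP on both ends: mode=802.3ad on the server plus a matching LACP port-channel on the switch. A hedged sketch of the server-side option string only (values are illustrative; note that a single TCP flow will still be pinned to one 10G slave by the transmit hash):

BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"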