I have finally achieved higher throughput (bandwidth) between two machines (servers running Ubuntu 18.04 server) connected via two bonded 10G CAT7 cables through a TP-LINK T1700X-16TS smart switch. My question is: why does setting up Link Aggregation Groups on the smart switch lower the bandwidth between the two machines?

The cables are connected to a single Intel X550-T2 NIC in each machine (each card has two RJ45 ports), plugged into a PCI-E x8 slot.

The first thing I did was to create, in the switch's configuration, static LAG groups containing the two ports that each machine was connected to. On each box, I created a bond containing the two ports of the Intel X550-T2 card. Note the 9000-byte MTU (for jumbo packets) and balance-rr. The transmit-hash-policy is layer3+4, though as far as I can tell that only matters for the XOR-type modes.

Given these settings, I can now use iperf (iperf3) to test bandwidth between the machines:

    iperf3 -s    (on machine1)

I get something like 9.9 Gbits per second, very close to the theoretical max of a single 10G connection. But I'm using round-robin, and I have two 10G cables between the machines, so (theoretically) I should be able to get 20G of bandwidth, right?

Weirdly, I next deleted the LAG groups from the smart switch. Now, on the Linux side I have bonded interfaces, but to the switch there are no bonds (no LAG). I run iperf3 again, watching the Interval / Transfer / Bandwidth / Retr / Cwnd columns. Huh, now I get 15.4 Gbits/sec (sometimes up to 16.0)!

The retransmits worry me (I was getting zero when I had the LAGs set up), but now I am at least getting some advantage from the bond.
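For reference, here is roughly what the bond configuration looks like on each box. This is a minimal netplan sketch (netplan being Ubuntu 18.04's default network configuration tool); the file name, the interface names (enp1s0f0, enp1s0f1), and the addresses are placeholders, not the actual values from my setup:

    # /etc/netplan/01-bond.yaml (hypothetical path)
    network:
      version: 2
      renderer: networkd
      ethernets:
        # the two ports of the X550-T2; names are placeholders
        enp1s0f0: {mtu: 9000}
        enp1s0f1: {mtu: 9000}
      bonds:
        bond0:
          interfaces: [enp1s0f0, enp1s0f1]
          mtu: 9000                 # jumbo packets
          addresses: [10.0.0.1/24]  # placeholder subnet
          parameters:
            mode: balance-rr
            transmit-hash-policy: layer3+4

Apply it with sudo netplan apply; machine2 gets the same config with a different address (e.g. 10.0.0.2/24).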
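The iperf3 invocation, spelled out with both ends (10.0.0.1 being machine1's bond address in the sketch above):

    # on machine1 (server)
    iperf3 -s

    # on machine2 (client), run a 30-second test
    iperf3 -c 10.0.0.1 -t 30

One thing worth knowing when interpreting these numbers: with hash-based distribution (which static switch LAGs typically use, and which layer3+4 selects on the Linux side), a single TCP stream always hashes onto one physical link and so can never exceed ~10G. Running iperf3 -c 10.0.0.1 -P 4 opens four parallel streams and gives the hash a chance to spread the load across both links.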
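To double-check that the bond actually came up in round-robin mode with the jumbo MTU, the standard diagnostics are:

    # bonding mode, slave interfaces, and per-link status
    cat /proc/net/bonding/bond0

    # confirm the 9000-byte MTU took effect
    ip link show bond0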