Mellich opened this issue 1 year ago
Hi @Mellich
Could you clarify this: "It occurs using the UDP and TCP stack."
I can see a mechanism whereby backpressure from the RX pipeline causes the UDP stack to drop packets. This shouldn't happen for the TCP stack though.
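To illustrate the distinction (a purely conceptual model, not the actual RTL): without retransmission, a stalled RX path directly translates into lost packets, whereas a retransmitting stack only slows down.

```cpp
// Conceptual model only: a small RX buffer that sometimes stalls, fed by a
// sender without retransmission (UDP-like) vs. one that retries (TCP-like).
#include <cstdio>
#include <deque>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution stalled(0.3);  // downstream not ready 30% of the time
    const unsigned rx_capacity = 4;            // small RX buffer to force backpressure

    auto run = [&](bool retransmit) {
        std::deque<int> rx;                    // models the RX pipeline buffer
        int delivered = 0, dropped = 0;
        for (int pkt = 0; pkt < 1000; ++pkt) {
            bool accepted = false;
            do {
                if (!stalled(rng) && !rx.empty()) rx.pop_front();  // consumer drains when ready
                if (rx.size() < rx_capacity) { rx.push_back(pkt); accepted = true; }
                else if (!retransmit) { ++dropped; break; }        // UDP-like: packet is lost
            } while (!accepted);                                   // TCP-like: keep retrying
            if (accepted) ++delivered;
        }
        std::printf("%s: delivered=%d dropped=%d\n",
                    retransmit ? "TCP-like" : "UDP-like", delivered, dropped);
    };

    run(false);  // no retransmission: backpressure becomes packet loss
    run(true);   // retransmission: backpressure only throttles the sender
}
```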
Please add ILAs to the input and output of the UDP stack to give us visibility into what's going on, and share the waveforms.
> Could you clarify this: "It occurs using the UDP and TCP stack."
> I can see a mechanism whereby backpressure from the RX pipeline causes the UDP stack to drop packets. This shouldn't happen for the TCP stack though.
I executed the same experiment with another bitstream containing the TCP stack. There, too, the execution gets stuck after several iterations when large message sizes are used.
The issue occurred more frequently for message sizes of 512 KB and above (`-s 131072`). I mainly tested powers of two so far, but there seems to be no strict threshold; the issue just becomes more likely the larger the message.
So far I did not observe many differences between the UDP and TCP bitstreams, so I would assume this is an issue independent of the network stack... or two different issues leading to similar behavior.
I have been able to replicate this bug on our infrastructure with the following settings: `bin/test -d -u -f -x ${XCLBIN_UDP} -b 1024 -s 131072 --rsfec -n 2000`
I have changed @Mellich's code a little bit in my branch so that it automatically prints some statistics if the send/recv gets stuck. I've attached the statistics that I got from our infrastructure. Most notably, the CMAC numbers seem to match, but the Network Layer numbers don't. Is this maybe a problem with VNx instead of ACCL? (This of course doesn't explain the TCP problems.)
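The change is essentially a watchdog around the blocking call, roughly of this shape (a simplified sketch, not the exact code in my branch; `dump_stats` is a placeholder for the real counter readout):

```cpp
// Sketch only: wrap a blocking send/recv so that, if it never returns,
// the CMAC / Network Layer counters get printed before giving up.
#include <chrono>
#include <cstdlib>
#include <future>
#include <iostream>
#include <thread>
#include <utility>

// Placeholder: the real code reads the counters over the card's register interface.
void dump_stats() { std::cerr << "(dump CMAC and Network Layer counters here)\n"; }

// Run a blocking operation; if it has not returned after `timeout`,
// treat it as stuck, dump the statistics and abort.
template <typename F>
void run_with_watchdog(F &&op, std::chrono::seconds timeout) {
    auto fut = std::async(std::launch::async, std::forward<F>(op));
    if (fut.wait_for(timeout) == std::future_status::timeout) {
        std::cerr << "operation appears stuck after " << timeout.count() << "s\n";
        dump_stats();
        std::abort();  // the hung call cannot be cancelled cleanly, so just bail out
    }
    fut.get();         // re-throw any exception from the operation
}

int main() {
    // Stand-in for the blocking accl send/recv: pretend the call hangs forever.
    run_with_watchdog([] { std::this_thread::sleep_for(std::chrono::hours(1)); },
                      std::chrono::seconds(2));
}
```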
Rank 0:
CMAC stats:
Cycle count: 831533921
Tx:
bad_fcs: 0
bytes: 305785848
good_bytes: 305785848
good_packets: 195108
packets: 195108
packets_1024_1518B: 0
packets_128_255B: 0
packets_1519_1522B: 0
packets_1523_1548B: 0
packets_1549_2047B: 193006
packets_2048_4095B: 0
packets_256_511B: 0
packets_4096_8191B: 0
packets_512_1023B: 566
packets_64B: 1536
packets_65_127B: 0
packets_8192_9215B: 0
packets_large: 0
packets_small: 0
pause: 0
user_pause: 0
Rx:
bad_fcs: 0
bytes: 306265447
good_bytes: 306265447
good_packets: 194461
packets: 194461
packets_1024_1518B: 0
packets_128_255B: 0
packets_1519_1522B: 0
packets_1523_1548B: 0
packets_1549_2047B: 193347
packets_2048_4095B: 0
packets_256_511B: 13
packets_4096_8191B: 0
packets_512_1023B: 567
packets_64B: 534
packets_65_127B: 0
packets_8192_9215B: 0
packets_bad_fcs: 0
packets_fragmented: 0
packets_jabber: 0
packets_large: 0
packets_oversize: 0
packets_small: 0
packets_toolong: 0
packets_undersize: 0
pause: 0
stomped_fcs: 0
user_pause: 0
Network Layer stats:
udp out bytes = 302203248
ethhi_out_bytes = 304913256
eth_out_bytes = 305005416
udp in bytes = 301939892
udp app out bytes = 296783232
udp app in bytes = 296524608
Rank 1:
CMAC stats:
Cycle count: 3733636461
Tx:
bad_fcs: 0
bytes: 306325932
good_bytes: 306325932
good_packets: 195450
packets: 195450
packets_1024_1518B: 0
packets_128_255B: 0
packets_1519_1522B: 0
packets_1523_1548B: 0
packets_1549_2047B: 193347
packets_2048_4095B: 0
packets_256_511B: 0
packets_4096_8191B: 0
packets_512_1023B: 567
packets_64B: 1536
packets_65_127B: 0
packets_8192_9215B: 0
packets_large: 0
packets_small: 0
pause: 0
user_pause: 0
Rx:
bad_fcs: 0
bytes: 305724888
good_bytes: 305724888
good_packets: 194115
packets: 194115
packets_1024_1518B: 0
packets_128_255B: 0
packets_1519_1522B: 0
packets_1523_1548B: 0
packets_1549_2047B: 193006
packets_2048_4095B: 0
packets_256_511B: 12
packets_4096_8191B: 0
packets_512_1023B: 566
packets_64B: 531
packets_65_127B: 0
packets_8192_9215B: 0
packets_bad_fcs: 0
packets_fragmented: 0
packets_jabber: 0
packets_large: 0
packets_oversize: 0
packets_small: 0
packets_toolong: 0
packets_undersize: 0
pause: 0
stomped_fcs: 0
user_pause: 0
Network Layer stats:
udp out bytes = 302737176
ethhi_out_bytes = 305451972
eth_out_bytes = 305544132
udp in bytes = 302203248
udp app out bytes = 297307584
udp app in bytes = 296783232
Traced this problem back to ACCL applying backpressure into the POE, which causes packet loss with UDP, and with TCP when RX bypass is enabled.
The current work-around is to use TCP and disable RX bypass. I made this configuration the default for now.
Keeping the issue open while I debug the cause of the backpressure.
Repeated calls of send/recv of the following form get stuck after several iterations on two ranks:
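In simplified form the pattern looks roughly like this (a sketch only; the `send`/`recv` names and arguments are approximations of the ACCL host API, see the linked test below for the actual code):

```cpp
// Simplified, self-contained sketch of the failing pattern. MockAccl stands in
// for the real ACCL instance; signatures and setup are approximations.
#include <chrono>
#include <thread>
#include <vector>

struct MockAccl {
    // In the real test these are blocking calls into the ACCL offload kernel.
    void send(std::vector<float> &buf, unsigned count, int dst, int tag) {}
    void recv(std::vector<float> &buf, unsigned count, int src, int tag) {}
};

void exchange(MockAccl &accl, int rank, int iterations, unsigned count) {
    std::vector<float> send_buf(count), recv_buf(count);
    const int peer = (rank == 0) ? 1 : 0;
    for (int i = 0; i < iterations; ++i) {
        if (rank == 0) {
            accl.send(send_buf, count, peer, /*tag=*/0);  // blocking send to the other rank
            accl.recv(recv_buf, count, peer, /*tag=*/0);  // blocking receive of the reply
        } else {
            accl.recv(recv_buf, count, peer, /*tag=*/0);
            accl.send(send_buf, count, peer, /*tag=*/0);
        }
        // A sufficiently long sleep here makes the hang less frequent:
        // std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

int main() {
    MockAccl accl;
    exchange(accl, /*rank=*/0, /*iterations=*/2000, /*count=*/131072);
}
```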
The behavior seems to be non-deterministic and may only appear with large message sizes and high numbers of repetitions. It happens more frequently for FPGA-to-FPGA communication, as shown in the example, but I also observed it for CPU-to-CPU communication via ACCL. It occurs using the UDP and TCP stack.
Setting a sufficiently long sleep between the iterations seems to increase stability.
I modified the XRT tests to show the described behavior here: https://github.com/Mellich/ACCL/blob/f805e8f87a91878228173668553ce25f9b9eaa31/test/host/xrt/test.cpp#L347
Using the branch above, the test gets stuck reliably for me when executed with the following command:
Example dump of CMAC and network layer status of the UDP version after the execution got stuck:
Message size: 1MB
Output of test:
Rank 0:
Rank 1:
All sent packets are also received by the network layer of the other rank, so no data seems to get lost over the link. However, there is a discrepancy between sent and received packets on both ranks. Shouldn't the count of packets be the same in this scenario?
The `recv` should block the subsequent send, so rx and tx should stay in balance.