Closed: JakkuSakura closed this issue 3 years ago.
Looks like your code forgot to call phy_wait. So the poll loop actually spins waiting for the RawSocket to have data, which is very inefficient. See example:
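For reference, a poll loop along those lines might look like the following sketch. It assumes a smoltcp 0.7-era API (exact types and signatures vary between versions), and device (the RawSocket), iface, and sockets are assumed to be set up elsewhere:

```rust
use smoltcp::phy::wait as phy_wait;
use smoltcp::time::Instant;
use std::os::unix::io::AsRawFd;

// `device` is the RawSocket; take its fd before the device is moved
// into the interface builder.
let fd = device.as_raw_fd();

loop {
    let timestamp = Instant::now();

    // Process ingress and egress for all sockets in the set.
    if let Err(e) = iface.poll(&mut sockets, timestamp) {
        eprintln!("poll error: {}", e);
    }

    // ... per-socket work here (recv_slice / send_slice) ...

    // Block until the raw socket becomes readable or the next smoltcp timer
    // (retransmission, delayed ACK, ...) is due, instead of busy-looping.
    phy_wait(fd, iface.poll_delay(&sockets, timestamp)).expect("phy_wait failed");
}
```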
Thanks for pointing that out. Though I do let it spin on purpose, since performance is my first priority, at any cost.
while tcp.may_recv() {
    tcp.recv_slice(&mut data)?;
}
doesn't help much. In the end smoltcp seems stuck waiting for something, so the bottleneck is not the CPU.
I have spotted several severe issues.
1. The MTU on localhost is not honored when the remote machine tries to detect the proper path MTU. This is very significant on AWS: my network interface for smoltcp only supports an MTU of 1500, while the one on the remote machine supports 9000. The remote machine then assumes the path MTU is 3000, because smoltcp did not send out an ICMP Fragmentation Needed message when receiving a 2900-byte packet; it simply got a checksum error and dropped the packet.
2. When receiving a TCP packet with a checksum error, smoltcp did not send out an ACK to make the remote retransmit, but instead relied on the remote timing out.
Can you explain the second point a bit more?
Besides the issues you mentioned, these two points from the TCP README section are relevant when you have stuck connections:
However, I guess most remote machines don't have PLPMTU activated either, so this won't help here. Your remote machine probably needs some other trick to learn the right path MTU, if smoltcp is not to blame for not sending out an ICMP for these large packets.
I'm running tracepath on the remote machine:
tracepath some.ip.v4.address
1?: [LOCALHOST] pmtu 9001
1: no reply
2: no reply
3: no reply
4: no reply
5: no reply
...
And I'm only getting the following, repeatedly:
EthernetII src=06-db-18-9e-7d-66 dst=06-67-ae-9e-c3-0a type=IPv4
\ (truncated packet)
Jun 16 09:22:01.745 TRACE main smoltcp_raw_tcp_perf: localhost <- EthernetII src=06-db-18-9e-7d-66 dst=06-67-ae-9e-c3-0a type=IPv4
\ (truncated packet)
I suppose the remote Linux machine is trying to detect the smoltcp side's MTU but couldn't get an answer, so it keeps sending extra-large frames based on its own MTU of 9001.
Packetization Layer Path MTU Discovery (PLPMTU) is not implemented
I don't need smoltcp's PLPMTU, but I do need smoltcp to respond to such extra-large Ethernet frames with an ICMP message.
Your remote machine probably needs some other trick to learn the right path MTU, if smoltcp is not to blame for not sending out an ICMP for these large packets.
Indeed, my network interface has trouble supporting larger frames. But the remote machine may not be something I can control, so I need smoltcp to send out an ICMP message.
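For reference, the message in question is ICMP Destination Unreachable with code 4 (Fragmentation Needed and DF set), which RFC 1191 extends with a Next-Hop MTU field. Below is a standalone sketch of its on-wire layout, independent of smoltcp's own wire types, assuming the quoted datagram has an options-free 20-byte IPv4 header:

```rust
/// Build an RFC 1191 "Fragmentation Needed" message: ICMP type 3
/// (Destination Unreachable), code 4, Next-Hop MTU in bytes 6..8,
/// followed by the offending datagram's IP header plus its first
/// 8 payload bytes (RFC 792).
fn frag_needed(next_hop_mtu: u16, original_datagram: &[u8]) -> Vec<u8> {
    // Quote at most the 20-byte header plus 8 bytes of the original datagram.
    let quoted = &original_datagram[..original_datagram.len().min(28)];
    let mut msg = vec![0u8; 8 + quoted.len()];
    msg[0] = 3;                                   // type: destination unreachable
    msg[1] = 4;                                   // code: fragmentation needed, DF set
    msg[6..8].copy_from_slice(&next_hop_mtu.to_be_bytes());
    msg[8..].copy_from_slice(quoted);
    let sum = internet_checksum(&msg);            // checksum over the whole ICMP message
    msg[2..4].copy_from_slice(&sum.to_be_bytes());
    msg
}

/// Standard one's-complement internet checksum (RFC 1071).
fn internet_checksum(data: &[u8]) -> u16 {
    let mut sum: u32 = 0;
    for chunk in data.chunks(2) {
        let hi = chunk[0] as u32;
        let lo = *chunk.get(1).unwrap_or(&0) as u32;
        sum += (hi << 8) | lo;
    }
    while sum >> 16 != 0 {
        sum = (sum & 0xffff) + (sum >> 16);
    }
    !(sum as u16)
}
```

This ICMP payload would then be wrapped in an IPv4 header addressed back to the sender of the oversized datagram.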
The issue has been solved by #497. Thank you all!
This reddit post says smoltcp achieves 2 Gbps throughput in its benchmark. However, it is only tested on loopback. I created a real-world benchmark, a TCP write-speed tester on another machine, so that I can run nc remote_address 9999 to test my local TCP's read speed. When I switch to smoltcp on top of RawSocket, it did connect and read something, but very soon it became so slow that it did not even complete my benchmark. Any idea why this goes wrong?
benchmark.log: the first IP is from my smoltcp machine, which never completes; the second IP is from my local machine, with nc ip port, which completes.
smoltcp.log
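For context, a write-speed tester of this kind can be as simple as the following sketch (not the author's actual benchmark; the port, chunk size, and total volume are arbitrary placeholders):

```rust
use std::io::Write;
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Accept one connection and push data as fast as possible; the receiving
    // side (nc, or the smoltcp client) measures its read throughput.
    let listener = TcpListener::bind("0.0.0.0:9999")?;
    let (mut stream, peer) = listener.accept()?;
    println!("sending to {}", peer);

    let chunk = vec![0u8; 64 * 1024];
    for _ in 0..16_384 {
        // 16_384 chunks of 64 KiB = 1 GiB in total.
        stream.write_all(&chunk)?;
    }
    Ok(())
}
```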