Open · M0dEx opened this issue 1 year ago
https://tailscale.com/blog/throughput-improvements/ and https://tailscale.com/blog/more-throughput/ might be useful with regard to optimizing TUN performance, which seems to be the problem at the moment (a lot of time is spent in poll_write for the TUN interface).
The changes Tailscale made to wireguard-go are available here: https://github.com/WireGuard/wireguard-go/blob/master/tun/tcp_offload_linux.go
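For reference, the core of Tailscale's approach is to create the TUN device with IFF_VNET_HDR and then enable segmentation offloads via the TUNSETOFFLOAD ioctl, so the kernel hands over coalesced "super-packets" and many segments move through the TUN fd per syscall. Below is a minimal, hypothetical Rust sketch of just the offload-enabling step; it uses the libc crate directly, the constant values are assumed from linux/if_tun.h, and none of this is part of Quincy or rust-tun today:

```rust
use std::os::fd::RawFd;

// Constants from <linux/if_tun.h> (values assumed for Linux x86_64).
const TUNSETOFFLOAD: libc::c_ulong = 0x4004_54d0; // _IOW('T', 208, c_uint)
const TUN_F_CSUM: libc::c_uint = 0x01; // checksum offload (prerequisite for the rest)
const TUN_F_TSO4: libc::c_uint = 0x02; // TCP segmentation offload, IPv4
const TUN_F_TSO6: libc::c_uint = 0x04; // TCP segmentation offload, IPv6

/// Ask the kernel to deliver/accept coalesced TCP "super-packets" instead of
/// individual MTU-sized segments. The TUN device must have been created with
/// IFF_VNET_HDR; every packet read or written afterwards is prefixed with a
/// virtio_net_hdr describing the GSO parameters.
fn enable_tun_offloads(tun_fd: RawFd) -> std::io::Result<()> {
    let flags = TUN_F_CSUM | TUN_F_TSO4 | TUN_F_TSO6;
    // SAFETY: tun_fd must be a valid TUN file descriptor.
    let ret = unsafe { libc::ioctl(tun_fd, TUNSETOFFLOAD, flags) };
    if ret < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}

fn main() {
    // fd 3 is just a placeholder; in practice the fd would come from the TUN library.
    if let Err(e) = enable_tun_offloads(3) {
        eprintln!("TUNSETOFFLOAD failed: {e}");
    }
}
```

The receive path then has to split these super-packets back into MTU-sized IP packets before encryption (and coalesce them again on the other side), which is the bulk of what wireguard-go's tcp_offload_linux.go implements.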
With an MTU of 6000, the throughput nearly triples, to roughly 3 Gbps regardless of data transfer direction.
From the flamecharts, it is clear that more CPU time is spent encrypting the packets, but most of the time is still spent in poll_write for the TUN interfaces.
The CPU usage also decreased to about 60-70 % on both the Server and the Client.
Server flamechart:
Client flamechart:
GSO/GRO support is work-in-progress: https://github.com/ssrlive/rust-tun/pull/45
I can test whether io_uring would improve the performance on Linux; it might require some nontrivial changes in the TUN library, but it could be worth a shot.
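As a rough illustration of the idea (not an actual patch to Quincy or rust-tun), the io-uring crate can batch many TUN reads into a single submission, so the syscall cost is paid once per batch instead of once per packet. A minimal blocking sketch, assuming tun_fd is the raw descriptor of an already-configured TUN device and assuming the io-uring crate as a dependency:

```rust
use io_uring::{opcode, types, IoUring};
use std::os::fd::RawFd;

const BATCH: usize = 32;
const MTU: usize = 1400;

/// Read up to BATCH packets from the TUN device with a single
/// submit_and_wait() call instead of BATCH individual read() syscalls.
fn read_batch(
    ring: &mut IoUring,
    tun_fd: RawFd,
    bufs: &mut [[u8; MTU]; BATCH],
) -> std::io::Result<Vec<(usize, usize)>> {
    for (i, buf) in bufs.iter_mut().enumerate() {
        let read_e = opcode::Read::new(types::Fd(tun_fd), buf.as_mut_ptr(), MTU as u32)
            .build()
            .user_data(i as u64);
        // SAFETY: the buffers outlive the submissions; the queue is sized >= BATCH.
        unsafe { ring.submission().push(&read_e).expect("submission queue full") };
    }
    // One syscall for the whole batch; wait until at least one packet has arrived.
    ring.submit_and_wait(1)?;

    // Collect (buffer index, bytes read) for every completed read. Reads that
    // have not completed yet stay in flight and keep referencing bufs, so bufs
    // must remain alive (in this sketch it lives for the whole program).
    let mut completed = Vec::new();
    for cqe in ring.completion() {
        if cqe.result() > 0 {
            completed.push((cqe.user_data() as usize, cqe.result() as usize));
        }
    }
    Ok(completed)
}

fn main() -> std::io::Result<()> {
    // Hypothetical fd; in practice this would come from the TUN library.
    let tun_fd: RawFd = 3;
    let mut ring = IoUring::new((BATCH as u32) * 2)?;
    let mut bufs = [[0u8; MTU]; BATCH];
    let packets = read_batch(&mut ring, tun_fd, &mut bufs)?;
    println!("received {} packets in one batch", packets.len());
    Ok(())
}
```

The same pattern applies to the write side, and registered buffers could shave off a bit more overhead, but whether any of this beats simply enabling GSO/GRO on the TUN device would need to be measured.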
The performance as of 0.1.6 is worse than expected.
Between two virtual machines on the same virtualized network (capable of about 30 Gbps of throughput), Quincy only manages a fraction of that with an MTU of 1400 bytes.
Server -> Client
Initial profiling did not yield anything suspicious, other than the fact that QuincyTunnel::process_inbound_traffic takes more time (had more samples) than QuincyTunnel::process_outbound_traffic during the Server -> Client data transfer, which is odd, as most of the data transferred should be going through QuincyTunnel::process_outbound_traffic.
The CPU usage on the Server virtual machine is also only about 60 %, balanced across all cores, which could mean either too much IO or that the Client is the bottleneck.
The CPU usage on the Client is much higher, in the 90s.
Server flamechart:
Client flamechart:
Client -> Server
Pretty much the same behaviour as above - QuincyClient::process_inbound_traffic takes more time than QuincyClient::process_outbound_traffic, which is, again, suspicious.
The CPU usage on the Server side is above 90 %, on the Client side only ~70 %.
Server flamechart:
Client flamechart:
Initial conclusions
It seems that the CPU usage on the receiving side is quite high, and that the receiving side spends more time in its respective process_inbound_traffic method, which is highly suspicious (most of the data transferred should be handled by the respective process_outbound_traffic methods, at least that is my initial assumption).
Further investigation will be needed into where the Quincy client and server spend too much time.
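To make the two directions concrete: a QUIC/TUN relay generally runs two loops per peer, one reading packets from the TUN interface and sending them over QUIC, and one taking received QUIC data and writing it back to the TUN interface. The sketch below is purely hypothetical (the relay_* names, the use of QUIC datagrams, and the quinn/tokio/bytes/anyhow dependencies are assumptions, not Quincy's actual code); it only illustrates that on the receiving side of a bulk transfer, nearly all of the data flows through the loop that writes to the TUN device, which matches the time observed in poll_write:

```rust
use bytes::Bytes;
use quinn::Connection;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};

/// Hypothetical receive-side loop: take data that arrived over QUIC and
/// write each packet to the TUN device. On the receiving side of a bulk
/// transfer this loop handles almost all of the payload, and every packet
/// costs one write on the TUN fd (the poll_write hot spot).
async fn relay_quic_to_tun<W>(connection: Connection, mut tun: W) -> anyhow::Result<()>
where
    W: AsyncWrite + Unpin,
{
    loop {
        let packet: Bytes = connection.read_datagram().await?;
        tun.write_all(&packet).await?;
    }
}

/// Hypothetical send-side loop: read packets from the TUN device and ship
/// them to the peer over QUIC.
async fn relay_tun_to_quic<R>(connection: Connection, mut tun: R, mtu: usize) -> anyhow::Result<()>
where
    R: AsyncRead + Unpin,
{
    let mut buf = vec![0u8; mtu];
    loop {
        let n = tun.read(&mut buf).await?;
        connection.send_datagram(Bytes::copy_from_slice(&buf[..n]))?;
    }
}
```

Whether process_inbound_traffic corresponds to the TUN-to-QUIC or the QUIC-to-TUN direction determines which profile is expected, so confirming that mapping is probably the first step of the investigation.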
TODO
Investigate where time is spent in process_inbound_traffic and process_outbound_traffic
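One low-effort way to narrow this down would be to wrap both methods in tracing spans (or plain timers) so the time spent in each shows up per call. A sketch, assuming the tracing, tracing-subscriber, and tokio crates as dependencies and using an empty placeholder struct in place of the real QuincyTunnel:

```rust
use std::time::Instant;
use tracing::{info, instrument};
use tracing_subscriber::fmt::format::FmtSpan;

// Placeholder standing in for the real tunnel type.
struct QuincyTunnel;

impl QuincyTunnel {
    /// #[instrument] emits a span per call, so a subscriber configured to
    /// record span close events (or a flamegraph layer) can attribute time
    /// to this method directly.
    #[instrument(skip(self))]
    async fn process_inbound_traffic(&self) {
        // ... actual packet processing would go here ...
    }

    /// Alternatively, a plain timer around the hot path gives a cheap
    /// per-call measurement without any span machinery.
    async fn process_outbound_traffic(&self) {
        let start = Instant::now();
        // ... actual packet processing would go here ...
        info!(elapsed_us = start.elapsed().as_micros() as u64, "outbound pass finished");
    }
}

#[tokio::main]
async fn main() {
    // Log span close events (including busy time) to stdout; in practice
    // this would hook into Quincy's existing logging setup.
    tracing_subscriber::fmt()
        .with_span_events(FmtSpan::CLOSE)
        .init();

    let tunnel = QuincyTunnel;
    tunnel.process_inbound_traffic().await;
    tunnel.process_outbound_traffic().await;
}
```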