joergdeutschmann-i7 opened 1 year ago
The clocks are synchronized at the beginning of the test. Adjusting for clock drift with a synchronization at the end of the test is on my todo list. You may currently observe some clock drift with longer test durations.
Okay, so the assumption is a symmetric path in terms of delay? Could you point me to the code where this is done?
Yes, it assumes the delays are symmetric while the link is idle. The code is here. This function returns an offset used to convert server time into client time.
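For readers following along: the general technique here is the classic NTP-style offset estimate. This is a minimal sketch of that idea under the symmetric-delay assumption, not crusader's actual code (function names are made up):

```python
def estimate_offset(t0, ts, t1):
    """NTP-style clock offset estimate, assuming symmetric one-way delays.

    t0: client clock when the probe was sent
    ts: server clock when the server stamped the probe
    t1: client clock when the reply arrived

    If the one-way delays are equal, the server stamped the probe at
    client time (t0 + t1) / 2, so the server-minus-client offset is:
    """
    return ts - (t0 + t1) / 2.0


def server_to_client(server_time, offset):
    """Convert a server timestamp into the client's clock domain."""
    return server_time - offset
```

With a true offset of 5 s and a symmetric 50 ms one-way delay, the estimate recovers the offset exactly; any delay asymmetry shows up directly as estimation error, which is why the symmetry assumption matters.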
On Linux it's only a couple of calls to sample TCP_INFO:
This one runs out of band, so it's not useful here: https://www.measurementlab.net/tests/tcp-info/
This one runs inband, but doesn't do any post-processing: https://docs.trafficserver.apache.org/en/9.0.x/admin-guide/plugins/tcpinfo.en.html
This method could be used to hook some other tool built on a common Rust geturl (or equivalent) library: https://linuxgazette.net/136/pfeiffer.html
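For reference, the "couple of calls" on Linux boil down to a `getsockopt(IPPROTO_TCP, TCP_INFO)` on a connected socket. A minimal sketch (Python for brevity; the hand-computed byte offset of `tcpi_rtt` assumes the stable Linux `struct tcp_info` layout, and a real tool would use a proper binding instead):

```python
import socket
import struct

def sample_rtt_us(sock):
    """Read the smoothed RTT (tcpi_rtt, microseconds) from a connected
    Linux TCP socket via getsockopt(TCP_INFO)."""
    TCP_INFO = getattr(socket, "TCP_INFO", 11)  # 11 on Linux
    buf = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 128)
    # Layout assumption: struct tcp_info starts with 8 one-byte fields,
    # then u32 fields; tcpi_rtt is the 16th u32, byte offset 8 + 15*4 = 68.
    (rtt_us,) = struct.unpack_from("<I", buf, 68)
    return rtt_us

if __name__ == "__main__":
    # Loopback demo: a real TCP connection so TCP_INFO is meaningful.
    srv = socket.create_server(("127.0.0.1", 0))
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    cli.sendall(b"ping")
    conn.recv(4)  # at least one data/ACK exchange so RTT gets sampled
    print("smoothed RTT:", sample_rtt_us(cli), "us")
    cli.close(); conn.close(); srv.close()
```

In Rust the same call would go through a libc binding rather than std, since `std::net` doesn't expose TCP_INFO.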
And these calls can be used on Windows:
https://learn.microsoft.com/en-us/windows/win32/winsock/sio-tcp-info https://learn.microsoft.com/en-us/windows/win32/api/mstcpip/ns-mstcpip-tcp_info_v1
Just sampling ULONG RttUs every 10-50 ms and comparing it against the ULONG MinRttUs baseline would be a way of measuring inband. Right now crusader just measures the effectiveness of FQ, not the actual behavior of TCP flows.
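The polling loop behind that RttUs-vs-MinRttUs comparison is simple. A sketch of the idea (names are invented, and `sample_rtt_us` is a stand-in for the platform call: TCP_INFO on Linux, SIO_TCP_INFO on Windows):

```python
import time

def measure_inband(sample_rtt_us, n_samples=50, interval_s=0.02):
    """Poll an RTT sampler every interval_s (10-50 ms is the range
    suggested above) and report each sample's queueing delay relative
    to the running minimum, i.e. the RttUs vs MinRttUs comparison.

    Returns (baseline_min_rtt, list_of_deltas), all in microseconds.
    """
    min_rtt = None
    deltas = []
    for _ in range(n_samples):
        rtt = sample_rtt_us()
        min_rtt = rtt if min_rtt is None else min(min_rtt, rtt)
        deltas.append(rtt - min_rtt)  # queueing delay above the baseline
        time.sleep(interval_s)
    return min_rtt, deltas
```

The delta over the minimum is what exposes bufferbloat inside a flow, which is the inband behavior the comment says crusader currently doesn't capture.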
First of all, a big thanks for this great software, which works really nicely!
In the graphs, the latency for Up and Down is shown. I'm wondering how this is done, considering that client and server are usually different machines with unsynchronized clocks?