Closed: aidenp2024 closed this 22 hours ago
UPDATE: I tried running udp_latency.c on the client with the frequency set to 1000. This is my server-side output:
----- Offset at time 0 second: 0.020414 -----
----- Offset at time 1 second: 0.043969 -----
----- Offset at time 2 second: 0.045720 -----
----- Offset at time 3 second: -0.003756 -----
----- Offset at time 4 second: -0.003689 -----
----- Offset at time 5 second: -0.010465 -----
----- Offset at time 6 second: -0.008477 -----
----- Offset at time 7 second: -0.003023 -----
----- Offset at time 8 second: -0.004933 -----
----- Offset at time 9 second: -0.008951 -----
| ------------- Summary --------------- |
Total 99751 packets are received in 99.971240 seconds
Average latency: -0.006879 second
Maximum latency: 0.065513 second
Std latency: 0.006023 second
bandwidth: 11.418880 Mbits
Jitter (Latency Max - Min): 0.077593 second
Packet loss: 0.000040
Why is the average latency negative? Any ideas? When I run rtt.py, the average latency is a positive value.
Also, this is my setup: CLIENT -------> SWITCH -------> ROUTER -------> SERVER
The observed behavior is likely caused by clock drift between the CLIENT and SERVER.
The current implementation in udp_latency.c only performs synchronization at the beginning, assuming no significant clock drift occurs during the test. However, your output shows that the clock offset is not stable, which can significantly impact the delay measurements.
I don't exactly know what "burst" you are referring to... If you mean a large throughput, use the following option to send in best-effort mode:
-f "m" means constantly send UDP packets at maximum bandwidth
Oh okay. Can anything be done to stabilize the clock offset?
Thanks! That's what I needed to utilize maximum bandwidth.
There are many protocols for high-precision clock synchronization. You can try PTP or IEEE 802.1AS. An open-source implementation of PTP can be found here:
Thanks for the help. I will try it out. Closing the issue :)
Hi @ChuanyuXue, firstly, thanks for developing such a wonderful tool! I just want to know if you have any idea how to use or convert this to generate bursty traffic that could be used for load testing. Any insights?