Closed Milind-Blaze closed 5 months ago
The sample code is single-threaded. Adding a "sleep(5)" call will suspend all operation for 5 seconds -- no packet will be sent or processed. But the peer's operations continue: timers fire, packets get resent, etc. This is going to cause a number of disruptions.
If you want to measure latency, your best bet is to add timestamps to your application messages. Or maybe add code to produce a qlog on both client and server. Qlog provides a summary in JSON of the exchange of packets. It can be visualized using existing tools, see https://qvis.quictools.info/. Or you can write a simple parser in JavaScript or Python, and analyze the data.
@huitema I see, thank you for letting me know! I am currently doing something equivalent to adding timestamps to application layer messages. This allows me to measure latency for messages when they are sent back to back.
Would you be able to give me some pointers on how to introduce some delay between these messages (if there is a way to) without rewriting the whole sample client/server code? More specifically, how do you keep a connection idle in picoquic? That is the traffic pattern I want to replicate:
(client request --> response 1 from server --> idle --> response 2 from server --> idle ... )
Thank you!
You will need to change the state machine of the application. Currently it is simple: the server gets a message from the client and processes it immediately. What you will need is:
1) Store the message but not process it.
2) Use a "loop callback" to tell the underlying socket loop to wait until the time to process the message has come.
3) When the time has come, submit the message to the server.
The key is to not stop the underlying socket loop. I am afraid that this will involve a substantial coding effort.
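The three steps above can be sketched roughly as follows. This is a self-contained illustration, not picoquic code: `deferred_msg_t`, `defer_message`, `time_until_due`, and `process_if_due` are hypothetical names. In real picoquic, I believe step 2 would hook into the packet-loop callback's time-check event (see sockloop.h for the exact API), which lets the application bound how long the socket loop sleeps without ever stopping it.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical per-connection state: one deferred message and the time
 * at which it should be processed. In real code this would live in the
 * server's callback context. */
typedef struct deferred_msg_t {
    uint8_t data[256];
    size_t length;
    uint64_t due_time;   /* microseconds */
    int pending;
} deferred_msg_t;

/* Step 1: store the incoming message instead of processing it. */
static void defer_message(deferred_msg_t *d, const uint8_t *bytes,
                          size_t length, uint64_t current_time,
                          uint64_t delay_usec)
{
    memcpy(d->data, bytes, length);
    d->length = length;
    d->due_time = current_time + delay_usec;
    d->pending = 1;
}

/* Step 2: called from the loop's time-check hook; returns how long the
 * socket loop may sleep without missing the deferred deadline. */
static uint64_t time_until_due(const deferred_msg_t *d, uint64_t current_time)
{
    if (!d->pending || d->due_time <= current_time) {
        return 0; /* process now */
    }
    return d->due_time - current_time;
}

/* Step 3: when the deadline has passed, hand the message to the server.
 * Returns 1 if the message was processed on this pass. */
static int process_if_due(deferred_msg_t *d, uint64_t current_time)
{
    if (d->pending && current_time >= d->due_time) {
        /* ... invoke the normal server-side processing here ... */
        d->pending = 0;
        return 1;
    }
    return 0;
}
```

The loop keeps running throughout; it just wakes up at (or before) `due_time`, checks the deferred message, and otherwise keeps sending and receiving packets as usual.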
Thank you for letting me know!
Hi all, I am using the client and server code in the sample directory to test application level latency of picoquic. I am doing this for a long running flow where the typical application pattern is a few KB of data (I will call this a chunk) followed by a pause of a few seconds and then more data and this repeated over and over again.
Each time data is sent, I want to measure the time between the server starting to send a chunk and the client receiving the last bit of that data. Since the data will be split over several packets, I am measuring the time between the first packet being sent by the server and the last one being received by the client. I add a `sleep(5)` call in the server_callback to trigger a wait every time a new chunk of data is sent. I also log a timestamp at the server when a chunk is sent and one at the client when it is fully received. However, this is producing strange results.
Note that I am running the client and server over localhost.