Closed sbenyamin closed 5 years ago
That's a tricky requirement! I think the best option would be to have the talker use launch-time TX so you know when a packet was put on the wire, then at the end of processing on the listener side grab the CPU's high-performance counter time. Then use CPU-time-to-gPTP-time translation to figure out how many nanoseconds passed between the packet being put on the wire and the end of packet processing. I realize this is a "hand wavy" sort of description, but hopefully you get the idea.
Thank you for your response, Andrew. I will look into this and use it to measure the processing time of received packets. I would also like to measure transmit packet processing time. I am not sure whether I can program the board to update TXSTMP for all packets.
The timestamping.c example file distributed with Linux, which Intel points to, gives guidance on this issue.
Hi, I need to measure the time it takes a packet to get processed through my application and the Linux stack all the way to transmission (and would rather avoid editing kernel drivers). I have searched the web as well as this forum and do not see a similar post (issue 604 has some information, but is not exactly the same problem). Basically, at some point in the application I need to sample a reference time when I start packet processing, then sample it again as the packet is about to get transmitted. I am thinking the best solution would be to sample the IEEE 1588 timer just before I start processing the packet (I cannot find a way to access it in the datasheet); then, when the packet is transmitted, the MAC can store the transmission time in the TXSTMP registers (0xB618/C), with status in 0xB614, and I can read that and subtract. Can anyone help me with the following questions?
Thank you in advance for your help -sam