Yes, batch timestamping is the intended behaviour. Changing this by putting a
timestamp in individual buffers as packets are processed (which still would not
correspond to the actual reception time) would cause a significant loss of
performance at high rates, with no real advantage. Note that the software
timestamping done in the network stack does not reflect arrival times either:
even without netmap you get an interrupt from the NIC with one or more
packets, and then the napi/softintr thread starts timestamping packets as it
processes them.
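For concreteness, here is a minimal sketch (illustrative, not from this thread)
of a receiver relying on the batch timestamp: with NR_TIMESTAMP set on the
ring, the kernel stamps ring->ts once per rxsync, and every slot drained
afterwards shares that value. It assumes netmap's standard netmap_user.h
helpers; "netmap:eth0" is a placeholder interface name.

    #include <poll.h>
    #include <stdio.h>
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    int main(void)
    {
        struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
        if (d == NULL)
            return 1;
        struct netmap_ring *ring = NETMAP_RXRING(d->nifp, d->first_rx_ring);
        ring->flags |= NR_TIMESTAMP;      /* ask for a timestamp on each sync */

        struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
        while (poll(&pfd, 1, -1) > 0) {
            while (!nm_ring_empty(ring)) {
                struct netmap_slot *slot = &ring->slot[ring->cur];
                /* every packet in this batch carries the same ring->ts */
                printf("%ld.%06ld len=%u\n", (long)ring->ts.tv_sec,
                       (long)ring->ts.tv_usec, slot->len);
                ring->cur = nm_ring_next(ring, ring->cur);
            }
            ring->head = ring->cur;       /* return consumed slots to the kernel */
        }
        nm_close(d);
        return 0;
    }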
The "out of order" packets (I assume you refer to packets from different
queues) phenomenon could be observed also with the standard drivers.
In passing: the D(), RD() and ND() macros are generic debugging helpers not
related to packet timestamps.
Original comment by rizzo.un...@gmail.com on 30 Sep 2014 at 7:41
Hi Rizzo:
Thank you for your quick response. What I meant by "out of order" was purely
from a post-processing standpoint. We are using netmap as a pcap logger, but
post-processing tools such as Wireshark attempt to index packets by timestamp,
so multiple packets sharing the same timestamp are not presented in the
correct order. I understand the limited accuracy of software timestamping in
general. Just out of curiosity, what is the performance penalty of putting a
timestamp on individual buffers? I understand much optimization has gone into
netmap; would it still be faster than the standard networking stack?
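For what it's worth, the collision is baked into the file format: the classic
pcap record header stores the timestamp as whole seconds plus microseconds, so
two packets written from the same batch carry identical sort keys. A minimal
sketch of such a record writer (field layout per the standard pcap format;
write_record is a made-up helper name):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/time.h>

    /* Classic pcap per-record header: microsecond resolution only. */
    struct pcap_rec_hdr {
        uint32_t ts_sec;   /* seconds since the epoch */
        uint32_t ts_usec;  /* microseconds within the second */
        uint32_t incl_len; /* bytes saved in the file */
        uint32_t orig_len; /* bytes on the wire */
    };

    /* Write one record; every packet in a batch reuses the same batch_ts,
     * so consecutive records end up with identical ts_sec/ts_usec. */
    static void write_record(FILE *f, const struct timeval *batch_ts,
                             const void *pkt, uint32_t len)
    {
        struct pcap_rec_hdr h = {
            .ts_sec   = (uint32_t)batch_ts->tv_sec,
            .ts_usec  = (uint32_t)batch_ts->tv_usec,
            .incl_len = len,
            .orig_len = len,
        };
        fwrite(&h, sizeof(h), 1, f);
        fwrite(pkt, len, 1, f);
    }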
Original comment by morph...@gmail.com on 30 Sep 2014 at 8:29
The reordering is a bug in whatever presentation tool you are using; it should
use a stable sort algorithm.
With microsecond timestamp resolution there is no way netmap or any other
capture library can guarantee that all packets have different timestamps (they
can be 67 ns apart on the wire).
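(For reference, 67 ns is presumably the minimum packet spacing on a 10 Gbit/s
link: a minimum 64-byte frame plus 8 bytes of preamble and a 12-byte
inter-frame gap is 84 bytes = 672 bits, and 672 bits / 10 Gbit/s = 67.2 ns.)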
Original comment by rizzo.un...@gmail.com on 1 Oct 2014 at 10:16
Original issue reported on code.google.com by morph...@gmail.com on 30 Sep 2014 at 7:19