Under the current design, dropping a packet (either because the code isn't keeping up or due to network issues) results in a new acquisition. With streaming data this is obvious, but when triggering you expect a new acquisition for each event, so a dropped packet between events could go unnoticed.
One suggestion would be a node that collects a count of dropped packets (and maybe other statistics) and writes them to the file when it closes out.
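As a rough sketch of what such a node might look like (the class name, the sequence-number callback, and the JSON sidecar file are all assumptions for illustration, not the project's actual node API):

```python
import json


class DropStatsNode:
    """Sketch of a node that tallies dropped packets and writes the
    totals out alongside the data file when the acquisition closes.

    Names and callbacks here are illustrative assumptions, not the
    project's real API.
    """

    def __init__(self, stats_path):
        self.stats_path = stats_path  # assumed: sidecar file next to the data file
        self.dropped = 0
        self.last_seq = None

    def on_packet(self, seq):
        # A gap in sequence numbers means packets were lost, whether to
        # the network or to the consumer falling behind.
        if self.last_seq is not None and seq > self.last_seq + 1:
            self.dropped += seq - self.last_seq - 1
        self.last_seq = seq

    def close(self):
        # Persist the counter when the file closes out, so a drop
        # between triggered events is recorded rather than silently lost.
        with open(self.stats_path, "w") as f:
            json.dump({"dropped_packets": self.dropped}, f)
```

The key point is just that the counter survives across events and gets flushed at close, so drops that don't interrupt any single event still show up in the output.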