The egg_reader can currently be configured to read a number of records, with the special case that if set to 0, it will continue reading until it reaches the end of the file. For some tests it would be desirable to have one or more of the following options available:
- Loop over the input file multiple times: this should not be a special value of N records, but an additional (bool?) flag. That way one can cover cases like "read the next N records or to the end of the file, whichever comes first" and "read N further records, even across the file boundary."
- Configure with a list of egg files and read each of them in sequence: this would require some care, because each file would need to be opened and closed, ideally without significant disruption to the cadence of the data output rate.
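One way to keep the cadence steady across file boundaries for the second option is to open the next file before closing the current one, so the only work at the boundary is a handoff rather than a close-then-open. A minimal sketch of that idea, in plain Python with illustrative names (this is not psyllid's or the egg_reader's actual API, and real egg files are not fixed-size binary records):

```python
def read_record(f, record_size=4):
    """Read one fixed-size record; return None at end of file (illustrative)."""
    data = f.read(record_size)
    return data if len(data) == record_size else None

def records_from_files(paths, record_size=4):
    """Yield records from each file in sequence.

    The following file is opened *before* the current one is closed, so
    the gap between files is a single handoff rather than a close+open,
    which helps keep the output rate steady at the boundary.
    """
    next_f = open(paths[0], "rb") if paths else None
    for i in range(len(paths)):
        f = next_f
        # Pre-open the following file while the current one is still active.
        next_f = open(paths[i + 1], "rb") if i + 1 < len(paths) else None
        try:
            while (rec := read_record(f, record_size)) is not None:
                yield rec
        finally:
            f.close()
```

Whether the pre-open happens eagerly (as above) or lazily just before the current file is exhausted is a tuning choice; the point is only that the consumer never waits on a cold `open()`.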
Notes:
In all cases above, we should (I think) do something to break the acquisition (maybe not, if reading multiple egg files that are from the same stream... this gets complicated).
I want to have something ready soon so that it can be used to better stress test psyllid in the near term. Probably a version which loops only a single file and creates a gap in the packet ID each time the file gets looped... will that result in the acquisition ID being incremented?
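The looping-with-a-gap idea can be sketched abstractly: each pass re-emits the same records, but the packet ID jumps by some gap at the loop boundary, and a downstream consumer that treats any non-consecutive packet ID as the start of a new acquisition would then increment its acquisition ID once per loop. All names here are illustrative assumptions, not psyllid's actual node or event logic:

```python
def looped_packet_ids(n_records, n_loops, id_gap=10):
    """Yield packet IDs for n_loops passes over a file of n_records
    records, inserting a gap of id_gap at each loop boundary."""
    pkt_id = 0
    for loop in range(n_loops):
        if loop > 0:
            pkt_id += id_gap  # deliberate discontinuity at the loop point
        for _ in range(n_records):
            yield pkt_id
            pkt_id += 1

def count_acquisitions(packet_ids):
    """Count acquisitions, assuming (hypothetically) that the consumer
    starts a new one whenever packet IDs are not consecutive."""
    acq = 0
    prev = None
    for pid in packet_ids:
        if prev is None or pid != prev + 1:
            acq += 1
        prev = pid
    return acq
```

Under that assumption, `count_acquisitions(looped_packet_ids(5, 3))` gives 3, i.e. one acquisition per loop; whether psyllid's acquisition-ID logic actually behaves this way is exactly the open question above.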
I think this is probably also needed before it becomes useful to have a fast packet producer node.