jsallay opened 1 year ago
I looked and see that there are 8 clocks presently defined in the C++20 standard: https://en.cppreference.com/w/cpp/header/chrono. I'll also note that there is a new `clock_cast` function that allows us to convert from one clock to another. I don't think it would be helpful to support 8 different clocks in the pmt library, but I think we can eliminate several of them.
The notes for the `high_resolution_clock` state: "The high_resolution_clock is not implemented consistently across different standard library implementations, and its use should be avoided."
The `utc_clock`, `tai_clock`, and `gps_clock` are closely related, and it is fairly simple math to convert between them without loss of precision. `tai_clock` and `gps_clock` both contain functions to convert to/from UTC. `utc_clock` has a function for converting to `system_clock`. I think we really only need to consider one of these four.
`file_clock` does not appear to be a relevant choice to me.
`local_t` doesn't seem like a good choice to me either. Users can convert a `utc_clock` or other clock to local time very easily, and avoiding it prevents a lot of potential confusion.
The last clock is the `steady_clock`. There are times when having a guaranteed monotonically increasing clock is useful, such as measuring how long something takes. `tai_clock` appears to fulfill this condition as well.

So it looks to me like if we support only one clock in pmts, `tai_clock` is probably the best choice. It is easily convertible to any of the other types, and it is also a steady clock.
If we were to support multiple clocks, then I would probably pick `steady_clock` and `utc_clock`.
It looks like we also need to consider the precision offered. The `period` used in the clocks is system dependent but looks like it is generally 1 ns. I think this should be fixed rather than system dependent, especially because the timestamp will frequently come from an SDR rather than the host system. Is there a need to go beyond nanoseconds? I know that VITA 49 allows for picoseconds in the timestamp.
There's the clock specified by the source/sink (hardware or otherwise) which is sometimes absolute and sometimes a data rate. This would be the USRP, RTL, audio, etc. clocks.
Based on one of those clocks, there are derived clocks, e.g., the "other side" of a decim, interp, clock sync block.
Real or system clocks (system_clock, steady_clock) can be used as metadata where absolute time or rate needs to be filled in. In a streaming sample flowgraph, they aren't relevant for rate. For message-oriented data, they would be the main time source.
Even where using a single reference rate, there would be multiple time points throughout the flowgraph, taking latency into account.
@dkozel brought up the idea of defining an SDRClock class that I think is much better than using one of the existing classes. A typical clock class has a statically defined epoch, a template clock frequency parameter, and an int64_t number of ticks since the epoch.
We would want the user to be able to define the epoch and clock frequency so that we could work with SDRs. A caveat with std::chrono clocks is that the tick rate is a compile-time constant, so we don't want to use the sample_rate for that. We should pick a tick rate that "covers" almost all of the cases. I would opt for picoseconds, since that is what VITA 49 uses.
I think that we would want the following fields:
We would provide functions such as:
- `add_samples(int samples)` -> Update the number of ticks given the sample_rate
- `set_epoch(std::chrono::clock clk, int sample_number)` -> Update the epoch to a new value at a particular sample
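A minimal sketch of what such a class could look like (all names here — `SdrClock`, `add_samples`, `set_epoch`, `elapsed` — are illustrative assumptions, not a settled API):

```cpp
#include <chrono>
#include <cstdint>
#include <ratio>

// Hypothetical SDR clock: runtime epoch and sample rate, exact tick count.
class SdrClock {
public:
    SdrClock(std::chrono::system_clock::time_point epoch, double sample_rate)
        : epoch_(epoch), sample_rate_(sample_rate) {}

    // Advance the clock by a number of received samples (exact, no rounding).
    void add_samples(std::int64_t samples) { sample_count_ += samples; }

    // Re-anchor the epoch at a given sample number, e.g. after a retune or overflow.
    void set_epoch(std::chrono::system_clock::time_point epoch,
                   std::int64_t sample_number)
    {
        epoch_ = epoch;
        sample_count_ = sample_number;
    }

    // Elapsed time since the epoch, converted to picoseconds on demand.
    std::chrono::duration<std::int64_t, std::pico> elapsed() const
    {
        const double ps = static_cast<double>(sample_count_) * 1e12 / sample_rate_;
        return std::chrono::duration<std::int64_t, std::pico>(
            static_cast<std::int64_t>(ps));
    }

    std::int64_t sample_count() const { return sample_count_; }

private:
    std::chrono::system_clock::time_point epoch_;
    double sample_rate_;
    std::int64_t sample_count_ = 0;
};
```

The sample count stays exact; only the `elapsed()` conversion rounds to picoseconds.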
The `std::chrono` library has a `clock_cast` function that would allow us to convert SDR time to most of the other clock formats.
Specifically, I had been browsing the std::chrono Clock requirements (https://en.cppreference.com/w/cpp/named_req/Clock) after digging into the `time_point` type, trying to find reasonable ways of storing a duration, a time, and a datetime and distinguishing between them.
This is/was just a musing, not even rising to a full suggestion. I think we need to ask people what they use time for in SDR applications and see what fits. Mostly I've seen it fall into either USRP master clock ticks since an arbitrary start (so just tracking the relative passing of time as determined by the ADC/DAC) or absolute (and often extremely precise) time in UTC based on a GPSDO or better.
I had actually been imagining setting the tick rate to the sample rate of the SDR, so that the clock functions the same as a USRP's `time_spec_t`. This has the benefit that there are no rounding/precision errors in the timestamps. Using the LTE rate of 122.88 MS/s, a 1-picosecond-granularity timestamp will have a cyclical timing error with a period of 48 samples, as the sample period is 8138.0208333... picoseconds. The sample clock jitter may swamp that, but it doesn't seem necessary to create the issue in the first place.
Must a clock be fixed at compile time? Is that a libpmt limitation? I'd imagined dynamically creating them, and potentially changing Clock objects as the SDR reconfigures.
How will time back-propagate from sinks? Something like an audio or network sink (with backpressure) might be the worst case.
You are correct about picoseconds. I realized this shortly after posting. The period is required to be a compile-time value: https://en.cppreference.com/w/cpp/named_req/Clock.
We could solve it by going much lower than picoseconds. I did some quick back-of-the-envelope calculations, and something on the order of 10^-21 s would probably be more appropriate. I can dig around and see if there is a more dynamic way of representing the period, but I'm not seeing it.
@willcode I'm not sure I understand why time would need to propagate backwards. Generally speaking in streaming, I would think that the time would be based upon sample count, without any regard to real world clock time.
In the case where we have an SDR feeding an audio sink and the sink can't keep up, the SDR will have to drop samples. In that case, I would think that the SDR would update the time that flows to downstream blocks.
Can you help me understand a case where a previous block would need to know timing from a sink block?
A simple case would be a signal generator fed into an SDR sink. The upstream blocks may or may not need absolute time. How about something that produces a burst every N seconds? I'm thinking about how the whole flowgraph deals with sample rate and time if it's not orchestrated "from above".
I'm still not understanding. The way I'm thinking about it (which may have flaws) would be that every source is responsible for setting its own time. That could come from a GPSDO for an SDR, or it could be wall-clock time for a message strobe, or any other method that is appropriate.
If a processing block takes a single input, then generally it would just pass the time tags as is to its output. In the case of a filter, it would add a delay. For something like timing sync, it would change the timing reference and probably add a fractional sample offset.
For processing blocks that take multiple inputs, we would have to have a convention for how to handle time. I would vote for something like time propagates from the first input and blocks in general ignore any timing offset between inputs. (If a user needed to time align inputs, then we could provide a block that does that).
A sink block would accept the time tags from the input and do whatever it needs to with them.
In this way time would propagate through the graph from sources to sinks. With some fairly simple rules, I think we could make it easy for users to understand how time tags would change as data moves through the flowgraph.
If I had a signal generator feeding into an SDR sink and a file sink, I don't think I would want the SDR sink to have any impact on the timing of the signal generator. If it did, it would impact my file sink in ways that aren't predictable.
I'm probably putting my comments in the wrong place ... this Issue is more about representation of time and I'm thinking of orchestration of sample rate. Anyways ...
In the flowgraph `Signal Source -> USRP Sink`, backpressure is from the sink, and the source has no time source available. But the `Signal Source` needs to know the sample rate. Of course, we could express frequencies in terms of radians/item, but we at least pretend to users that there is a sample rate at the `Signal Source`. An OFDM or DVB transmitter would be a more complex example. Absolute time isn't important in these examples.
Still wondering: if a source block's only source of real time, and its actual rate, come from downstream, is it just not allowed to think in terms of time? Or does it have one reference for time to, say, send a burst (system clock), at a rate determined by backpressure? The two won't necessarily match... maybe it doesn't matter.
I think I'm seeing it now. It sounds like you are thinking about an SDR sink that knows the sample rate and time and a signal generator that doesn't. The SDR would provide that information to the source. I am thinking of the problem in terms of time flowing with the data as a tag.
I agree that would be really complicated and you would definitely run into cases with a mismatch coming from input and output. Once we get the format figured out, we definitely need to think through the best ways to integrate it into GR.
I did some more research on the issue of the ratio being a compile-time constant, and I am revising my opinion. The key is distinguishing between a `std::chrono::time_point<Clock>` and a `std::chrono::duration<Rep, Period>`. The period of the `duration` has to be known at compile time, but the internals behind the `time_point` can be dynamic.
So a `time_point` could be initialized with a frequency and epoch. We would store the exact number of samples seen without any rounding needed. We would want to define an `add_samples` function that would directly increment the sample count without worrying about the time duration covered.

The `duration` object would have no concept of `sample_rate` or `samples`. Any `duration` produced from our `time_point` would be converted to picoseconds since the epoch. In an application where someone wanted to regularly convert the `time_point` objects to `duration` objects (such as printing the time that has elapsed), there would be no accumulated rounding error, because the `time_point` objects would be exact.
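A sketch of that revised idea (the `SampleTimePoint` name is hypothetical, and `__int128` is a GCC/Clang extension used here to avoid intermediate overflow): store the exact sample count with a runtime rate, and convert to a picosecond `duration` only when asked, so the rounding happens once per query rather than accumulating:

```cpp
#include <chrono>
#include <cstdint>
#include <ratio>

// Hypothetical sketch: exact sample count plus a runtime sample rate.
// Conversion to a picosecond duration rounds once, at the boundary.
struct SampleTimePoint {
    std::int64_t samples = 0;     // exact count since the epoch
    std::int64_t sample_rate_hz;  // runtime value, e.g. from the SDR

    std::chrono::duration<std::int64_t, std::pico> since_epoch() const
    {
        const std::int64_t whole_s = samples / sample_rate_hz;
        const std::int64_t rem = samples % sample_rate_hz;
        // __int128 (GCC/Clang extension) keeps rem * 10^12 from overflowing.
        const std::int64_t frac_ps = static_cast<std::int64_t>(
            static_cast<__int128>(rem) * 1'000'000'000'000LL / sample_rate_hz);
        return std::chrono::duration<std::int64_t, std::pico>(
            whole_s * 1'000'000'000'000LL + frac_ps);
    }
};
```

Incrementing `samples` is always exact; only `since_epoch()` rounds, so repeated conversions never drift.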
It would be very convenient to be able to represent timestamps in a pmt. I'm thinking that the correct way to do it would be with a `std::chrono::time_point`. However, this requires that we pick a clock to use. We could leave it as a template parameter, or we could allow for multiple choices, i.e. `system_clock`, `utc_clock`, `high_resolution_clock`. Please leave your thoughts.