pothosware / SoapyRTLSDR

SoapySDR RTL-SDR Support Module
https://github.com/pothosware/SoapyRTLSDR/wiki
MIT License

Sampling Timing Lost on Tune #35

Closed fuzzyTew closed 5 years ago

fuzzyTew commented 5 years ago

The rtlsdr usually has the pleasant feature that sample time stays consistent across tuning, i.e. time * samplerate = samplenum.

SoapyRTLSDR prevents the user from using this timing information by automatically dropping buffers when retuning. Alternatives might include making the buffer drop optional, or accounting for the dropped samples so that the sample count stays consistent across the tune.

It's notable that dropping these buffers means losing useful data from before the tune happened. Additionally, buffers with the old tuning may still be accumulating despite the reset, if some slow page fault has left them hanging on the libusb queue.

guruofquality commented 5 years ago

So there is this resetBuffer thing in readStream: https://github.com/pothosware/SoapyRTLSDR/blob/master/Streaming.cpp#L464 and it's set every time the frequency is changed. I think it's an attempt to make a GUI application prettier by skipping samples during the tune operation.

That said, I don't really like this approach. Applications that want to skip these samples should really activate/deactivate the stream around the tune call. activate/deactivate are supposed to be lightweight compared to setup/close stream, so any drivers not doing that need to be corrected.
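For illustration, a minimal sketch of that pattern against the SoapySDR C++ API. The names `dev`, `rx`, and `retuneWithFlush` are placeholders, assuming the device and stream were already created elsewhere:

```cpp
#include <SoapySDR/Constants.h>
#include <SoapySDR/Device.hpp>

//Minimal sketch, assuming dev and rx came from Device::make() and
//setupStream() elsewhere; rx channel 0 is assumed.
void retuneWithFlush(SoapySDR::Device *dev, SoapySDR::Stream *rx, const double freq)
{
    dev->deactivateStream(rx);                //stop streaming around the tune
    dev->setFrequency(SOAPY_SDR_RX, 0, freq); //re-tune while stopped
    dev->activateStream(rx);                  //resume; samples now reflect the new tune
}
```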

So I support removing that buffer drop in general, or somehow making it optional. It's hackish, not documented, and not consistent across other hardware. I know rtl isn't nearly as complex as a usrp or bladerf, but if we can do better, or make soapyrtl more useful -- we should.

That said, I also like the idea of putting in a timestamp (timeNs) that stays consistent across drops, overflows, intentional drops, etc... Now rtl itself doesn't really timestamp the samples in the hardware, so we can't do anything if the samples are lost before they get into libusb, but SoapyRTLSDR can definitely account for drops it's responsible for.

So if that's what you were looking for in terms of feedback, I would welcome any sort of pull request to give rtl meaningful timestamps or sample timing.

fuzzyTew commented 5 years ago

@guruofquality, thanks for your prompt reply. If I were to find the time to craft a PR for this, do you have any tips on how to use or expand the API to tell the client which samples are from before the tune, and which are after? Presently the code behaves such that all samples delivered after the tune were received after the request was sent, but if the queued buffers aren't dropped, that won't be the case. Additionally, in something like pothos or gnuradio this information would be good for stream tags.

guruofquality commented 5 years ago

Let me just propose a few ways I would solve this on a platform that implemented the timestamp streaming:

1) Timed commands: If the hardware supported command time (some of the USRPs do), you would control the exact sample that the retune started at. Then you can just assume samples are good after that time + some worst-case time. I use the worst-case time because I don't think there is any hardware support to timestamp when the VCO tune actually completes.

2) Using getHardwareTime(): If you get the hardware time before and after the tune operation (queried asynchronously from the stream), then you know exactly which samples were definitely before the re-tune, and which samples were definitely after (see the sketch at the end of this comment). Any samples in-between are basically ambiguous; even though there is probably some room for slop, good samples will be thrown out because getHardwareTime() isn't instantaneous. This will work no matter what is queued up in the USB buffers.

-- For RTL, there are queues both in the hardware/driver/usb layer and in SoapyRTLSDR. The timestamp can at best represent the sample count as seen by the rx thread callback. So the issue with emulating this feature is that some number of samples are queued in the hardware when the timestamp is taken after re-tuning. This should be small because there is a callback thread actively pulling it out, and the queue in SoapyRTLSDR is really the only one that backs up. But that's still not perfect, given that it's not truly backed by a counter in the hardware that's also used to timestamp the samples.

3) Using activate/deactivate stream: If you deactivate the stream (this should also flush buffers), call a re-tune, and then re-activate, then all of the samples before deactivate are good for the old frequency, and any samples after the activate are good for the new frequency. Again there is slop here because the streaming is stopped and flushed, but it functions a lot like 2). And because of the timestamps, sample tracking is not really lost.

-- For RTL, the rx callback thread would continue during deactivate, but only to account for the sample timestamp. It wouldn't really do any work other than incrementing the counter and throwing out the samples. Again there is the same ambiguity here: since the stream is not truly stopped, any buffers in the hardware/driver/usb layer that didn't make it in time to get dropped during the re-tune will show up at activateStream. Again, this may be small, but still not perfect.

Since 1) isn't really an option for rtl, I think if the timestamp API is implemented, 2) and 3) will both be options. And then you could make a higher-level sort of wrapper that creates a gnuradio stream tag, or something like that, to indicate when the re-tune was safely completed.

So making the actively running stream keep its sample count (barring overflows) -- is easy. But timestamping samples that are asynchronous to the tune event without hardware support -- it's basically impossible to do flawlessly. I think options 2) or 3) would work on rtl with the discussed emulated timestamps -- but also provided there was some worst-case setup time added for the buffering hidden in the hardware layers.
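Here is a hedged sketch of option 2) from the application side, assuming the emulated timestamp support discussed above exists; `dev`, `rx`, `newFreq`, and `classifyAroundRetune` are all placeholder names:

```cpp
#include <SoapySDR/Constants.h>
#include <SoapySDR/Device.hpp>
#include <complex>

//Sketch only: classify readStream() buffers around a re-tune using
//getHardwareTime() snapshots taken before and after the tune call.
void classifyAroundRetune(SoapySDR::Device *dev, SoapySDR::Stream *rx, const double newFreq)
{
    const long long before = dev->getHardwareTime(); //last definitely-old time
    dev->setFrequency(SOAPY_SDR_RX, 0, newFreq);
    const long long after = dev->getHardwareTime();  //first definitely-new time

    std::complex<float> buff[1024];
    void *buffs[] = {buff};
    int flags = 0;
    long long timeNs = 0;
    const int ret = dev->readStream(rx, buffs, 1024, flags, timeNs);
    if (ret <= 0) return; //timeout or stream error
    if (timeNs <= before) {/* definitely the old frequency */}
    else if (timeNs >= after) {/* definitely the new frequency */}
    else {/* ambiguous window: discard */}
}
```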

xloem commented 5 years ago

I'm thinking #2 is the way to go. With #3, you have to rely on the nanoseconds time field to predict the sample number, and the sample period is unlikely to be a whole number of nanoseconds. For precise use with respect to samples, really the number of samples is needed rather than the time, which means getting every buffer. For precise timing, it seems it's left to us to develop good heuristics or get an external clock, as the rtl clock drifts around.

I'm thinking I might add some setting to disable the buffer dropping, and implement getHardwareTime() and setHardwareTime() on RTLSDR. I didn't see these API functions before. setHardwareTime() will let the user write their own timing heuristic to allow for worst-case adjustment or drift.

guruofquality commented 5 years ago

the sample period is unlikely to be a whole number of nanoseconds

Just FYI if this helps. Although multiple counts in nanoseconds can represent the same sample tick, that should be OK since we can still convert it back and forth and get the same count in ticks. The nanoseconds are just there to make the time base agnostic of the sample rate. And there are provided converters to help with this: https://github.com/pothosware/SoapySDR/blob/master/include/SoapySDR/Time.hpp#L18

Sample ticks in the driver, nanoseconds for the API, and sample ticks again in the application (although sometimes not). Should be possible if that was desired.
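For reference, a round trip through those converters; the rate and tick values here are arbitrary:

```cpp
#include <SoapySDR/Time.hpp>

const double rate = 2.048e6;       //sample rate in Hz, arbitrary for this example
const long long ticks = 123456789; //sample count kept by the driver
const long long timeNs = SoapySDR::ticksToTimeNs(ticks, rate);      //driver -> API
const long long ticksAgain = SoapySDR::timeNsToTicks(timeNs, rate); //API -> application
//ticksAgain == ticks, even though several ns values map to the same tick
```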

xloem commented 5 years ago

Great, I didn't see that either, I'll just do #3.

guruofquality commented 5 years ago

getHardwareTime() and setHardwareTime() on RTLSDR. I didn't see these API functions before. setHardwareTime() will let the user write their own timing heuristic to allow for worst-case adjustment or drift.

Just another note: getHardwareTime() would probably just return the current total count of rx samples converted into nanoseconds, and setHardwareTime() -- might not really have any meaning, since it's not like there is a real hardware register to set. But what it could do is save a time-delta that is added back to getHardwareTime() and to the timeNs in the readStream() function. Depends how pedantic you want to be.

Just wanted to mention it because, yes, RTL time will drift relative to other clocks, but get/setHardwareTime() should be relative to RTL sample time and drift with the RTL -- and not any other clock.
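A minimal sketch of that emulation, with hypothetical names (this is not the actual SoapyRTLSDR code): the rx callback thread bumps a sample counter, and setHardwareTime() just stores a delta that is added back on reads.

```cpp
#include <SoapySDR/Time.hpp>
#include <atomic>

//Hypothetical emulated clock, assuming the rx callback increments totalRxSamples.
struct EmulatedRtlClock
{
    std::atomic<long long> totalRxSamples{0}; //bumped by the rx callback thread
    long long timeDeltaNs = 0;
    double sampleRate = 2.048e6;

    long long getHardwareTime() const
    {
        //total rx samples converted to ns, plus the user-set delta
        return SoapySDR::ticksToTimeNs(totalRxSamples.load(), sampleRate) + timeDeltaNs;
    }

    void setHardwareTime(const long long timeNs)
    {
        //no real hardware register: save an offset that is added back on reads
        timeDeltaNs = timeNs - SoapySDR::ticksToTimeNs(totalRxSamples.load(), sampleRate);
    }
};
```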

xloem commented 5 years ago

I drafted an implementation in #36 where setHardwareTime() sets the absolute value returned by getHardwareTime(), which updates as more samples are received. Is this what you meant, or is it meant to hold a static delta adjustment?

guruofquality commented 5 years ago

I drafted an implementation in #36 where setHardwareTime() sets the absolute value returned by getHardwareTime(), which updates as more samples are received. Is this what you meant, or is it meant to hold a static delta adjustment?

There is more than one way to achieve this. Your way sounds like the best way to handle that. :-)

I just wanted to mention something about the drift.

xloem commented 5 years ago

@guruofquality thank you for merging my PR. I was just thinking about it and realized that the user has no direct way of using setHardwareTime() to set the time based on time information received over the radio, because they may not know how large the USB buffer queue is. Do you think that would be an issue, and if so, how might it be fixed? I was thinking setHardwareTime() could set a delta ... or it could set the time for the user-end of the queue rather than the device-end ... any thoughts?

guruofquality commented 5 years ago

That's what I was saying earlier: there is no real timestamp support, so there is slop in the accuracy given the number of samples queued in the rtl driver, usb stack, and in the rtl itself. I don't think there is a fix; it's just "software" emulated, so it's useful when it's useful, but never perfect.

The only other thing I can think of in terms of faking this is adding the activateStream() feature that uses the time to request a burst at a known time. Basically readStream() throws out samples until the requested time, based on the ticks. :-)
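That would follow the API's usual timed-activation convention, something like the sketch below (reusing the hypothetical `dev` and `rx` from earlier; SoapyRTLSDR would need to implement the flag):

```cpp
//Sketch: ask streaming to begin at a known time; the driver's readStream()
//would discard samples until the requested tick is reached.
const long long startNs = dev->getHardwareTime() + 100000000; //100 ms from now
dev->activateStream(rx, SOAPY_SDR_HAS_TIME, startNs);
```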

xloem commented 5 years ago

Sorry, I meant the SoapyRTL buffer, not the USB buffer. I was thinking that if, e.g., there was a radio pulse every 1.0000 seconds, you could synchronize your local time to that pulse, but because setHardwareTime() doesn't have an obvious way of coordinating the time with the content of the buffer (it's offset by the length of the queue at the moment), this would have to be done manually. If setHardwareTime() worked off the time of the next data to be fed to the user, rather than the last data received from the radio, it could fill that purpose, but then it would be harder to set the time accurately from another source.

I don't quite see how passing a start-time to activateStream would help here ... isn't each buffer labeled with timeNs anyway? It doesn't sound like a difficult feature to implement, though.

guruofquality commented 5 years ago

If it helps, most of the devices that have hardware time basically have this condition. The time in the stream could be ancient due to buffering (if you don't read out the samples). I think this is expected behaviour, though.

1) Often you see setHardwareTime() used to coordinate an initial timed transmission or a timed reception (with activate stream).

2) This can also be done with getHardwareTime(), which is used as the timebase for rx and tx (rather than changing the timebase, use the one you already have).

3) Or some trx apps use the first time seen by the rx stream to set the first transmit time.

there was a radio pulse every 1.0000 seconds, you could synchronize your local time to that pulse, but because setHardwareTime() doesn't have an obvious way of coordinating the time with the content of the buffer

Well, it sounds like you would need a second hardware timestamper at the front end of the queue as well. I guess that's certainly not going to happen. :-P

I think most of these apps and devices will have this problem. Time is only useful when it's relative to something else. I think in this case you just initially set the hardware time, and then keep track of the pulses and the ticks in the received buffers.

Who would guide the timing in your application at this point? Is it the RTL clock and its sample count? Or is it the once-per-second pulse, as the true keeper of time? And how should your application deal with the drift between the two (which you can now detect because of the rtl tick counts)?
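As an illustration of detecting that drift, under the assumption that the application records the stream timestamp of each received pulse (`pulseTimeNs` is a hypothetical name):

```cpp
//Sketch: pulseTimeNs[i] is the stream timeNs at which pulse i was observed.
const long long measuredNs = pulseTimeNs[1] - pulseTimeNs[0]; //per the RTL clock
const long long nominalNs = 1000000000;                       //1.0000 s pulse
const double driftPpm = 1e6 * double(measuredNs - nominalNs) / double(nominalNs);
```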

xloem commented 5 years ago

Thanks, it sounds like the current intended implementation is the correct one.

I was thinking the pulse would guide the timing; I imagined updating the hardware clock regularly to deal with drift; but I see now that's not the intended use (and could be done using getHardwareTime()).