seeing-things / track

Automates tracking of targets with a telescope using ephemeris (TLE files) and/or optical tracking.
MIT License

Camera latency affects CameraTarget stability #191

Open bgottula opened 4 years ago

bgottula commented 4 years ago

When running the controller with only the CameraTarget on fixed stars I was seeing some significant oscillation. I was able to confirm that this is due to latency from the time a frame is captured with the camera sensor until the time when the mount's position is queried to do the necessary coordinate transforms. I tossed the mount positions into a queue and found that I had to use a mount position that was two control cycles old (0.2 s) to get good performance.
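The delayed-mount-position workaround described above could be sketched as a small ring buffer; a minimal illustration (the class name and control rate are assumptions, not code from this project):

```python
from collections import deque

class DelayedMountPosition:
    """Buffer recent mount position readings so the coordinate transform
    can use a reading that matches the (older) camera frame.

    delay_cycles is the number of control cycles of latency to compensate
    for (2 cycles = 0.2 s at the 10 Hz control rate implied above).
    """

    def __init__(self, delay_cycles=2):
        self._history = deque(maxlen=delay_cycles + 1)

    def push(self, position):
        """Record the mount position queried during this control cycle."""
        self._history.append(position)

    def get(self):
        """Return the position from delay_cycles cycles ago, or the oldest
        available reading while the buffer is still filling at startup."""
        return self._history[0]
```

Each cycle pushes the freshly queried mount position and uses `get()` for the transform, so the position paired with a frame is the one from two cycles earlier.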

More thinking required here, but I'm concerned that the camera latency may be a function of various camera configuration settings such as exposure time, video mode, the number of frames dropped recently if we aren't grabbing them as fast as the camera is producing them, etc. So we either need a reliable way to measure camera latency at startup or a reliable formula to predict the latency as a function of settings. Neither approach sounds particularly easy.

Some ideas to think about further:

This problem may only affect camera-only tracking. If the camera is used solely in sensor fusion as a means of estimating bias terms for the TLE target, then the latency is probably less important. But there will probably still be times when we want to use camera-only target tracking.

I noticed recently that the code in CameraTarget does not get the mount position as near in time as possible to when a frame is grabbed from the camera. This is likely making the problem worse.

bgottula commented 4 years ago

I ran an experiment showing that when capturing from the camera but not calling ASIGetVideoData fast enough to keep up with the frame rate, the driver and/or hardware will buffer a few frames. In my test I called ASIGetVideoData with a timeout of -1 on the first call to block until at least one frame was received, and then called it with a timeout of 0 in a while loop until it returned a timeout error status code. With the settings I was using, there were nearly always 2 or 3 frames waiting on each iteration of the outer loop. Whether 2 or 3 frames were waiting seemed random, but I suspect it had more to do with the specific exposure time and outer loop period I was using for the experiment. I applied a patch to ASICamera.get_frame() such that it rejects any stale waiting frames and always returns the most recent frame available within the specified timeout. That should help things some.
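The drain-the-buffer pattern described above could look roughly like this; `camera.get_video_data(timeout_ms)` is a hypothetical wrapper around ASIGetVideoData (timeout -1 blocks until a frame arrives, timeout 0 raises immediately if none is waiting), not the actual patched method:

```python
def get_latest_frame(camera, timeout_ms=-1):
    """Return the most recent frame, discarding any stale frames the
    driver buffered while we weren't calling ASIGetVideoData fast
    enough to keep up with the frame rate."""
    # Block (or wait up to timeout_ms) until at least one frame arrives.
    frame = camera.get_video_data(timeout_ms)
    # Drain any additional frames already buffered; keep only the newest.
    while True:
        try:
            frame = camera.get_video_data(0)
        except TimeoutError:
            return frame
```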

bgottula commented 4 years ago

I did further experiments to determine how much residual latency remained after applying the patch described in the previous comment. To measure latency I pointed the camera at a clock such that when the exposure actually takes place the time is embedded in the frame's pixels. Then when the frame's data is returned from ASIGetVideoData I recorded a timestamp in software and annotated the frame with this as well. The latency is the difference between the annotation and the time shown on the clock as viewed by the camera.
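The measurement reduces to differencing two timestamps per frame and summarizing over many frames; a minimal sketch (the function name and sample format are assumptions):

```python
import statistics

def summarize_latency(samples):
    """Summarize camera latency measurements.

    Each sample pairs the time embedded in the frame's pixels (the clock
    as viewed by the camera during the exposure) with the timestamp
    recorded in software when ASIGetVideoData returned the frame. Both
    times must come from the same tightly synchronized clock.
    """
    latencies = [annotated - shown for shown, annotated in samples]
    return statistics.mean(latencies), statistics.stdev(latencies)
```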

For this to work the two clocks need to be tightly synchronized. I opted to use a C++ program written by @jgottula (https://gist.github.com/bgottula/f2b87e5c60f74381ad7d8af01f2cba8e) such that the same PC clock can be used as both the source for the time viewed by the camera and as the clock used to annotate the frames with the timestamp. This worked reasonably well. A few downsides:

I managed to get something working. Based on some brief tests I made the following observations:

This wasn't terribly conclusive. Fortunately I don't think high accuracy camera timing is necessary when tracking objects with accurate TLEs available, as noted in the description. I think I will put this on hold for now and come back to it another time.

bgottula commented 3 years ago

Turns out this is more important to fix than I thought since using optical tracking on fixed stars is essential to achieve accurate guidescope camera alignment and to achieve good focus on the main OTA camera.

bgottula commented 3 years ago

Another approach to measure camera latency relative to mount motion would be to do essentially a group delay measurement. To do this I would need to slew the mount in one axis back and forth in a sinusoidal pattern at a specific frequency. I could then compare the phase of this sinusoidal motion with the phase of the sinusoidal motion of a star detected in the camera frame. I could even do this at a couple of frequencies as a consistency check. The possible advantage of this approach is that it could be used with the hardware and software in a configuration that more closely matches what I use in practice. It could be done as a latency calibration step just before using optical tracking to align the guidescope and focus the main OTA.

I'd still rather avoid this level of extra complexity though... a simpler approach that is less sensitive to camera latency would be preferred. I suppose what I really need is to figure out the equivalent of reducing loop bandwidth for model predictive control.

bgottula commented 3 years ago

Some references on stability of model predictive control systems:

bgottula commented 3 years ago

Removing the blocker label because the SensorFusion target is now mature enough for normal usage. With this new target I am able to center on stars to allow guidescope-to-OTA alignment. Stability of tracking with CameraTarget is now a lower priority since it is not anticipated that it will be used frequently for satellite tracking going forward (except as part of the SensorFusion target, in which case the stability is less strongly dependent on the camera latency).