Closed cmeyer closed 2 months ago
For the cycling cache idea, what is the specific requirement or use case it is trying to achieve? The change from Poisson to Gaussian noise to make the performance O(1) is pretty self-explanatory.
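The Poisson-to-Gaussian swap mentioned above can be sketched as follows. This is a minimal illustration, not the simulator's actual code: for large lambda (pixel intensity in counts), a Poisson variate is well approximated by a normal with mean lambda and standard deviation sqrt(lambda), and NumPy's normal sampling cost does not grow with lambda. The function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng()

def poisson_noise(image: np.ndarray) -> np.ndarray:
    """Exact Poisson sampling; cost grows with lambda (pixel intensity)."""
    return rng.poisson(image).astype(np.float64)

def gaussian_noise(image: np.ndarray) -> np.ndarray:
    """Gaussian approximation N(lambda, sqrt(lambda)); per-pixel cost is
    independent of exposure. Reasonable once lambda is large (roughly > 20)."""
    return image + rng.normal(0.0, 1.0, image.shape) * np.sqrt(image)

# Hypothetical long-exposure frame: 1000 counts per pixel.
counts = np.full((256, 256), 1000.0)
noisy = gaussian_noise(counts)
```

For short exposures (small lambda) the approximation breaks down, so a real implementation might fall back to true Poisson sampling below some threshold.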
Real acquisition can happen as fast as ~1ms, so 80ms for noise (better than 250ms for noise!) is an improvement, but still not fast enough for the simulator to run into the same performance bottlenecks as real acquisition. The goal is to have the noise add no more than a few ms per image. The maximum view rate target is around 50 fps, or 20ms per frame. So something under 50ms, preferably closer to 20ms, would be a good rate for the simulator. As a workaround, I can just disable noise completely in cases where I need to do performance testing - so this issue is generally a low priority task; but it might be nice to have eventually.
Closing this as complete. See follow-up issue
Poisson noise performance depends on "lambda", so as the exposure gets longer, the function takes longer to calculate.
Investigate whether there is a way to make the performance independent of exposure, or perhaps reuse a set of calculations randomly chosen from the last N frames, or calculate on a thread without blocking `acquire_image`. This is important for performance testing using the simulator.
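The threading idea could look something like the sketch below: a background worker keeps a small queue of pre-computed noise frames so the acquisition path never blocks on noise generation. The `NoiseSupplier` class and its methods are hypothetical names, not part of the simulator; it uses Gaussian noise for illustration.

```python
import queue
import threading

import numpy as np

class NoiseSupplier:
    """Hypothetical sketch: a daemon thread pre-computes noise frames into a
    bounded queue; the acquisition thread pops a ready frame without blocking."""

    def __init__(self, shape: tuple, depth: int = 4) -> None:
        self._shape = shape
        self._queue: queue.Queue = queue.Queue(maxsize=depth)
        self._rng = np.random.default_rng()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._fill, daemon=True)
        self._thread.start()

    def _fill(self) -> None:
        # Keep the queue topped up until asked to stop.
        while not self._stop.is_set():
            frame = self._rng.normal(0.0, 1.0, self._shape)
            try:
                self._queue.put(frame, timeout=0.1)
            except queue.Full:
                pass  # queue already full; retry after checking the stop flag

    def next_noise(self) -> np.ndarray:
        # Non-blocking pop; fall back to synchronous generation if the
        # worker hasn't produced a frame yet.
        try:
            return self._queue.get_nowait()
        except queue.Empty:
            return self._rng.normal(0.0, 1.0, self._shape)

    def close(self) -> None:
        self._stop.set()
        self._thread.join()
```

One caveat with this design: if frames are consumed faster than the worker produces them, the fallback path still pays the synchronous cost, so it would likely be combined with the frame-reuse idea.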
Added: What about cycling through 17 noise images (or another number unlikely to be a factor of our most common acquisition dimensions)? This would involve some sort of cache keyed on the data shape, plus some mechanism to garbage collect noise that hasn't been used in a while. This can be done as another PR.
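The cycling-cache idea might be sketched like this: pre-compute a fixed number of noise frames per data shape (17, chosen because it is unlikely to divide common acquisition dimensions, so the repetition doesn't visibly align), cycle through them on each request, and periodically evict shapes that haven't been requested recently. The class and method names are hypothetical, not the simulator's API.

```python
import time

import numpy as np

class CyclingNoiseCache:
    """Hypothetical sketch of the cycling cache: N pre-computed noise frames
    per data shape, cycled in order, with age-based garbage collection."""

    def __init__(self, cycle_length: int = 17, max_age_s: float = 60.0) -> None:
        self._cycle_length = cycle_length
        self._max_age_s = max_age_s
        self._rng = np.random.default_rng()
        # shape -> (list of frames, next index, last-used timestamp)
        self._cache: dict = {}

    def get_noise(self, shape: tuple) -> np.ndarray:
        frames, index, _ = self._cache.get(shape, (None, 0, 0.0))
        if frames is None:
            # First request for this shape: pay the full cost once.
            frames = [self._rng.normal(0.0, 1.0, shape)
                      for _ in range(self._cycle_length)]
            index = 0
        frame = frames[index]
        self._cache[shape] = (frames, (index + 1) % self._cycle_length,
                              time.monotonic())
        return frame

    def garbage_collect(self) -> None:
        # Drop cached noise for shapes not requested within max_age_s.
        now = time.monotonic()
        stale = [s for s, (_, _, t) in self._cache.items()
                 if now - t > self._max_age_s]
        for shape in stale:
            del self._cache[shape]
```

After the first request for a given shape, every subsequent request is a dictionary lookup plus an index increment, independent of exposure; the trade-off is that the same 17 noise patterns repeat.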