Open clonejo opened 11 months ago
This also means we get more exposure for each bit of train, and thus reduce noise.
Indeed, in your specific case we actually have enough raw images to get rid of the obstructions. However, IMO this does not hold in the general case.
Anyways, feel free to experiment with this and to submit a PR! I will not implement it.
This also means we get more exposure for each bit of train, and thus reduce noise.
Noise yes, blur no.
Hmm, I also have problems with obstructions, but rather than a whole bunch of extra DSP, my thinking was to just use offset boxes that get lined back up based on the time delay computed from the train's speed:
I wonder if that would be a bit easier to work into the existing logic and/or a little bit more generalizable to various situations?
It should be easy enough to adapt the stitching logic so that one can choose which image area the patches for stitching are taken from.
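To illustrate the "offset boxes" idea: if the train moves at a known pixel speed, the same slice of train appears in a later frame shifted horizontally by speed × delay, so a patch can be sampled from a different (unobstructed) vertical band and lined back up. A minimal sketch, with hypothetical names and layout (not trainbot's actual API):

```go
package main

import "fmt"

// Rect is a simple crop rectangle (hypothetical, for illustration only).
type Rect struct{ X, Y, W, H int }

// patchSource computes where in a frame captured frameDelay frames later
// the same slice of train can be sampled, assuming it moves at
// speedPxPerFrame pixels per frame. altY selects an alternative,
// unobstructed vertical band instead of the base band.
func patchSource(base Rect, frameDelay, speedPxPerFrame, altY int) Rect {
	return Rect{
		X: base.X + frameDelay*speedPxPerFrame, // shifted by speed * delay
		Y: altY,                                // sample from the offset band
		W: base.W,
		H: base.H,
	}
}

func main() {
	base := Rect{X: 100, Y: 40, W: 32, H: 64}
	// One frame later, at 12 px/frame, the same slice sits 12 px further along.
	fmt.Println(patchSource(base, 1, 12, 80)) // {112 80 32 64}
}
```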
Currently, trainbot stitches slices of the moving train together. This works well when the view within the crop rectangle is unobstructed and the exposure/lighting is uniform along the X axis.
Unfortunately, I have to deal with some obstruction from foliage:
One can kinda reduce this by narrowing the crop rectangle, but I am already quite limited.
But since each piece of the train is exposed at least twice (with the current movement detection code, even three times), we can stack the exposures. Here is an example using a better camera, with three frames stacked manually in GIMP and then fused with enfuse, compared to a single frame:
We could even pick out the parts that don't change between frames, and create a non-rectangular mask for picking only the unobstructed parts.
Possible implementation: we pretty much have all the pieces already; we just have to save each frame placed into a separate, otherwise transparent image file, and then run `enfuse` over all the files.