Saijin-Naib opened this issue 3 years ago (status: Open)
Further experiments have led me to this arrangement of pre-processing steps. I think they strike a good balance between increasing tiepoint extraction and aesthetic improvements to the data.
I'm noticing that pre-processing the data seems to reduce the reported Error in ODM by orders of magnitude, which is quite interesting.
Please find Report.pdf attached for original data:
report.pdf
Please find Report.pdf attached for pre-processed data:
report.pdf
I'd love to contribute. I'd need to know a few things first:
As for the approach, I think pre-processing the images on the fly and keeping them in memory will slow things down. Images should be pre-processed and kept in a separate folder for reference during feature matching. Let me know your thoughts.
I'm not super well-versed in this, but my thoughts are these:
If I may suggest, any image enhancement for feature detection should be done on-the-fly (without storing the images onto disk). It will be faster (and much less complicated).
I'll defer to your suggestion as you guys have built it 😄 For image pre-processing, OpenCV can be used to increase sharpness and contrast, provided it won't be an extra dependency. I'll share some code snippets/articles here on how it can be done.
A couple of articles: increasing contrast using OpenCV, and sharpening an image (link); both are pretty simple. Let me know your thoughts... Can't wait to get started on this and get better results :)
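To make the idea concrete, here's a minimal sketch of the two operations mentioned (linear contrast stretch and kernel sharpening). OpenCV would normally do this with `cv2.convertScaleAbs`/`cv2.filter2D`; I've written it in plain NumPy so the snippet has no extra dependency. The percentile values and kernel are illustrative choices, not anything ODM currently uses:

```python
import numpy as np

def stretch_contrast(img, low_pct=2, high_pct=98):
    """Linear contrast stretch: map the [low, high] percentile range to [0, 255]."""
    lo, hi = np.percentile(img, (low_pct, high_pct))
    out = (img.astype(np.float64) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(out, 0, 255).astype(np.uint8)

def sharpen(img):
    """3x3 sharpening kernel applied to a grayscale image.

    Plain-NumPy stand-in for cv2.filter2D with the same kernel.
    """
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float64)
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Since the kernel sums to 1, flat regions pass through unchanged while edges get boosted, which is the behavior you want for feature extraction.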
This line of thought was recently corroborated by a user's test dataset consisting of mostly open snow, on which they performed automatic contrast and adaptive histogram equalization. Because the pipeline didn't restrict the pre-processed images to matching only, their approach created severe artifacting and "tiling" in the output, but it did greatly improve their reconstruction.
I've been curious about, and researching, using a First-Order PCA on the images to match on the most significant features. This should be sensor and dataset agnostic.
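For clarity, here's a sketch of what I mean by a first-order PCA on the bands: treat each pixel as a 3-vector, find the direction of maximum variance across the image, and project onto it to get one channel for matching. This is just an illustration of the idea, not ODM code:

```python
import numpy as np

def pca_intensity(rgb):
    """Project an HxWx3 image onto its first principal component.

    Returns a single-channel float image along the direction of maximum
    variance across the three bands -- a sensor/dataset-agnostic
    "intensity" to feed the feature extractor (sketch only).
    """
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)            # center each band
    cov = np.cov(pixels, rowvar=False)       # 3x3 band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    pc1 = eigvecs[:, -1]                     # largest-variance direction
    return (pixels @ pc1).reshape(h, w)
```

Because the component is computed per image, it adapts to whatever band carries the most information in that sensor/scene, which is the "agnostic" property I'm after.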
@smathermather has suggested doing an IHS (Intensity/Hue/Saturation) transform and then using the Intensity channel.
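In the common linear formulation of the IHS transform, the intensity channel is just the per-pixel mean of the three bands; feature extraction would then run on that single channel. A one-liner sketch of the suggestion (illustrative, not ODM code):

```python
import numpy as np

def ihs_intensity(rgb):
    """Intensity channel of a linear IHS (Intensity/Hue/Saturation) transform.

    Intensity here is the per-pixel mean of the R, G, B bands, returned
    as a float image suitable for single-channel feature extraction.
    """
    return rgb.astype(np.float64).mean(axis=2)
```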
In cruising around the literature, it seems like both have some prior art.
@theoway
Not sure if you've had the bandwidth to investigate any of the above.
What are your thoughts?
@Saijin-Naib Hey! I won't be able to research, however, can definitely help in dev 😄 Do share the relevant literature here, I'll give them a read.
Now I have to go back and find the papers, I've un-pinned the tabs haha
In the past, I've done a bit of a hacky pre-processing of images by sharpening and auto-contrasting in a tool like XNConvert. I was reminded of this today working with a particularly reticent dataset.
I think it'd be great if we (optionally? I don't think always would hurt, though) pre-processed the input images for feature matching/extraction only, to help stitch datasets with contrast/exposure/sharpness issues, and maybe even net some better matching/extraction on good data, as well!
Ideally, these would be done non-destructively (in memory on the fly? cached on disk?) for the matching, and once matching is completed, the originally submitted images would be passed to the pipeline to colorize the point cloud and generate the orthophoto and textured models.
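The routing described above (enhanced copies for matching, originals for everything after) could be sketched like this. All names here (`preprocessed_path`, `images_for_stage`, the stage strings) are hypothetical, not ODM's actual API:

```python
import hashlib
from pathlib import Path

def preprocessed_path(original: Path, cache_dir: Path) -> Path:
    """Deterministic on-disk cache location for an enhanced copy of `original`.

    A short hash of the source path avoids collisions between images with
    the same filename in different folders. (Hypothetical helper.)
    """
    digest = hashlib.sha1(str(original).encode()).hexdigest()[:12]
    return cache_dir / f"{original.stem}_{digest}{original.suffix}"

def images_for_stage(originals, cache_dir, stage):
    """Matching reads the cached, pre-processed copies; every later stage
    (point-cloud colorization, orthophoto, texturing) reads the originals,
    so the enhancement never touches the delivered products."""
    if stage == "matching":
        return [preprocessed_path(Path(p), Path(cache_dir)) for p in originals]
    return [Path(p) for p in originals]
```

A disk cache like this would also sidestep the "tiling" artifact mentioned earlier, since the enhanced pixels can never leak into the orthophoto or textures.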