LuckyOne09 closed this issue 7 months ago.
For the immersive distorted dataset, we want to keep all the original pixels (information) from the distorted image in the undistorted results.
The scale was picked by manual experimental observation (decreasing from 0.5). It is roughly the largest scale at which the OpenCV API keeps all the distorted pixels inside the undistorted image. (This could be improved with an automatic script, e.g., detecting the four corners, or using OpenCV's estimated camera matrix for the distortion; I have not tried it.) The goal of the scale is that our model can be trained with all the pixels from the distorted images by grid warping in an end-to-end fashion, and we can evaluate directly on the same images as other methods.
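Something like the following could estimate the scale automatically (untested, and only an illustrative sketch assuming the OpenCV fisheye model; the helper name is hypothetical, not part of the repo):

```python
import cv2
import numpy as np

def estimate_focal_scale(K, D, image_size, balance=1.0):
    """Estimate how much to shrink the focal length so the undistorted
    image keeps every pixel of the fisheye source.

    K, D       : original 3x3 intrinsics and 4x1 fisheye distortion
    image_size : (width, height) of the distorted image
    balance    : 1.0 keeps all source pixels; 0.0 crops to valid ones
    """
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, image_size, np.eye(3), balance=balance)
    # The ratio of the new focal length to the original plays the same
    # role as the hardcoded focal_scale in immmersivescaledict.
    return new_K[0, 0] / K[0, 0]
```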
In undistorted.py, we set the scale to 0.5. This leaves some black pixels around the boundary (so we mask them out in training). It is hard to compare against the same ground truth as other methods in this mode, but since this is only the demo, we chose it for the demo. I believe, though I have not tried it, that a larger scale could also lead to good demo results, as the center object would be larger.
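For reference, such a boundary mask could be built along these lines (an illustrative sketch, not the exact code in the repo, assuming the fisheye model and that the scale multiplies the focal length of the new camera matrix):

```python
import cv2
import numpy as np

def boundary_mask(K, D, image_size, scale=0.5):
    """Mark which undistorted pixels actually receive source data.

    Remapping an all-white image leaves zeros exactly at the black
    boundary pixels that have no corresponding distorted pixel.
    """
    w, h = image_size
    new_K = K.copy()
    new_K[0, 0] *= scale  # scale fx
    new_K[1, 1] *= scale  # scale fy
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    white = np.full((h, w), 255, np.uint8)
    valid = cv2.remap(white, map1, map2, interpolation=cv2.INTER_LINEAR)
    return valid > 0  # True where real image content exists
```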
Thank you for your quick reply! It is very helpful. However, I have another question regarding the undistorted image: I noticed that its edges are significantly stretched. I'm concerned about whether this affects the overall quality of the results.
Your undistorted images look like what we have. The stretching may have some bad effects, but this undistorted image is only used for the structure-from-motion points.
Our model's ground-truth image for training is still the distorted image in this mode. We apply a distortion grid flow to the rendered image in the render pipeline, although the grid sampling introduces some inevitable resampling error.
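Roughly, the re-distortion step looks like this (an illustrative sketch assuming the grid flow is a precomputed per-pixel sampling grid in normalized [-1, 1] coordinates; the names are hypothetical, not the repo's API):

```python
import torch
import torch.nn.functional as F

def redistort(rendered, grid):
    # rendered: 1 x C x H x W undistorted render from the pipeline
    # grid    : 1 x H x W x 2 sampling grid in [-1, 1] coordinates,
    #           telling each distorted pixel where to sample the render
    return F.grid_sample(rendered, grid, mode='bilinear',
                         padding_mode='zeros', align_corners=True)
```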
I get it! Thank you for your guidance!
Hello! First, thank you for your contributions and for sharing your work with the community. I've been exploring the script `pre_immersive_distorted.py` and came across the parameter `focal_scale` being hardcoded within the `immmersivescaledict`. While experimenting with fisheye data in the immersive dataset, I'd like to understand how to obtain this scale parameter dynamically. Could you please provide guidance on how to calculate the `focal_scale` value? This would greatly assist me in processing additional fisheye data independently and in utilizing your excellent STGS for further experimentation. Looking forward to your response and eager to explore the results!
Thank you once again for your efforts.