Thank you @pmoulon for sharing these results with us!
thx a lot
@pmoulon I just found this paper doing a nice survey of different camera models; it could be a nice addition to your database of papers if it is not there already: http://www.merl.com/publications/docs/TR2011-069.pdf
@cdcseacave Thank you for the reference, I will have a look and add it to the database. :+1:
@kafeiyin00 I have released the code to extract N rectilinear images from a spherical panorama. The usage is the following:
Usage: openMVG_sample_pano_converter
[-i|--input_dir] the path where the spherical panoramic image are saved
[-o|--output_dir] the path where output rectilinear image will be saved
OPTIONAL:
[-r|--image_resolution] the rectilinear image size (default:1200)
[-n|--nb_split] the number of rectilinear image along the X axis (default:5)
[-D|--demo_mode] switch parameter, export a SVG file that simulate asked rectilinear
frustum configuration on the spherical image.
In order to process your scenes I copied all your panorama images to the pano directory and then ran the converter. The demo mode produces, for example:
./Linux-x86_64-RELEASE/openMVG_sample_pano_converter -i /home/pierre/Desktop/pano/ -o ./out -n 2
./Linux-x86_64-RELEASE/openMVG_sample_pano_converter -i /home/pierre/Desktop/pano/ -o ./out -n 4
Since I saw that 5 best fits your existing image reprojection, I ran the panorama conversion without the demo parameter and let it process the entire spherical image directory:
./Linux-x86_64-RELEASE/openMVG_sample_pano_converter -i /home/pierre/Desktop/pano/ -o ./out -n 5
:tada: Cannot wait to see someone test this on some Google Panorama sequence ;-) :tada:
@kafeiyin00 Did you make any experiments with the tool I provided?
[Case sensitive] Just tried it with one of my Ricoh datasets and got this error:
Did not find any jpg image in the provided input_dir
Just an issue with the Uppercase ;-)
@stephane-lb you're right using JPG or jpg is not the same thing ;-)
my solution (found on Stack Overflow):
for i in $(find . -type f -name "*[A-Z]*"); do mv "$i" "$(echo $i | tr A-Z a-z)"; done
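For anyone not on a Unix shell, here is a rough Python equivalent of that rename (only a sketch; it lowercases every file name under the current directory, so better run it on a copy of the images):

import os

# Lowercase every file name below the current directory
# (e.g. R0010001.JPG -> r0010001.jpg) so the converter finds the .jpg files.
for root, _dirs, files in os.walk("."):
    for name in files:
        lower = name.lower()
        if lower != name:
            os.rename(os.path.join(root, name), os.path.join(root, lower))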
I use pano2frame to resample: https://github.com/kafeiyin00/pano_tools/blob/master/pano2frame/pano2frame.cpp
One question about openMVG_sample_pano_converter: currently the vertical field of view is fixed. What change would be needed to add an optional argument that limits the vertical extent? This comes from cases where some objects in the panorama are very close to the camera; the current converter generates much more distortion for those nearby objects.
Did you try to modify the hIma value, or else change the focal length to limit the distortion? https://github.com/openMVG/openMVG/blob/70f9366c96b5f06cf6d3f97302c7f44e240a3833/src/openMVG_Samples/image_spherical_to_pinholes/main_pano_converter.cpp#L113
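For context on that suggestion: in a pinhole view the focal length (in pixels) and the horizontal field of view are tied together, so increasing the focal amounts to narrowing the sampled field of view, which limits the rectilinear stretching of close objects near the image borders. A small sketch of the relation (the 1200 px width matches the converter's default resolution; the field-of-view values are only illustrative):

import math

def pinhole_focal(image_width_px, horizontal_fov_deg):
    # Focal length (in pixels) of a pinhole view of the given width
    # covering the given horizontal field of view.
    return image_width_px / (2.0 * math.tan(math.radians(horizontal_fov_deg) / 2.0))

print(pinhole_focal(1200, 90))  # ~600 px: wide view, strong stretching at the borders
print(pinhole_focal(1200, 60))  # ~1039 px: narrower view, less distortion on close objects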
@yuyou Did you try the suggestion I made?
@pmoulon sorry for the late answer. Yes, I played with the argument and the result unfortunately remains the same. The hIma value affects the height of the output resolution, but I would like to have a way to change the vertical rectilinear image border, i.e. to specify the region of the original pano image, if doable.
Does it help if you add a rotation around the X axis?
Since there is no answer, I will close the thread. @yuyou
Hi !
Thanks for this code ! I tested it successfully on Ricoh Theta S camera images. (two fish-eye images) Here are some first results.
As already stated above, openMVG_sample_pano_converter does not recognize the .jpeg extension as JPEG; I had to rename the files to .jpg.
Hi @polto !
Great, which parameters did you use to convert the Ricoh pictures to "normal" pictures? Thanks.
Best Regards,
Stéphane
PS: @pmoulon would it be possible to detect the Ricoh or 360 camera in the EXIF and, in that case, launch this script directly? Or do you think that a SphericalCamera is needed (I probably still have a branch somewhere with this)?
Hi @stephane-lb !
Sorry for the long delay ! I wanted to do more tests.
First I did not get it right and tried only with -n 2. Then I tried -n 5, but it was not always able to reconnect all of the images; trying with -n 10 produces so much overlap that it almost always works. But of course it produces way more images and the processing time is much bigger.
openMVG_sample_pano_converter -i ./original_panos -o ./original_images -n 10 -r 2688
@pmoulon could it help to use less overlap but treat the produced images as a virtual rigid rig, with the sub-poses branch?
Regards,
@pmoulon If I want to try this using pictures taken by a Theta S: after I convert the spherical images to pinhole images, do I just use these pinhole images to run the incremental pipeline, or should I reconnect them into a large image? What camera model should I set? And about the focal: I see that there is a focal.txt after running openMVG_sample_pano_converter; does it mean that I need to set the pinhole images' focal to this value?
Yes, you can split them and use the focal value provided in the focal.txt file.
Once this test is complete I can guide you in order to check and implement the spherical camera support in IncrementalSfM.
Thank you very much @pmoulon! I have converted the spherical images to pinhole images, then run these pinhole images through SfM_SequentialPipeline.py, but the result does not seem right: I use 12 spherical images while the result reports 15 resections. Here is the link to the spherical images: https://drive.google.com/drive/folders/0B9KhOFLoD2ytejRnRlFlaFJKVjA and the converted pinhole images: https://drive.google.com/drive/folders/0B9KhOFLoD2ytcXU0X3FRdUE3dTQ Can you help me find out what happened?
About the focal, I set it in SfM_SequentialPipeline.py like the following:
pIntrisics = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_SfMInit_ImageListing"), "-i", input_dir, "-o", matches_dir, "-d", camera_file_params ,"-f",347]
but the result in the console shows that, even when I set the focal in the code, like focal_pixels = 347, in the end the focal value is still -1.
You must use the " around the parameter value too, so please edit your file to set ,"-f", "347"] and it will be good ;-)
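For reference, a corrected version of the call quoted above (a sketch using the variable names from that SfM_SequentialPipeline.py snippet; every element of the argument list passed to subprocess.Popen must be a string):

pIntrisics = subprocess.Popen(
    [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_SfMInit_ImageListing"),
     "-i", input_dir,
     "-o", matches_dir,
     "-d", camera_file_params,
     "-f", "347"])  # "347" quoted as a string, not the integer 347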
Yes, you are right! I got results that I think are correct, but I am a little confused: in the reconstruction_sequential output there are a number of .ply files. When I test with 12 spherical images there are 11 Resection .ply files, and when I test with another 8 spherical images there are 14 Resection .ply files. But when I load them in MeshLab I can see the right camera positions: the 12 spherical images give 12 green camera positions and the 8 spherical images give 8 green camera positions, so I think the results are right, but I don't know why. What I understood is that each Resection .ply corresponds to a view (a camera position), but the number of .ply files does not match what I thought.
Another question: how can I get the new camera position each time I capture a new image? I saw there is a sample in openMVG that can localize an image on a known reconstruction, so can I use two or three images to do a first reconstruction and then run the localization sample to get the new camera position each time?
The number of resection files can change from one dataset to another since images are added in batches of variable size. The number of images added at each resection stage is computed from statistics of 2D-3D visibility.
The image localization module allows you to localize an image in a scene, but it will not extend the scene with new 3D points, so you will not be able to use it to add new scene geometry.
Hoping my last sentence helps you; else, can you elaborate more?
Yes, I clearly understand what you mean, thank you very much @pmoulon!
@pmoulon It seems that openMVG treats each image individually. I downloaded the sample data and used the pano_convert tool and incremental SfM to do the reconstruction. The result is the same as yours. But when I look closer, I find that there are several points gathering together. Maybe it would be better to fix the X, Y, Z of each pano image in the bundle adjustment. Is there any plan for this?
You can now use directly the equirectangular images to perform SfM. Instructions are listed here: https://github.com/openMVG/openMVG/blob/db92617fea420363abcbad1cb3773706f75012ad/docs/sphinx/rst/software/SfM/SfM.rst#notes-about-spherical-sfm
Are the input files of openMVG_main_ComputeMatches and openMVG_main_ComputeFeatures the same? And why is the output dir of openMVG_main_ComputeMatches always invalid? What is the requirement? @pmoulon
When I run openMVG_main_ComputeMatches I always get the output below; what is wrong with this?
/home/alex/Documents/openMVG/openMVG/src/build_debug/Linux-x86_64-DEBUG/openMVG_main_ComputeMatches -i /home/alex/Documents/test_openMVG/output2/sfm_data.json -o /home/alex/Documents/test_openMVG/output4
You called :
/home/alex/Documents/openMVG/openMVG/src/build_debug/Linux-x86_64-DEBUG/openMVG_main_ComputeMatches
--input_file /home/alex/Documents/test_openMVG/output2/sfm_data.json
--out_dir /home/alex/Documents/test_openMVG/output4
Optional parameters:
--force 0
--ratio 0.8
--geometric_model f
--video_mode_matching -1
--pair_list
--nearest_matching_method AUTO
--guided_matching 0
--cache_size unlimited
Invalid regions.
Process finished with exit code 1 @pmoulon
Your parameters seem ok.
I would advise you to first check that everything runs fine on the OpenMVG-provided data
by running openMVG_Build/software/SfM/tutorial_demo.py.
Then you can try with your own data and see what could be wrong.
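In case it helps narrow down the "Invalid regions." error: openMVG_main_ComputeMatches looks for the feature/descriptor ("regions") files that openMVG_main_ComputeFeatures writes, so both steps should use the same sfm_data.json and the same output directory, and features must be computed before matches. A minimal sketch of that ordering, in the spirit of SfM_SequentialPipeline.py (the paths are taken from the log above and are only placeholders):

import os
import subprocess

OPENMVG_SFM_BIN = "/home/alex/Documents/openMVG/openMVG/src/build_debug/Linux-x86_64-DEBUG"
matches_dir = "/home/alex/Documents/test_openMVG/output2"
sfm_data = os.path.join(matches_dir, "sfm_data.json")

# 1. Detect features/descriptors; the regions files land in matches_dir.
subprocess.call([os.path.join(OPENMVG_SFM_BIN, "openMVG_main_ComputeFeatures"),
                 "-i", sfm_data, "-o", matches_dir])

# 2. Match from the same directory so the regions files are found.
subprocess.call([os.path.join(OPENMVG_SFM_BIN, "openMVG_main_ComputeMatches"),
                 "-i", sfm_data, "-o", matches_dir])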
Can you give more details on how you got the point cloud result from the fisheye images? Is your raw data only the fisheye images, or the panoramic images stitched together from five fisheye images? So if I have only fisheye images, do I need to convert them to panoramic ones, and then what should I do next? If I do not know the intrinsic parameters of the camera, can I still use openMVG to reconstruct? Please give more details about openMVG, because I am eager to know more about it; it will help me a lot and my following research can be based on this library. @pmoulon
It's a very good result. Can I adjust the vertical angle of the output image?
Following the @kafeiyin00 and @cdcseacave discussion in this thread, I am starting a new issue here about SfM.
@kafeiyin00 is using a Ladybug5 camera that produces spherical panoramas.
In OpenMVG you could either use the 5 FishEye images or use rectilinear images sampled from the panorama.
Here is an introduction to rectilinear image sampling from a spherical image:
Task:
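As a rough illustration of what such a sampling does (this is only a sketch of the generic equirectangular-to-pinhole mapping, not the converter's actual implementation; the function and parameter names are made up): every output pixel defines a ray in a virtual pinhole camera, the ray is rotated toward the chosen viewing direction, converted to longitude/latitude and looked up in the panorama.

import math

def sample_pinhole_from_equirect(pano, out_w, out_h, focal_px, yaw_rad):
    # pano: equirectangular image as pano[row][col], covering 360 x 180 degrees.
    # focal_px: pinhole focal length in pixels; yaw_rad: rotation of the virtual
    # camera around the vertical axis (which of the N views is sampled).
    pano_h = len(pano)
    pano_w = len(pano[0])
    out = [[None] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Ray through this pixel in the virtual pinhole camera (z forward, y down).
            dx, dy, dz = x - out_w / 2.0, y - out_h / 2.0, float(focal_px)
            # Rotate the ray around the vertical axis by the yaw angle.
            rx = dx * math.cos(yaw_rad) + dz * math.sin(yaw_rad)
            rz = -dx * math.sin(yaw_rad) + dz * math.cos(yaw_rad)
            ry = dy
            # Ray -> longitude/latitude -> equirectangular pixel (nearest neighbour).
            lon = math.atan2(rx, rz)                  # in [-pi, pi]
            lat = math.atan2(ry, math.hypot(rx, rz))  # in [-pi/2, pi/2]
            u = int((lon / (2.0 * math.pi) + 0.5) * (pano_w - 1))
            v = int((lat / math.pi + 0.5) * (pano_h - 1))
            out[y][x] = pano[v][u]
    return out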