Closed: kozuch closed this issue 3 years ago.
Hi @kozuch,
Nice that you want to build your own 360 camera.
I'm not sure I understand your goal. Do you want to use OpenSfM for stitching the 4 images taken from the same place? Or do you want to take groups of 4 images from multiple places and build a 3D reconstruction?
Assuming the latter, you can run the reconstruction with all 4 × n images together. It will not use the additional information that the cameras were attached together, but it should work anyway.
It is important to have some overlap between the fields of view of the cameras so that the 4 points of view end up in a single reconstruction. Otherwise, you can turn the rig between shots to get the overlap.
Ideally, for synchronized cameras, you would want to add the additional constraint that the cameras are on a rig, so that their relative positions stay constant. That would help the accuracy of the reconstruction. However, this is not currently possible in OpenSfM.
For the case where the cameras are not synchronized, that constraint is hard to use because the relative position of the cameras does change. In that case, running without additional constraints is the simplest option.
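The overlap requirement above can be sanity-checked with a little arithmetic. A minimal sketch (my own illustration, not part of OpenSfM): for cameras evenly spaced in a horizontal ring, adjacent optical axes are 360/n degrees apart, so any horizontal field of view beyond that spacing is shared overlap at the seam.

```python
def seam_overlap_deg(hfov_deg, num_cameras):
    """Horizontal overlap in degrees between adjacent cameras in an
    evenly spaced ring covering 360 degrees.

    Adjacent optical axes are 360/num_cameras degrees apart, so any
    horizontal field of view beyond that spacing is shared overlap.
    """
    return hfov_deg - 360.0 / num_cameras

# 4 cameras spaced 90 degrees apart: a 120-degree lens leaves
# 30 degrees of overlap at each seam, while a 90-degree lens
# leaves none, so no features can be matched across cameras.
print(seam_overlap_deg(120, 4))  # 30.0
print(seam_overlap_deg(90, 4))   # 0.0
```

This is only a geometric lower bound; in practice you want generous overlap, since feature matching degrades toward the edges of wide-angle lenses.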
Dear @paulinus,
thanks for your reply.
> I'm not sure I understand your goal. Do you want to use OpenSfM for stitching the 4 images taken from the same place? Or do you want to take groups of 4 images from multiple places and build a 3D reconstruction?
Yes, you are right, I want to do the latter: use groups of images (each group taken from the same place so that together it covers 360 degrees) taken along the way, and use those for the reconstruction.
> Assuming the latter, you can run the reconstruction with all 4 × n images together. It will not use the additional information that the cameras were attached together, but it should work anyway.
> Ideally, for synchronized cameras, you would want to add the additional constraint that the cameras are on a rig, so that their relative positions stay constant. That would help the accuracy of the reconstruction. However, this is not currently possible in OpenSfM.
Yes, I feared this feature might not be implemented yet. Are there any plans for implementation, or can I submit a feature request? Although I have experience with stereo vision and point clouds, I am new to SfM. I can try to look at the code myself, but that would be a lengthy and tedious job for me. Or do you perhaps know of a software package that could handle this situation?
> For the case where the cameras are not synchronized, that constraint is hard to use because the relative position of the cameras does change. In that case, running without additional constraints is the simplest option.
Yes, this is exactly the problem, so it is better to use synced images.
Hi @kozuch
Did you manage to resolve the spherical image issue? I am also trying to use spherical images from an Applied Streetview 360 camera for SfM reconstruction. If you have achieved anything, please share your experience. Thanks!
@YatiAhmad I would also like to know this. It would seem that the use of 360 images is now supported in OpenSfM, according to @paulinus: here
@FarazKhalidZaki Reconstruction from 360 images works out of the box. You should put the images in a project and run OpenSfM normally. Make sure that the images are properly detected as 360 images: we use the GPano metadata to detect panoramas. To check that the panoramas have been detected, open the `camera_models.json` file and check that the camera `projection_type` is `equirectangular`. If it is not, you can override the detection using this procedure
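The detection check described above can also be scripted. A small sketch (the function name is my own; `camera_models.json` sits in the project root, and note that newer OpenSfM releases use `spherical` where older ones used `equirectangular`):

```python
import json

def non_panoramic_cameras(camera_models_path):
    """Return the camera keys in camera_models.json whose
    projection_type is not a panoramic model, so mis-detected
    cameras are easy to spot."""
    with open(camera_models_path) as f:
        cameras = json.load(f)
    # "equirectangular" in older OpenSfM releases, "spherical" in newer ones.
    panoramic = {"equirectangular", "spherical"}
    return sorted(key for key, cam in cameras.items()
                  if cam.get("projection_type") not in panoramic)

# Example: an empty list means every camera was detected as a panorama.
# print(non_panoramic_cameras("my_project/camera_models.json"))
```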
So it would seem that all you need to do is provide (or generate) the GPano metadata, which OpenSfM will automatically use.
I hope this is indeed true, and that this helps you also. I'll be trying this in coming weeks so will update you all.
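For reference, the override mechanism mentioned above works by placing a `camera_models_overrides.json` file in the project folder. A minimal sketch forcing the equirectangular model (the width and height values are placeholders for your own image size, and the special `"all"` key applies the override to every camera; check the OpenSfM docs for the exact format in your version):

```json
{
    "all": {
        "projection_type": "equirectangular",
        "width": 5760,
        "height": 2880
    }
}
```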
You can start by trying this ready-to-run example https://www.dropbox.com/sh/3vabbmrhqqbagp5/AABi14O2tWMbxAX91jaaQY77a?dl=0
Thanks @paulinus, the example worked fine!
@paulinus I have tried the ready-to-run example you mentioned, but the process stopped with the following error:
```
2020-08-25 07:37:17,010 INFO: Reconstruction 0: 10 images, 1716 points
2020-08-25 07:37:17,010 INFO: Reconstruction 1: 8 images, 1733 points
2020-08-25 07:37:17,010 INFO: Reconstruction 2: 5 images, 1025 points
2020-08-25 07:37:17,010 INFO: 3 partial reconstructions in total.
2020-08-25 07:37:19,407 DEBUG: Undistorting the reconstruction
2020-08-25 07:37:20,052 DEBUG: Undistorting image 20200825_135618.jpg
2020-08-25 07:37:20,065 DEBUG: Undistorting image 20200825_135630.jpg
2020-08-25 07:37:20,193 DEBUG: Undistorting image 20200825_135613.jpg
2020-08-25 07:37:20,194 DEBUG: Undistorting image 20200825_135703.jpg
2020-08-25 07:37:20,475 DEBUG: Undistorting image 20200825_135625.jpg
2020-08-25 07:37:20,479 DEBUG: Undistorting image 20200825_135635.jpg
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
    r = call_item()
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/process_executor.py", line 285, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.6/dist-packages/joblib/_parallel_backends.py", line 595, in __call__
    return self.func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 253, in __call__
    for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 253, in <listcomp>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/OpenSfM/bin/opensfm", line 34, in <module>
```
I have the same camera model as in the example dataset and took some photos myself. I tried with my own dataset and the same error happened. Can you help me?
Thank you very much!
@kozuch Rig support has been added here: https://github.com/mapillary/OpenSfM/commit/c18914b9dfb8f218c5ae8a82fe29a9e5481b849d

@YatiAhmad OpenSfM supports 360 images: https://www.opensfm.org/docs/geometry.html#camera-models
Hi there,
I am interested in (possibly dense) reconstruction from multiple images taken in various directions from the same place - for instance, one image to the front, one to each side (right/left), and one to the back (this would give a 90-degree horizontal angle between the 4 cameras), or even 8 cameras with a 45-degree horizontal angle between them. I plan to build such a camera myself. Looking at existing products, the Point Grey Ladybug camera is the closest I can find (though it has only 5 cameras in the horizontal ring). I see OpenSfM can work with equirectangular images (360 degrees in one image), but this is not my case (it could become my case if one stitches the images - I have no idea whether that would compromise the reconstruction quality of SfM, though).
Has anyone tried to reconstruct from such spherical input images? I think this should be possible, but maybe with some special workflow? Also, there will probably be 2 cases of this - one where the images are precisely synchronized and taken at exactly the same time, and the other where the images are spherical but not synced to a single timestamp (this may be the case with cheaper cameras that don't have an external trigger).
I would appreciate any info! Thanks in advance!