You could try Meshroom-2019.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe (CLI only).
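For example, a call for an equirectangular input could look something like this (paths and values here are only illustrative):
Meshroom-2019.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe -i C:\photos\pano.jpg -o C:\photos\split -m equirectangular --equirectangularNbSplits 8 --equirectangularSplitResolution 1200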
Thanks! I've done that now, but there is no help in the app on computing the necessary metadata to use with Meshroom.
What do you mean with "there is no help in the app on computing the necessary metadata to use with Meshroom"? Do you need help with the CLI or Meshroom?
I figured it out, thanks. The images split perfectly. When I tried Meshroom, the SfM step failed with images from just one panorama, but if I renamed the files from several panoramas so that sequential images had some parallax (to get an initial point cloud), then augmented the scan with the remaining photos, and set "Downscale" = 1 on the dense cloud step as noted in #409, it proceeded perfectly!
I know it can split the images, but does it deal with the cube-map pinhole cameras as a fixed rig?
@jeffreyianwilson I have tested this with the Datasets from here and it works.
@natowi I would leave this issue open, and have the new openMVG code imported to support panoramas natively.
Excellent. Processing hundreds if not thousands of panoramas into cube-map images is an unnecessary waste of storage.
Does Meshroom/AliceVision support camera rigs/fisheye lenses? I want to take the individual camera output from a 360° rig (8 × 200° cameras) and apply this rig per shot. The parallax offset is considerable and prevents close-range precision when using equirectangular (converted to cube-map) images.
Typically such a rig does not use fisheye lenses, but fixed focal-length lenses. If you were to calibrate this rig (and this is the missing documentation part), that would be better than the combined image: more image detail, more overlap per photo, and thus better depth. Then again, openMVG recently showed that calibrated stitched images are superior to unstitched, unrigged images with respect to matching them in SfM. So you may wonder whether a workflow of starting with pre-stitched images and then augmenting with the raw images gives faster results.
The Insta360 Pro 2 and Pro use 200° lenses. Like I said, close-proximity features and the camera offset from the nodal point prevent any sort of precision from baked equirectangular images.
I am looking at constructing a "calibration room" with enough features to calibrate each lens/sensor individually while also treating them as a whole, as part of a rig.
@jeffreyianwilson you might be interested in https://blog.elphel.com/category/calibration/
Hi @jeffreyianwilson,
Does Meshroom/AliceVision support camera rigs/fisheye lenses? I want to take the individual camera output from a 360° rig (8 × 200° cameras) and apply this rig per shot.
Yes, this is fully supported as explained here: https://github.com/alicevision/meshroom/wiki/Multi-Camera-Rig The calibration of the rig is fully automatic.
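A quick sketch of the expected input layout, as far as I understand the wiki (all folder and file names here are only an example):
rig/cam01/IMG_0001.jpg, rig/cam01/IMG_0002.jpg, ...
rig/cam02/IMG_0001.jpg, rig/cam02/IMG_0002.jpg, ...
...
rig/cam08/IMG_0001.jpg, rig/cam08/IMG_0002.jpg, ...
That is, one subfolder per physical camera, with corresponding shots matching up across the subfolders.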
Would you be open to share one of your datasets with me? I would be interested to do more tests on these setups. If yes, you could use the private mailing-list alicevision-team@googlegroups.com.
Thanks
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hello, I have a Samsung Gear 360 camera and I take a 30-megapixel equirectangular 360° picture every 10 meters to survey bicycle routes. Then I add geolocation to the pictures and share them on Mapillary, mostly to add map features in OpenStreetMap. I wonder whether there is support for 360° pictures, or if it is still something to be developed. I could share any pictures I have taken if that is helpful.
@CorentinLemaitre, Yes, it would be interesting to have access to a dataset made with the Samsung Gear 360.
There is no support for 360° images as input. We have support for a rig of synchronized cameras, but I don't know if you have access to the raw images on the Samsung Gear 360 (before stitching).
I have the 360° images before processing, because this camera (2016) doesn't do the stitching itself. After I have done the stitching I delete these files, so I have really few left on my computer. Here is an example of the picture I have before stitching, and the result after stitching (images attached).
I have a small dataset of closely located 360° equirectangular images (taken with a Gear 360, 2016 model). I previously used them with Cupix. I can provide one (in private) if it helps development.
Here are five images from my old rooftop to start with:
Unstitched (7776×3888, dual fisheye)
Stitched (7776×3888, equirectangular)
Thanks for the datasets.
@fabiencastan would you be interested in other vendors too?
@Baasje85 I think it would not hurt to have a few different datasets for testing.
@fabiencastan We could use a demo & testing dataset similar to https://github.com/alicevision/dataset_monstree. Maybe we can put something together based on user contributions for a few different camera models.
I'll be more than happy to help, I have an Insta360 ONE X.
Any notes or pointers on how you want a sample set? How many pictures? HDR on or off? Indoor or outdoor?
Here's my contribution... a 5-image interior dataset from an Insta360 ONE X.
I actually want to use Meshroom for interiors, so I have a lot more if it's helpful (an entire house). I could provide it privately via GitHub; just contact me.
I'm merging the shared datasets into one repository with a handful of images per dataset, all under the CC-BY-SA-4.0 license. If you are OK with it, leave a thumbs up on this post and I'll add your dataset. @EwoutH @Baasje85 @SM-26 @tscibilia
When it is well structured, I can move it to AliceVision. https://github.com/natowi/meshroom-360-datasets
@tscibilia beat me to the punch. But I saw that there is no info about the Insta360 ONE X in the camera DB.
Sensor: 1/2.3" (~6.16 × 4.62 mm) (source)
@SM-26 what is the make and model in the metadata?
how many pictures? HDR on or off? indoor or outdoor?
We don't need too many images (let's say images from ~6 different locations); these datasets are just for testing and demonstration. I think indoor/outdoor, with and without HDR, would be nice. If you are using a tripod, you could use the same positions for HDR on/off.
Camera brand: Arashi Vision
Camera model: Insta360 ONE X
I'm on it, good thing the weekend is here.
Sorry it took me such a long time. I've created a PR. I'd love to help as much as I can.
Just catching up; I saw the repo and @SM-26's pull request, so I did a PR of my own.
Are there any recommended settings or a workflow for dual-fisheye images? I am trying to use a Gear 360 outdoors. Both stitched and non-stitched (dual-fisheye) images are accessible on this camera. I tried to extract "regular" images with aliceVision_utils_split360Images.exe, but only a few images (4 of 340) were matched with the default Meshroom settings. I also saw the "fisheye" option in the camera settings in Meshroom; should I split the non-stitched images and try this option?
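For reference, the basic dual-fisheye split I tried would look something like this (paths here are made up):
aliceVision_utils_split360Images.exe -i C:\gear360\dual\pano_001.jpg -o C:\gear360\split -m dualfisheye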
FYI: I tried it on a RICOH THETA Z1 (dual-fisheye images). Meshroom runs.
I used the original script to split them. I also added a vignette to remove features at the edges of the fisheye circles (a slightly better result for camera pose estimation).
In my experiment, using the rig setting is not good for 360° images, because the PrepareDenseScene node fails. Just adding an EXIF camera serial number to each L/R image was enough.
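In case anyone wants to reproduce the EXIF trick, it should be possible with exiftool along these lines (the folder names are assumptions, and this presumes Meshroom picks up the standard EXIF serial number tag):
exiftool -SerialNumber=cam_left C:\theta\left
exiftool -SerialNumber=cam_right C:\theta\right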
@akirayou Have you tried the split360Images executable?
@natowi It would be good to add the corresponding node in meshroom: https://github.com/natowi/meshroom_external_plugins/blob/master/Split360Images.py Could you submit it as a PR?
Have you tried the split360Images executable?
I've not tried it yet, because I want to try with dual-fisheye images, and the THETA Z1's dual-fisheye image format is DNG [not supported]. And I want to merge the JPEG's EXIF data with the DNG's image data, so I have to write the script myself.
Using the equirectangular image (the THETA's JPEG output) and split360Images sounds like the easy way, but it seems to need more photos for the reconstruction.
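Side note: if exiftool handles your files, the metadata merge might not need a custom script; something like this copies all tags from the JPEG into the DNG (file names are assumptions, untested on THETA output):
exiftool -TagsFromFile R0010072.JPG -all:all R0010072.DNG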
DNG and dual-fisheye are supposed to be supported.
I cannot run it in my environment (JPG is OK).
Meshroom-2021.1.0 on Windows 10 20H2 (Japanese)
C:\Users\youak>C:\Meshroom-2021.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe -i C:\Users\youak\Desktop\meshroom_theta\DNG\R0010072.DNG -o a -m dualfisheye
Program called with the following parameters:
- dualFisheyeSplitPreset = "center" (default)
- equirectangularDemoMode = 0 (default)
- equirectangularNbSplits = 2 (default)
- equirectangularSplitResolution = 1200 (default)
- input = "C:\Users\youak\Desktop\meshroom_theta\DNG\R0010072.DNG"
- output = "a"
- splitMode = "dualfisheye"
- verboseLevel = "info" (default)
[00:08:34.793096][fatal] Can't write output image file 'C:\Users\youak\a/R0010072_0.DNG'.
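Judging from the log, the tool writes its outputs with the input's extension and apparently cannot encode DNG. A possible workaround, assuming you have a converter that can read your DNGs (e.g. OpenImageIO's oiiotool; untested with THETA DNGs), is to convert to TIFF first and split that:
oiiotool R0010072.DNG -o R0010072.tif
C:\Meshroom-2021.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe -i R0010072.tif -o a -m dualfisheye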
@fabiencastan I'm assuming that adding this node in the graph editor hasn't been released yet. Is that correct?
yes
Hi, I'm trying to decompose a THETA X 11K JPEG using aliceVision_utils_split360Images.exe, but it seems to generate images only along the horizon line. Are there any parameters that can be passed so it splits the top and bottom too?
For dual-fisheye input there is a top/bottom setting.
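Roughly like this, run once per half (the preset values "top"/"bottom" are my reading of the option name; check the tool's help output):
aliceVision_utils_split360Images.exe -i input.jpg -o out_top -m dualfisheye --dualFisheyeSplitPreset top
aliceVision_utils_split360Images.exe -i input.jpg -o out_bottom -m dualfisheye --dualFisheyeSplitPreset bottom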
Hi guys, I have zero coding experience. I want to split the 360° images into top/bottom/left/right views, not just along the horizon line.
I used this command: .\aliceVision_utils_split360Images.exe -i C:\Users\craig\Pictures\THETA\ — equirectangularNbSplits 32 -o C:\Users\craig\Pictures\mesh
from this link: https://medium.com/theta360-guide/splitting-360-images-into-2d-images-137fab5406da
What should I do? Something with simple coding, please (using an Insta360 ONE X).
@Hamed93g The "—" (an em dash where -- should be) and the surrounding spaces may cause issues. Try
.\aliceVision_utils_split360Images.exe -i "C:\Users\craig\Pictures\THETA" --equirectangularNbSplits 32 -o "C:\Users\craig\Pictures\mesh"
If this does not help, please open a new issue.
Since the release 2023.2, the Split360Images can be added directly into the graph after the CameraInit node: https://github.com/alicevision/Meshroom/pull/1939
This functions well, thank you so much for adding it. I am having trouble, though. I'm using bracketed exposures to make an HDR spherical pano from a Gear 360 camera. The resulting SfM data from Split360Images does not seem to work with the HDR pipeline when I plug it in: the SfM data all looks correct, but LdrToHdrSampling is mixing images from each 'rig'. Exposure blending is also not doing the right thing, even when I use the un-split original images, and I have not yet figured out why.
@kromond You can open a new issue for this
It would be great if there were support for the standard flat (equirectangular) projection images that 360° cameras give out.