alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

[Feature Request] Support for 360 Camera images #526

Closed: jjisnow closed this issue 3 years ago

jjisnow commented 5 years ago

It would be great if there was support for the standard flat 360 camera projection images given out by 360 cameras.

J

natowi commented 5 years ago

You could try Meshroom-2019.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe (CLI only).
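For reference, a typical invocation might look like the following; the flag names are taken from the tool's own parameter dump quoted later in this thread, the "equirectangular" mode name is inferred from the equirectangular* parameters shown there, and the paths and split count are placeholders:

Meshroom-2019.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe -i C:\path\to\panorama.jpg -o C:\path\to\output -m equirectangular --equirectangularNbSplits 8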

jjisnow commented 5 years ago

Thanks! I've done that now, but there is no help in the app on computing the necessary metadata to use with Meshroom.

natowi commented 5 years ago

What do you mean with

there is no help in the app on computing the necessary metadata to use with Meshroom

? Do you need help with the CLI or with Meshroom?

jjisnow commented 5 years ago

I figured it out, thanks. The images split perfectly. When I tried Meshroom, the SfM step failed with images from one panorama, but if I renamed the files from several panoramas so that sequential images have some parallax and produce an initial point cloud, then augmented the scan with the remaining photos, and then set "Downscale" = 1 for the dense cloud (as noted in #409), it proceeds perfectly!

jeffreyianwilson commented 5 years ago

I know it can split the images but does it deal with the cube map pinhole cameras as a fixed rig?

natowi commented 5 years ago

@jeffreyianwilson I have tested this with the datasets from here and it works.

Baasje85 commented 5 years ago

@natowi I would leave this issue open, and have the new openMVG code imported to support panoramas natively.

jeffreyianwilson commented 5 years ago

Excellent. Processing hundreds if not thousands of panoramas into cube map images is an unnecessary waste of storage.

jeffreyianwilson commented 5 years ago

Does Meshroom/AliceVision support camera rigs/fisheye lenses? I want to take the individual camera output from a 360 rig (8 x 200-degree cameras) and apply this rig per shot. The parallax offset is considerable and prevents close-range precision when using equirectangular (converted to cubemap) images.

Baasje85 commented 5 years ago

Typically such a rig does not use fisheye lenses but fixed focal-length lenses. If you were to calibrate this rig (and this is the missing documentation part), that would be better than the combined image: more image detail, more overlap per photo, and thus more depth. Then again, openMVG recently showed that calibrated stitched images are superior to unstitched, unrigged images with respect to matching them in SfM. So you may wonder whether a workflow that starts with pre-stitched images and then augments with the raw images gives faster results.

jeffreyianwilson commented 5 years ago

The Insta360 Pro 2 and Pro use 200-degree lenses. Like I said, close-proximity features and the camera offset from the nodal point prevent any sort of precision from baked equirectangular images.

jeffreyianwilson commented 5 years ago

I am looking at constructing a "calibration room" which would have enough features to treat each lens/sensor separately, but also as a whole as part of a rig.

Baasje85 commented 5 years ago

@jeffreyianwilson you might be interested in https://blog.elphel.com/category/calibration/

fabiencastan commented 5 years ago

Hi @jeffreyianwilson,

Does Meshroom/AliceVision support camera rigs/fisheye lenses? I want to take the individual camera output from a 360 rig (8 x 200-degree cameras) and apply this rig per shot.

Yes, this is fully supported as explained here: https://github.com/alicevision/meshroom/wiki/Multi-Camera-Rig The calibration of the rig is fully automatic.
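For reference, my understanding of that wiki page is that the rig is declared purely through the folder structure, with one sub-folder per physical camera and matching shot order across folders (the exact naming convention below is an assumption; check the wiki):

rig/
  0/  IMG_0001.jpg  IMG_0002.jpg  ...
  1/  IMG_0001.jpg  IMG_0002.jpg  ...
  ...

Meshroom can then detect the cameras as a rig when the images are imported.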

Would you be open to sharing one of your datasets with me? I would be interested in doing more tests on these setups. If so, you could use the private mailing list alicevision-team@googlegroups.com.

Thanks

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

CorentinLemaitre commented 4 years ago

Hello, I have a Samsung Gear 360 camera and I take a 30-megapixel equirectangular 360° picture every 10 meters to survey bicycle routes. Then I add geolocation to the pictures and share them on Mapillary, mostly to add map features in OpenStreetMap. I wonder if there is support for 360 pictures, or if it is still something to be developed. I could share any picture I have taken if it is helpful.

fabiencastan commented 4 years ago

@CorentinLemaitre, Yes, it would be interesting to have access to a dataset made with the Samsung Gear 360.

There is no support for 360° images as input. We have support for a rig of synchronized cameras, but I don't know if you have access to the raw images on the Samsung Gear 360 (before stitching).

CorentinLemaitre commented 4 years ago

I have the 360 images before processing because this camera (2016) doesn't do the stitching itself. After I have done the stitching process I delete these files, so I have really few left on my computer. Here is an example of a picture I have before stitching: 360_5204. And the result after stitching: 360_5204_Stitch_YHC.

EwoutH commented 4 years ago

I have a small dataset of closely located 360-degree equirectangular images (taken with a Gear 360 2016). I previously used them with Cupix. I can provide one (in private) if it helps development.

Here are five images from my old rooftop to start with:

Unstitched (7776x3888 dual fisheye)

Stitched (7776x3888 equirectangular)

fabiencastan commented 4 years ago

Thanks for the datasets.

Baasje85 commented 4 years ago

@fabiencastan would you be interested in other vendors too?

natowi commented 4 years ago

@Baasje85 I think it would not hurt to have a few different datasets for testing.

@fabiencastan We could use a demo&testing dataset similar to https://github.com/alicevision/dataset_monstree Maybe we can put something together based on user contributions for a few different camera models.

SM-26 commented 4 years ago

@Baasje85 I think it would not hurt to have a few different datasets for testing.

@fabiencastan We could use a demo&testing dataset similar to https://github.com/alicevision/dataset_monstree Maybe we can put something together based on user contributions for a few different camera models.

I'll be more than happy to help I have an Insta360 One X

Any notes or pointers on how you want a sample set? How many pictures? HDR on or off? Indoor or outdoor?

tscibilia commented 4 years ago

Here's my contribution... a 5-image interior dataset from an Insta360 One X.

I actually want to use Meshroom for interiors, so I have a lot more if it's helpful (an entire house). I could provide it privately through GitHub; just contact me.

natowi commented 4 years ago

I'm merging the shared datasets into one repository with a handful of images per dataset, all under the CC-BY-SA-4.0 license. If you are ok with it, leave a thumbs up on this post and I'll add your dataset. @EwoutH @Baasje85 @SM-26 @tscibilia

When it is well structured, I can move it to AliceVision. https://github.com/natowi/meshroom-360-datasets

SM-26 commented 4 years ago

tscibilia beat me to the punch. But I saw that there is no info about the Insta360 One X in the camera DB.

Sensor: 1/2.3" (~6.16 x 4.62 mm) (source)

natowi commented 4 years ago

@SM-26 what is the make and model in the metadata?

How many pictures? HDR on or off? Indoor or outdoor?

We don't need too many images (let's say images from ~6 different locations); these datasets are just for testing and demonstration. I think indoor/outdoor with and without HDR would be nice. If you are using a tripod, you could use the same positions for HDR on/off.

SM-26 commented 4 years ago

@SM-26 what is the make and model in the metadata?

How many pictures? HDR on or off? Indoor or outdoor?

We don't need too many images (let's say images from ~6 different locations); these datasets are just for testing and demonstration. I think indoor/outdoor with and without HDR would be nice. If you are using a tripod, you could use the same positions for HDR on/off.

camera brand: Arashi Vision
camera model: Insta360 ONE X

I'm on it, good thing the weekend is here.

SM-26 commented 4 years ago

@SM-26 what is the make and model in the metadata?

How many pictures? HDR on or off? Indoor or outdoor?

We don't need too many images (let's say images from ~6 different locations); these datasets are just for testing and demonstration. I think indoor/outdoor with and without HDR would be nice. If you are using a tripod, you could use the same positions for HDR on/off.

Sorry it took me such a long time. I've created a PR. I'd love to help as much as I can.

tscibilia commented 4 years ago

Just catching up. I saw the repo and @SM-26's pull request, so I did a PR of my own.

Aracon commented 3 years ago

Are there any recommended settings or workflow for double-fisheye images? I am trying to use a Gear 360 outdoors. Both stitched and non-stitched (double-fisheye) images are accessible on this cam. I tried to extract "regular" images with aliceVision_utils_split360Images.exe, but only a few images (4 of 340) were matched with default Meshroom settings. I also saw the "fisheye" option in the camera settings in Meshroom; should I split the non-stitched images and try this option?

akirayou commented 3 years ago

FYI: I tried it on a RICOH THETA Z1 (dual-fisheye images). Meshroom runs.

Here is the report

I used my own script to do the split. I also added a vignette to remove features at the edge of the fisheye circles (this gave slightly better camera pose estimation results).

In my experiment, using the rig setting is not good for 360-degree images, because the PrepareDenseScene node fails. Just adding an EXIF camera serial number to each L/R image was enough.
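For illustration, here is a minimal Python sketch of that vignetting idea (my reading of akirayou's approach, not their actual script; the circle centre, radius, and filenames are hypothetical and must be measured for your lens):

import numpy as np
from PIL import Image

def vignette_fisheye(src_path, dst_path, cx, cy, radius, feather=50):
    # Darken everything outside the fisheye image circle so the feature
    # extractor finds nothing on the circle's rim.
    img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.float32)
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    # 1.0 inside the circle, fading to 0.0 over `feather` pixels at the rim
    mask = np.clip((radius - dist) / feather, 0.0, 1.0)
    out = (img * mask[..., None]).astype(np.uint8)
    Image.fromarray(out).save(dst_path)

# e.g. for one half of a dual-fisheye frame (values are guesses):
# vignette_fisheye("left.jpg", "left_vig.jpg", cx=1944, cy=1944, radius=1900)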

fabiencastan commented 3 years ago

@akirayou Have you tried the split360Images executable?

@natowi It would be good to add the corresponding node in Meshroom: https://github.com/natowi/meshroom_external_plugins/blob/master/Split360Images.py Could you submit it as a PR?

akirayou commented 3 years ago

Have you tried the split360Images executable?

I've not tried it yet, because I want to try with the dual-fisheye images, and the THETA Z1's dual-fisheye image format is DNG [not supported]. I also want to merge the JPEG's EXIF data with the DNG's image data, so I have to write the script myself.

Using the equirectangular image (the THETA's JPEG output) and split360Images sounds like the easy way, but it seems to need more photos for reconstruction.
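If the DNG's pixel data has already been exported to a writable format such as TIFF, a tool like exiftool can copy the JPEG's tags across; note this is my suggestion, not something akirayou mentioned, and the filenames are placeholders:

exiftool -TagsFromFile R0010072.jpg -exif:all R0010072.tif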

fabiencastan commented 3 years ago

DNG and dual-fisheye are supposed to be supported.

akirayou commented 3 years ago

I cannot run it in my environment (JPG is OK):
Meshroom-2021.1.0 on Win10 20H2 (Japanese)

C:\Users\youak>C:\Meshroom-2021.1.0\aliceVision\bin\aliceVision_utils_split360Images.exe -i C:\Users\youak\Desktop\meshroom_theta\DNG\R0010072.DNG -o a -m dualfisheye

Program called with the following parameters:

  • dualFisheyeSplitPreset = "center" (default)
  • equirectangularDemoMode = 0 (default)
  • equirectangularNbSplits = 2 (default)
  • equirectangularSplitResolution = 1200 (default)
  • input = "C:\Users\youak\Desktop\meshroom_theta\DNG\R0010072.DNG"
  • output = "a"
  • splitMode = "dualfisheye"
  • verboseLevel = "info" (default)

[00:08:34.793096][fatal] Can't write output image file 'C:\Users\youak\a/R0010072_0.DNG'.
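Judging from the message, the failure is on the write side: the tool reuses the input extension ('R0010072_0.DNG') for its output, and DNG is generally not a writable format. A possible workaround (an assumption, untested) is to first convert the DNG to a writable format, e.g. with OpenImageIO's oiiotool if available, and split that instead:

oiiotool R0010072.DNG -o R0010072.tif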

calbear47 commented 3 years ago

@fabiencastan I'm assuming adding this node in the graph editor hasn't been released yet. Is that correct?

fabiencastan commented 3 years ago

yes

dpredie commented 2 years ago

Hi, I'm trying to decompose a Theta X 11K JPEG using aliceVision_utils_split360Images.exe, but it seems to only generate images along the horizon line. Are there any parameters that can be passed so it splits the top and bottom too?

natowi commented 2 years ago

For dual-fisheye input there is a top/bottom setting (the dualFisheyeSplitPreset parameter shown in the output above).
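That is, something along these lines; the dualFisheyeSplitPreset parameter appears with its "center" default in the CLI dump earlier in this thread, but the other preset value names ("top"/"bottom") are an assumption:

aliceVision_utils_split360Images.exe -i input.jpg -o out -m dualfisheye --dualFisheyeSplitPreset top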

Hamed93g commented 1 year ago

Hi guys, I have zero coding experience. I want to split the 360 images into top/bottom/left/right, not just along the horizon line.

I used this command: .\aliceVision_utils_split360Images.exe -i C:\Users\craig\Pictures\THETA\ — equirectangularNbSplits 32 -o C:\Users\craig\Pictures\mesh

from this link: https://medium.com/theta360-guide/splitting-360-images-into-2d-images-137fab5406da

What should I do, in simple steps? (I'm using an Insta360 One X.)

natowi commented 1 year ago

@Hamed93g The "— equirectangularNbSplits" part uses an em-dash and a space instead of "--", which may cause issues. Try

.\aliceVision_utils_split360Images.exe -i "C:\Users\craig\Pictures\THETA" --equirectangularNbSplits 32 -o "C:\Users\craig\Pictures\mesh"

If this does not help, please open a new issue.

fabiencastan commented 8 months ago

Since release 2023.2, the Split360Images node can be added directly into the graph after the CameraInit node: https://github.com/alicevision/Meshroom/pull/1939

kromond commented 8 months ago

Since release 2023.2, the Split360Images node can be added directly into the graph after the CameraInit node: #1939

This functions well, thank you so much for adding it. I am having trouble, though: I'm using bracketed exposures to make an HDR spherical pano from a Gear 360 camera, and the resulting SfM data from Split360Images does not seem to work with the HDR pipeline when I plug it in. The SfM data all looks correct, but LdrToHdrSampling is mixing images from each 'rig'. Exposure blending is also not doing the right thing even when I use the un-split original images, and I have not yet figured out why.

natowi commented 8 months ago

@kromond You can open a new issue for this