alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

[question] Using VR180 images from Calf VR camera #2255

Open atkulp opened 10 months ago

atkulp commented 10 months ago

Describe the problem
I'm trying to use 180° fisheye images taken by the Calf VR VR180 camera. I've split the side-by-side images in half to make things easier and set the intrinsics to what I think makes sense. I fear the camera's odd geometry may be the issue, since it's not a round fisheye (harder to tell at the top, but it's an odd arc).
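The "split the images in half" step above can be sketched as follows (a minimal illustration using numpy slicing; the `split_sbs` helper name is hypothetical, and file I/O is omitted):

```python
import numpy as np

def split_sbs(frame):
    """Split a side-by-side stereo frame (H x 2W x C) into two H x W x C views."""
    h, w = frame.shape[:2]
    assert w % 2 == 0, "side-by-side frame width must be even"
    return frame[:, : w // 2], frame[:, w // 2 :]

# Toy example: a 4x8 "frame" whose left half is black and right half is white.
frame = np.hstack([np.zeros((4, 4, 3)), np.ones((4, 4, 3))])
left, right = split_sbs(frame)
print(left.shape, right.shape)  # (4, 4, 3) (4, 4, 3)
```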

The camera uses twin 1/2.3-inch Sony IMX577 12.3 MP sensors with fisheye lenses that have an actual 185° field of view and a 34 mm focal length (if I'm understanding the specs correctly).
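A quick sanity check on that 34 mm figure: on a 1/2.3" sensor it is almost certainly a 35 mm-equivalent value rather than the physical focal length, which matters when filling in Meshroom's intrinsics. This sketch assumes a typical 1/2.3" sensor diagonal of about 7.7 mm (an assumption, not a published IMX577 spec):

```python
# Convert a 35 mm-equivalent focal length to an approximate physical one.
FULL_FRAME_DIAG_MM = 43.27   # diagonal of a 36x24 mm full-frame sensor
SENSOR_DIAG_MM = 7.7         # typical 1/2.3" sensor diagonal (assumption)

crop_factor = FULL_FRAME_DIAG_MM / SENSOR_DIAG_MM
equivalent_focal_mm = 34.0
physical_focal_mm = equivalent_focal_mm / crop_factor

print(f"crop factor ~{crop_factor:.1f}")          # ~5.6
print(f"physical focal ~{physical_focal_mm:.1f} mm")  # ~6.1 mm
```

So the physical focal length is on the order of 6 mm, which is far more plausible for a 185° fisheye than 34 mm.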

Screenshots
(screenshot attached)

Dataset
(cut) V1000227-left

(original) V1000227-sm

Log
Log from the Meshing node where it gives up:

[2023-11-20 23:12:22.799986] [0x000042cc] [trace]   Embedded OCIO configuration file: 'C:\Users\arian\tools\Meshroom-2023.1.0\aliceVision/share/aliceVision/config.ocio' found.
Program called with the following parameters:
 * addLandmarksToTheDensePointCloud = 0
 * angleFactor = 15
 * colorizeOutput = 0
 * contributeMarginFactor = 2
 * densifyNbBack = 0 (default)
 * densifyNbFront = 0 (default)
 * densifyScale = 1 (default)
 * depthMapsFolder = "F:/mesh/Meshroom/valley_trail/MeshroomCache/DepthMapFilter/45e1683dcdca4e548644cf2b1d702f8006fa25e5"
 * estimateSpaceFromSfM = 1
 * estimateSpaceMinObservationAngle = 10
 * estimateSpaceMinObservations = 3
 * exportDebugTetrahedralization = 0
 * fullWeight = 1
 * helperPointsGridSize = 10
 * input = "F:/mesh/Meshroom/valley_trail/MeshroomCache/StructureFromMotion/ffde86c3282242ba7749e1e46291a8bd71b03461/sfm.abc"
 * invertTetrahedronBasedOnNeighborsNbIterations = 10
 * maskBorderSize = 1 (default)
 * maskHelperPointsWeight = 0 (default)
 * maxCoresAvailable =  Unknown Type "unsigned int" (default)
 * maxInputPoints = 50000000
 * maxMemoryAvailable = 18446744073709551615 (default)
 * maxNbConnectedHelperPoints = 50
 * maxPoints = 5000000
 * maxPointsPerVoxel = 1000000
 * minAngleThreshold = 1
 * minSolidAngleRatio = 0.2
 * minStep = 2
 * minVis = 2 (default)
 * nPixelSizeBehind = 4
 * nbSolidAngleFilteringIterations = 2
 * output = "F:/mesh/Meshroom/valley_trail/MeshroomCache/Meshing/2ae239f9ce2a5fafc99d90c9615300df68b17709/densePointCloud.abc"
 * outputMesh = "F:/mesh/Meshroom/valley_trail/MeshroomCache/Meshing/2ae239f9ce2a5fafc99d90c9615300df68b17709/mesh.obj"
 * partitioning =  Unknown Type "enum EPartitioningMode"
 * pixSizeMarginFinalCoef = 4
 * pixSizeMarginInitCoef = 2
 * refineFuse = 1
 * repartition =  Unknown Type "enum ERepartitionMode"
 * saveRawDensePointCloud = 0
 * seed =  Unknown Type "unsigned int"
 * simFactor = 15
 * simGaussianSize = 10
 * simGaussianSizeInit = 10
 * universePercentile = 0.999 (default)
 * verboseLevel = "info"
 * voteFilteringForWeaklySupportedSurfaces = 1
 * voteMarginFactor = 4

Hardware : 
    Detected core count : 16
    OpenMP will use 16 cores
    Detected available memory : 17340 Mo

[23:12:22.809481][info] Found 1 image dimension(s): 
[23:12:22.809481][info]     - [3840x3840]
[23:12:22.821536][info] Overall maximum dimension: [1920x1920]
[23:12:22.821536][warning] repartitionMode: 1
[23:12:22.821536][warning] partitioningMode: 1
[23:12:22.821536][info] Meshing mode: multi-resolution, partitioning: single block.
[23:12:22.821536][info] Estimate space from SfM.
[23:12:22.822535][info] Estimate space done.
[23:12:22.822535][info] bounding Box : length: 0.0573023, width: 0.0163111, height: 0.0417687
[23:12:22.825505][info] Creating dense point cloud.
[23:12:22.825505][info] fuseFromDepthMaps, maxVertices: 5000000
[23:12:22.884073][info] simFactor: 15
[23:12:22.884073][info] nbPixels: 44236800
[23:12:22.884073][info] maxVertices: 5000000
[23:12:22.884073][info] step: 2
[23:12:22.884073][info] realMaxVertices: 11059200
[23:12:22.884073][info] minVis: 2
[23:12:22.884073][info] Load depth maps and add points.
[23:12:23.556393][info] Filter initial 3D points by pixel size to remove duplicates.
[23:12:23.556393][info] Build nanoflann KdTree index.
[23:12:24.938301][info] KdTree created for 11059200 points.
[23:12:24.949820][info] Filtering done.
[23:12:25.002900][info] 11059200 invalid points removed.
[23:12:25.022926][info] 3D points loaded and filtered to 0 points.
[23:12:25.022926][info] Init visibilities to compute angle scores
[23:12:25.022926][info] NANOFLANN: KdTree created.
[23:12:25.022926][info] Create visibilities (0/3)
[23:12:25.023929][info] Create visibilities (1/3)
[23:12:25.023929][info] Create visibilities (2/3)
[23:12:25.717221][info] Visibilities created.
[23:12:25.717221][info] Compute max angle per point
[23:12:25.717221][info] angleFactor: 15
[23:12:25.717221][info] 0 points filtered based on the number of observations (minVis). 
[23:12:25.717221][info] 0 invalid points removed.
[23:12:25.717221][info] Filter by angle score and sim score
[23:12:25.717221][info] Build nanoflann KdTree index.
[23:12:25.717221][info] KdTree created for 0 points.
[23:12:25.717221][info] Filtering done.
[23:12:25.717221][info] 0 invalid points removed.
[23:12:25.717221][info] The number of points is below the max number of vertices.
[23:12:25.717221][info] 3D points loaded and filtered to 0 points (maxVertices is 5000000).
[23:12:25.717221][info] Create final visibilities
[23:12:25.717221][info] NANOFLANN: KdTree created.
[23:12:25.717221][info] Create visibilities (0/3)
[23:12:25.717221][info] Create visibilities (2/3)
[23:12:25.717221][info] Create visibilities (1/3)
[23:12:26.378268][info] Visibilities created.
[23:12:26.378268][fatal] Depth map fusion gives an empty result.

Additional context
I'm trying to reconstruct from a collection of images taken with a VR180 camera. I'm using just one half of each frame to avoid confusion from the side-by-side layout, but it only fits a few shots, and the result is a mess. Maybe this just isn't enough images to work with, but I suspect the problem is the tops and bottoms of the frames. Is there a way to apply a mask? Do I need to pre-process the images to stretch them onto a pure sphere? Can I use the distortion parameters somehow?
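On the masking question: one common preprocessing approach is to build a binary mask that keeps only the fisheye image circle and blanks the corners. A minimal sketch (the `fisheye_mask` helper is hypothetical; whether and how your Meshroom version consumes mask images depends on the release, so treat this purely as an illustration of generating the mask):

```python
import numpy as np

def fisheye_mask(height, width, radius_frac=0.98):
    """Return a uint8 mask: 255 inside the centered image circle, 0 outside."""
    yy, xx = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r = min(height, width) / 2.0 * radius_frac
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return (inside * 255).astype(np.uint8)

# 3840x3840 matches the log's image dimensions; 1920 used here for speed.
mask = fisheye_mask(1920, 1920)
print(mask[960, 960], mask[0, 0])  # 255 0
```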

(two screenshots attached)

fabiencastan commented 10 months ago

If I understand correctly, your source images are pre-stitched equirectangular images. If true, with the next release, you will be able to use your source images directly and add the Split360Images node after the CameraInit node.

atkulp commented 10 months ago

That would be awesome. To be clear, the side-by-side source is a stereo pair of 180° fisheyes, so when I split it and reproject it to fisheye, it's still only 180°. Will that still work?

Also, will it work with fisheye or just equirectangular? I'm not clear why it doesn't work now.
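For the fisheye-to-equirectangular question: the reprojection itself is just a per-pixel remap. A hedged sketch of computing the remap coordinates, assuming an equidistant fisheye model with the image circle filling a square frame (the Calf VR's actual projection may differ, and the function name is hypothetical):

```python
import numpy as np

def fisheye_to_equirect_maps(size, fov_deg=185.0):
    """Source-pixel coordinates for remapping an equidistant fisheye
    to a 180x180-degree equirectangular crop (usable with cv2.remap)."""
    lon = np.linspace(-np.pi / 2, np.pi / 2, size)   # longitude per column
    lat = np.linspace(np.pi / 2, -np.pi / 2, size)   # latitude per row
    lon, lat = np.meshgrid(lon, lat)
    # Unit ray for each equirect pixel; z is the optical axis.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))         # angle off the axis
    psi = np.arctan2(y, x)                           # roll angle in the image
    r = theta / np.radians(fov_deg / 2) * (size / 2) # equidistant: r = f*theta
    map_x = size / 2 + r * np.cos(psi)
    map_y = size / 2 - r * np.sin(psi)
    return map_x, map_y

mx, my = fisheye_to_equirect_maps(512)
print(mx.shape)  # the equirect center should land near the fisheye center
```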
