skinkie opened this issue 4 years ago
Hypothesis: I assume that when the exact camera poses are known a priori, any future reconstruction should become easier. So I am looking for help in the following direction.
Yes, it is possible to use known camera positions; however, this feature is experimental and does not yet provide the same quality as the default graph for externally calculated camera positions. If you have a fixed rig, you can reuse the camera views and poses for future reconstructions. See the links below.
I am satisfied with the resolved camera positions shown in the 3D Viewer. From previous issues here on GitHub, I read that outputViewsAndPoses is the key ingredient to extract the data I am asking for.
Yes. Take a look at this discussion, which includes useful information: https://github.com/alicevision/meshroom/issues/829. If you would like to experiment with camera positions resolved by Meshroom, see https://github.com/alicevision/meshroom/wiki/Using-known-camera-positions. You need the latest Meshroom build (snapshot or built from source) to get many of the "fromKnownPoses" options. You could use a script like the one in #829 to reuse the known SfM file for a new capture with new image file names.
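The path-rewriting step mentioned above could look like the following sketch. It assumes the usual AliceVision cameras.sfm JSON layout (a top-level "views" list whose entries carry a "path" attribute); the function name and the exact schema are illustrative, not taken from the linked issue, and the new images must match the original capture file names one-to-one.

```python
import json
import os

def retarget_sfm_paths(sfm_in, sfm_out, new_image_dir):
    """Point every view in a cameras.sfm file at a new image directory.

    Assumes the AliceVision JSON layout with a top-level "views" list
    whose entries carry a "path" attribute (schema may differ between
    versions -- check your own cameras.sfm first).
    """
    with open(sfm_in) as f:
        sfm = json.load(f)
    for view in sfm.get("views", []):
        name = os.path.basename(view["path"])  # keep the original file name
        view["path"] = os.path.join(new_image_dir, name)
    with open(sfm_out, "w") as f:
        json.dump(sfm, f, indent=4)

# Example: retarget_sfm_paths("cameras.sfm", "cameras_new.sfm", "/data/new_capture")
```

The rewritten file can then be fed back into the graph in place of the original SfM reference.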
If an Exif tag were enhanced with these priors, they would be available. I assume a new node could be made that writes such information into the tags.
Why? In Meshroom the information is stored in the SFM file.
Is there any node that currently accepts priors? Is my hypothesis correct that, when poses are provided, matchFromKnownCameraPoses could assist FeatureMatching?
Yes: FeatureMatching, SfM, and FromKnownPoses (triangulation only). matchFromKnownCameraPoses in the FeatureMatching node is not yet available in the 2019 release.
Is all of this needed? Could known poses within an image folder already drive PrepareDenseScene?
You still need the feature extraction and matching nodes and SfM. Camera views and poses alone are not enough for DepthMap.
> Yes. Take a look at this discussion, which includes useful information: #829. If you would like to experiment with camera positions resolved by Meshroom, see https://github.com/alicevision/meshroom/wiki/Using-known-camera-positions. You need the latest Meshroom build (snapshot or built from source) to get many of the "fromKnownPoses" options.
You are awesome for having documented this!!
> If an Exif tag were enhanced with these priors, they would be available. I assume a new node could be made that writes such information into the tags.

> Why? In Meshroom the information is stored in the SFM file.
Because that would allow a generic approach to exchanging information, as opposed to an application-specific file format. I would agree that exporting the Exif information to an SFM file would be fine too. But I am targeting a portable approach; hence CameraInit should produce an SFM file that contains these properties.
> Is all of this needed? Could known poses within an image folder already drive PrepareDenseScene?

> You still need the feature extraction and matching nodes and SfM. Camera views and poses alone are not enough for DepthMap.
Could you elaborate on this a bit more? Why aren't they enough? What do the landmarks add here?
> Because that would allow a generic approach to exchanging information, as opposed to an application-specific file format. I would agree that exporting the Exif information to an SFM file would be fine too. But I am targeting a portable approach; hence CameraInit should produce an SFM file that contains these properties.
Simply store the SFM file with views and poses alongside your images. Then add a few lines of code to the CameraInit node to allow an SFM reference from a previous reconstruction as input...
> You still need the feature extraction and matching nodes and SfM. Camera views and poses alone are not enough for DepthMap.
Sorry, this is required for SfM + Meshing (that is why the "use known poses" options exist). PrepareDenseScene + DepthMap can technically use the generated cameras.sfm file: adjust the image paths and names for a new dataset shot with the same rig, use ConvertSfMFormat to convert to abc, then connect it to PrepareDenseScene and it will compute... But Meshing will fail when using "Estimate space from SfM". So DepthMap would work, but you still need the other data in Meshing. Think of DraftMeshing: Meshing cannot mesh from camera views and poses only; the sparse point cloud is required.
> Because that would allow a generic approach to exchanging information, as opposed to an application-specific file format. I would agree that exporting the Exif information to an SFM file would be fine too. But I am targeting a portable approach; hence CameraInit should produce an SFM file that contains these properties.

> Simply store the SFM file with views and poses alongside your images.
Two arguments:
> Think of DraftMeshing: Meshing cannot mesh from camera views and poses only; the sparse point cloud is required.
This is something I do not directly understand either. If the software can create a depth map, wouldn't that mean it creates a point cloud in the process, especially since the mesh is not yet known at this step?
Having lat, lon, yaw, roll, and pitch available would allow GPS priors to be implemented (at least partially). Of course, it would require that the poses are not locked but rather relaxed.
Yes, that could work.
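As an illustration of what such GPS priors involve: before lat/lon/alt values can constrain camera poses, they have to be converted into a Cartesian frame. The sketch below shows the standard WGS84 geodetic-to-ECEF conversion; it is not taken from Meshroom's code (which handles GPS data internally), just the textbook formula.

```python
import math

# WGS84 ellipsoid constants (standard published values)
WGS84_A = 6378137.0                    # semi-major axis in meters
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert GPS latitude/longitude/altitude to ECEF coordinates (meters)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # prime vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z
```

In practice one would further convert ECEF to a local tangent frame (e.g. ENU) around the first camera so the scene coordinates stay numerically small.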
> DepthMap, wouldn't that mean it would create a point cloud in that process
You can retrieve the depth information from the generated depth map. A depth map contains only depth (Z) information. https://alicevision.org/#photogrammetry/depth_maps_estimation https://alicevision.org/#photogrammetry/meshing You can see the dense point cloud derived from the depth maps by enabling the "Save...PointCloud" option in Meshing.
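To make the depth-map-to-point-cloud relation concrete, here is a minimal sketch of the standard pinhole back-projection that turns per-pixel depth (Z) values into camera-space 3D points. The intrinsics and the "depth <= 0 means invalid" convention are assumptions for illustration, not Meshroom's exact EXR conventions.

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (Z per pixel) into a camera-space point cloud.

    Assumes a simple pinhole model with intrinsics fx, fy, cx, cy; invalid
    pixels are assumed to be encoded as depth <= 0 and are skipped.
    """
    h, w = depth.shape
    # pixel coordinate grids: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (N, 3) array of 3D points
```

This is exactly what Meshing does internally before fusing the per-view clouds, which is why the dense point cloud only appears at that stage.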
As this is all experimental, some nodes may not work as expected or may depend on other nodes even though that should not be necessary. So I would recommend doing some testing with known datasets.
Please reopen.
Hi, how do you import camera positions? From which file and file format?
Thanks!