Open dadaaichifan opened 2 months ago
Do you mean that you want a 3D volume grid/mesh?
Yes, just like the ns-export command in nerfstudio: export to a point cloud or a Poisson mesh. My future work needs to manipulate models in a 3D scene.
I am sorry that we do not provide the related code. I think the easiest solution is to render depth maps from different viewpoints and fuse them into a point cloud. Alternatively, you can sample points and compute their volume density and instance predictions.
You can check the `run` function in this file; the depth map is one of its outputs.
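To fuse the rendered depth maps into a point cloud, each pixel can be back-projected into world space using the camera intrinsics and pose. A minimal sketch (not part of this codebase; `depth_to_world_points` is a hypothetical helper, and it assumes z-depth with a standard pinhole model and camera-to-world poses):

```python
import numpy as np

def depth_to_world_points(depth, K, c2w):
    """Back-project a depth map to world-space 3D points.

    depth: (H, W) per-pixel z-depth (if the renderer outputs distance
           along the ray instead, convert it first)
    K:     (3, 3) pinhole intrinsics
    c2w:   (4, 4) camera-to-world pose
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Unproject pixels to camera space: X_cam = z * K^-1 @ [u, v, 1]^T
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    # Lift to homogeneous coordinates and transform to world space
    cam_h = np.concatenate([cam, np.ones((cam.shape[0], 1))], axis=1)
    return (c2w @ cam_h.T).T[:, :3]
```

Running this over several views and concatenating the results gives a merged point cloud, provided all poses are expressed in the same world frame. Note that some conventions flip the camera's y/z axes (e.g. OpenGL vs OpenCV), which must match the renderer's convention.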
I think the current codebase supports saving RGB and depth, but you will need to implement the following steps yourself.
I tried exporting single-view RGB and depth in Nerfstudio to generate a point cloud. I ran into two problems: first, using COLMAP's intrinsic and extrinsic parameters directly did not perfectly align the point clouds from multiple viewpoints; second, the point cloud from a single view was deformed, possibly due to inaccurate depth estimation.
For the camera parameters, it would be better to use the same parameters that were used for NeRF training. To facilitate training, the original camera parameters may have been normalized or translated.
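This is a common cause of misaligned fused point clouds: many NeRF pipelines rescale and recenter the COLMAP poses into a unit-ish box before training, so points exported in the normalized frame no longer line up with the raw COLMAP frame. A hedged sketch of undoing such a transform (the helper name and the exact normalization convention are assumptions; check the dataloader of the actual codebase for the real one):

```python
import numpy as np

def denormalize_points(points, scale, translation):
    """Map points from a normalized NeRF frame back to the COLMAP frame.

    Assumes the training pipeline applied: p_norm = scale * (p + translation).
    Other codebases use p_norm = scale * p + translation or a full similarity
    transform (rotation included), so verify against the dataloader.
    """
    return points / scale - translation
```

Applying the same (inverse) transform consistently to every view's points, or simply fusing everything in the normalized frame, should make the multi-view point clouds align.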
For your second question, you should first visualize your depth map to check whether it is correct. If it is not, the reconstruction is probably bad. If I remember correctly, Nerfstudio provides a depth (or disparity) visualization in its UI. Then, check that you are using the correct depth values: the output might be a disparity map, or the depth values might be scaled.
May I ask: can your work export the segmented part of the model?