nerfstudio-project / nerfstudio

A collaboration friendly studio for NeRFs
https://docs.nerf.studio
Apache License 2.0

Is there a way to remove the background in a mesh? #2908

Closed hanjoonwon closed 5 months ago

hanjoonwon commented 5 months ago

Currently the mesh output has a dirty background, but I want only the object. How can I remove the messy background?

maturk commented 5 months ago

@hanjoonwon unfortunately there is no way to do this without manually editing the mesh in some software. It is possible in code, however, before the mesh is generated. I saw that the SuGaR paper considers two different meshes that are later combined into a single one: a foreground mesh and a background mesh. The foreground mesh is simply a mesh bounded by the camera poses, and the background mesh is everything outside those bounds. This distinction can be made, but I don't think I will be making a PR for it anytime soon, due to time constraints :P

maturk commented 5 months ago

I suggest you look into some popular open-source mesh tools, like MeshLab or Blender, to manually edit your meshes. It is actually pretty easy these days; you can manually move vertices and delete things.

maturk commented 5 months ago

@hanjoonwon have you looked into making a bounding box for your scene and then generating the mesh? There are ways to crop the scene beforehand using these flags: https://github.com/nerfstudio-project/nerfstudio/blob/57fbc07bf1efecdb767259148fe5705ccf12af3f/nerfstudio/scripts/exporter.py#L284
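For example, something like this (flag names as of the nerfstudio version around this issue; run `ns-export poisson --help` to confirm them for your install, and the config path here is just illustrative):

```bash
# Crop away everything outside a box around the object before meshing.
# Coordinates are in nerfstudio's normalized space (roughly [-1, 1]).
ns-export poisson \
  --load-config outputs/my-scene/nerfacto/2024-01-01_000000/config.yml \
  --output-dir exports/mesh \
  --bounding-box-min -0.5 -0.5 -0.5 \
  --bounding-box-max 0.5 0.5 0.5
```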

hanjoonwon commented 5 months ago

> I suggest you look into some popular open-source mesh tools, like MeshLab or Blender, to manually edit your meshes. …

Thanks :) I know editing tools like MeshLab, but removing the background that way is quite tedious. Is it just a matter of looking at the scene and adjusting the viewer to find the bounding box I want?

maturk commented 5 months ago

> Thanks :) I know editing tools like MeshLab, but removing the background that way is quite tedious. Is it just a matter of looking at the scene and adjusting the viewer to find the bounding box I want?

Yes, you can do some trial and error. If you are using a NeRF, the bounding box will automatically be between -1 and 1, so you can start cropping it down to a smaller bounding box that targets only the correct area. Good luck!

hanjoonwon commented 5 months ago

> I saw that the SuGaR paper considers two different meshes that are later combined into a single one: a foreground mesh and a background mesh. … This distinction can be made, but I don't think I will be making a PR for it anytime soon …

I'm sorry to bother you, but if you don't mind, could you point me to where that part of the code is in SuGaR? It seems like it would be easier to get just the object if the foreground and background mesh generation were separated.

maturk commented 5 months ago

> could you point me to where that part of the code is in SuGaR?

from this line downwards: https://github.com/Anttwo/SuGaR/blob/60fc76f9cfdc652e643e9cfa48252a88f3726ea5/sugar_extractors/coarse_mesh.py#L342

They distinguish between foreground and background based on the camera centers. Later they simply merge the two meshes together, but maybe you can skip that step and keep only fg_mesh. You can simulate this behaviour with the cropping bbox, by the way.
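If it helps, the gist of that foreground/background split looks something like this (a minimal sketch of the idea with open3d, not SuGaR's actual code; the mesh and camera-center files are hypothetical and assumed to be in the same coordinate frame):

```python
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mesh.ply")    # exported mesh (hypothetical path)
cam_centers = np.load("camera_centers.npy")     # (N, 3) camera centers, same frame

# Foreground = everything inside the axis-aligned box spanned by the camera
# centers; cropping simply discards the background instead of meshing it.
bbox = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=cam_centers.min(axis=0),
    max_bound=cam_centers.max(axis=0),
)
fg_mesh = mesh.crop(bbox)
o3d.io.write_triangle_mesh("fg_mesh.ply", fg_mesh)
```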

hanjoonwon commented 5 months ago

> From this line downwards: https://github.com/Anttwo/SuGaR/blob/60fc76f9cfdc652e643e9cfa48252a88f3726ea5/sugar_extractors/coarse_mesh.py#L342 …

Thanks for the kind answer.

As an additional question: is it possible to get just the object's mesh automatically, like image segmentation, without having to adjust bounding boxes by trial and error?

maturk commented 5 months ago

@hanjoonwon probably, yes. Anything seems to be possible these days with deep learning/AI, but this is not implemented in nerfstudio. Masking with known masks should be straightforward, though.
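For the known-masks route: nerfstudio's data format accepts an optional mask_path per frame in transforms.json, and masked-out pixels are ignored during training. A rough sketch of wiring that up (the masks/ directory and naming scheme are assumptions; check the docs for your version's exact mask convention, typically white = keep, black = ignore):

```python
import json
from pathlib import Path

# Hypothetical helper: attach per-frame binary masks to an existing
# transforms.json, assuming masks/<image_stem>.png already exist
# (e.g. produced by an off-the-shelf segmentation model).
meta = json.loads(Path("transforms.json").read_text())
for frame in meta["frames"]:
    stem = Path(frame["file_path"]).stem
    frame["mask_path"] = str(Path("masks") / f"{stem}.png")
Path("transforms.json").write_text(json.dumps(meta, indent=2))
```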

Lizhinwafu commented 5 months ago

Hi, I have another question.

Why is the point cloud generated by nerfstudio not the same size as the original target? How can I restore the point cloud to its original size?

hanjoonwon commented 5 months ago

> Why is the point cloud generated by nerfstudio not the same size as the original target? How can I restore the point cloud to its original size?

Tag @maturk so he sees the question.

maturk commented 5 months ago

> Why is the point cloud generated by nerfstudio not the same size as the original target? How can I restore the point cloud to its original size?

@Lizhinwafu, the default behaviour of nerfacto and the nerfstudio dataparser is to squeeze all of your camera poses into a [-1, 1] box (a cube of side 2). This is because the Instant-NGP hashgrid expects normalized coordinates, and also because of the scene contraction in nerfacto. When you generate the point cloud, the result is still in this contracted space. To undo it, you need to reverse the transformation that the nerfstudio dataparser applied. The transform and scale needed to do this are stored in the dataparser_transforms.json file, in the same directory as your config.yml. Please check out issue https://github.com/nerfstudio-project/nerfstudio/issues/1606 for more details, including the math required to rescale your point cloud back to the original scale.
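In code, undoing it looks roughly like this (a sketch assuming the dataparser applied a rigid transform followed by a uniform scale, which is what dataparser_transforms.json stores; the paths are illustrative):

```python
import json
import numpy as np
import open3d as o3d

# Saved next to config.yml by nerfstudio.
meta = json.load(open("outputs/my-scene/nerfacto/run/dataparser_transforms.json"))
T = np.array(meta["transform"])   # 3x4 [R | t] applied by the dataparser
scale = meta["scale"]             # uniform scale applied after the transform

pcd = o3d.io.read_point_cloud("exports/pcd/point_cloud.ply")
pts = np.asarray(pcd.points)

# Forward: p_ns = scale * (R @ p + t)  =>  inverse: p = R.T @ (p_ns / scale - t)
R, t = T[:, :3], T[:, 3]
pcd.points = o3d.utility.Vector3dVector((pts / scale - t) @ R)
o3d.io.write_point_cloud("point_cloud_original_space.ply", pcd)
```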

hanjoonwon commented 5 months ago

> The transform and scale needed to do this are stored in the dataparser_transforms.json file … check out issue #1606 for more details …

#2924 I'm sorry to bother you, but I was wondering if you could give me some advice when you have time? I saw the issue and worked on rescaling to the original size, but the result is still too small compared to the actual object.

Lizhinwafu commented 4 months ago

> I saw the issue and worked on rescaling to the original size, but the result is still too small compared to the actual object.

I think the point cloud exported by nerfstudio must get its original size from an external reference object of known dimensions.
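For example, by placing a ruler or another object of known length in the scene and scaling by the ratio of its true length to its measured length in the cloud; a sketch (the picked endpoints here are hypothetical):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("point_cloud_original_space.ply")

# Endpoints of a reference of known size (e.g. a 0.30 m ruler), picked from
# the cloud in a viewer; these coordinates are made up for illustration.
p_a = np.array([0.12, 0.05, 0.40])
p_b = np.array([0.12, 0.05, 0.12])
known_length = 0.30  # metres

s = known_length / np.linalg.norm(p_b - p_a)
pcd.scale(s, center=(0.0, 0.0, 0.0))  # uniform rescale about the origin
o3d.io.write_point_cloud("point_cloud_metric.ply", pcd)
```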

hanjoonwon commented 4 months ago

> I think the point cloud exported by nerfstudio must get its original size from an external reference object of known dimensions.

Thanks for the answer. Can I ask how you did it?