yanivw12 / gs2mesh

[ECCV 2024] Official implementation of the paper "GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views"

I don't know where segment-anything-2 was applied #6

Closed anshanYCL closed 3 months ago

anshanYCL commented 3 months ago

Have a nice day! I don't know where segment-anything-2 was applied. Even without installing segment-anything-2, I can still obtain a mesh result from custom data. Doesn't custom data require segment-anything-2?

anshanYCL commented 3 months ago

Additionally, I noticed that not all input images are registered by COLMAP. For example, when I input 100 images of a Lego car, only 4 images are registered by COLMAP. What is the reason for this?

yanivw12 commented 3 months ago

> Have a nice day! I don't know where segment-anything-2 was applied. Even without installing segment-anything-2, I can still obtain a mesh result from custom data. Doesn't custom data require segment-anything-2?

SAM2 is applied only in the interactive notebook (custom_data.ipynb), as it requires a prompt: a click on the object in the first frame. You don't have to use SAM2 for custom data; it should work without it as well (usually the TSDF_max_depth_baselines parameter truncates the depth image and removes most of the background). I might add the option to segment an object using a text prompt with SAM2 in the future.
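To illustrate what the truncation does, here is a small sketch (function and values are illustrative, not the repository's actual code): depth values beyond a cutoff of `max_depth_baselines` stereo baselines are zeroed out, which drops most far-away background before TSDF fusion.

```python
# Illustrative sketch only: how a max-depth cutoff (in units of stereo
# baselines) can mask out background pixels in a depth map.

def truncate_depth(depth_map, baseline, max_depth_baselines):
    """Zero out depth values beyond max_depth_baselines * baseline."""
    cutoff = max_depth_baselines * baseline
    return [[d if d <= cutoff else 0.0 for d in row] for row in depth_map]

depth = [[0.5, 1.2, 8.0],
         [0.7, 9.5, 1.1]]

# With baseline=0.1 and 20 baselines, the cutoff is 2.0, so the far
# background values (8.0, 9.5) are removed.
truncated = truncate_depth(depth, baseline=0.1, max_depth_baselines=20)
print(truncated)  # [[0.5, 1.2, 0.0], [0.7, 0.0, 1.1]]
```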

> Additionally, I noticed that not all input images are registered by COLMAP. For example, when I input 100 images of a Lego car, only 4 images are registered by COLMAP. What is the reason for this?

Can you give more details? Was it a custom dataset or the MobileBrick dataset? Are you using the latest version of the repository? What was the resolution of the images?

anshanYCL commented 3 months ago

> Have a nice day! I don't know where segment-anything-2 was applied. Even without installing segment-anything-2, I can still obtain a mesh result from custom data. Doesn't custom data require segment-anything-2?

> SAM2 is applied only in the interactive notebook (custom_data.ipynb), as it requires a prompt: a click on the object in the first frame. You don't have to use SAM2 for custom data; it should work without it as well (usually the TSDF_max_depth_baselines parameter truncates the depth image and removes most of the background). I might add the option to segment an object using a text prompt with SAM2 in the future.

> Additionally, I noticed that not all input images are registered by COLMAP. For example, when I input 100 images of a Lego car, only 4 images are registered by COLMAP. What is the reason for this?

> Can you give more details? Was it a custom dataset or the MobileBrick dataset? Are you using the latest version of the repository? What was the resolution of the images?

Thank you for your reply. I used the classic NeRF Lego car data as a custom dataset, with a total of 100 images. With other custom data, I took 195 pictures of a chair, and COLMAP registered 190 of them, with some parts missing, although this does not affect usage. The resolution of the Lego data is 800×800.

The generated result file is particularly large. Will you consider some form of model compression in the future?

Also, if I am not satisfied with the reconstruction results, which parameters can I tune? Do you have any suggestions?

anshanYCL commented 3 months ago

> Have a nice day! I don't know where segment-anything-2 was applied. Even without installing segment-anything-2, I can still obtain a mesh result from custom data. Doesn't custom data require segment-anything-2?

> SAM2 is applied only in the interactive notebook (custom_data.ipynb), as it requires a prompt: a click on the object in the first frame. You don't have to use SAM2 for custom data; it should work without it as well (usually the TSDF_max_depth_baselines parameter truncates the depth image and removes most of the background). I might add the option to segment an object using a text prompt with SAM2 in the future.

> Additionally, I noticed that not all input images are registered by COLMAP. For example, when I input 100 images of a Lego car, only 4 images are registered by COLMAP. What is the reason for this?

> Can you give more details? Was it a custom dataset or the MobileBrick dataset? Are you using the latest version of the repository? What was the resolution of the images?

Sorry, I encountered another issue while running custom_data.ipynb. I am unable to access huggingface.co, but I have downloaded sam2_hiera_large.pt locally. Can you tell me where sam2_hiera_large.pt should be placed so that SAM2 runs successfully?

yanivw12 commented 3 months ago

> Thank you for your reply. I used the classic NeRF Lego car data as a custom dataset, with a total of 100 images. With other custom data, I took 195 pictures of a chair, and COLMAP registered 190 of them, with some parts missing, although this does not affect usage. The resolution of the Lego data is 800×800.

Usually, when COLMAP registers only a few images, it is because it fails to find correspondences in the mapper stage.

The relevant line of code is line 227 in colmap_utils.py:

```python
os.system(f"colmap mapper --database_path {database_dir} --image_path {images_raw_dir} --output_path {sparse_dir} --Mapper.num_threads 16 --Mapper.init_min_tri_angle 4 --Mapper.multiple_models 0 --Mapper.extract_colors 0")
```

I found that playing around with the parameters at the end can sometimes help.
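For experimenting, a small wrapper like the following (a hypothetical helper, not part of the repository) makes the mapper options easy to vary when debugging registration failures. The flag names are real COLMAP options; the wrapper itself is illustrative.

```python
# Hypothetical helper for experimenting with COLMAP mapper options.
# Mirrors the command on line 227 of colmap_utils.py.

def mapper_command(database_dir, images_raw_dir, sparse_dir, **overrides):
    """Build the colmap mapper command with overridable Mapper options."""
    options = {
        "Mapper.num_threads": 16,
        "Mapper.init_min_tri_angle": 4,
        "Mapper.multiple_models": 0,
        "Mapper.extract_colors": 0,
    }
    options.update(overrides)
    flags = " ".join(f"--{key} {value}" for key, value in options.items())
    return (f"colmap mapper --database_path {database_dir} "
            f"--image_path {images_raw_dir} --output_path {sparse_dir} {flags}")

# Example: relax the initial triangulation angle when too few images register.
cmd = mapper_command("db", "images", "sparse",
                     **{"Mapper.init_min_tri_angle": 2})
print(cmd)
```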

> The generated result file is particularly large. Will you consider some form of model compression in the future?

It's likely because the background noise from the GS process is being converted to mesh. Segmenting the object you want to reconstruct using SAM2 in custom_data.ipynb should solve it, or adjusting TSDF_max_depth_baselines to a smaller number can sometimes help. Additionally, you can lower the mesh resolution by increasing TSDF_voxel.

I will add an automatic background removal option as well soon.
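As a rough back-of-the-envelope on the voxel-size trade-off (purely illustrative numbers, not the repository's defaults): the voxel count, and hence memory and mesh size, scales with the cube of the per-axis resolution, so doubling the voxel size cuts the count by roughly 8x.

```python
# Illustrative only: how the TSDF voxel size trades mesh resolution
# against memory and file size.

def voxel_count(volume_side_m, voxel_size_m):
    """Number of voxels in a cubic TSDF volume of the given side length."""
    per_axis = round(volume_side_m / voxel_size_m)
    return per_axis ** 3

fine = voxel_count(2.0, 0.004)    # 500 voxels per axis
coarse = voxel_count(2.0, 0.008)  # 250 voxels per axis
print(fine // coarse)  # doubling the voxel size cuts the count 8x
```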

> Also, if I am not satisfied with the reconstruction results, which parameters can I tune? Do you have any suggestions?

There are some suggestions and tips in the main page of the repository under "Common Issues and Tips". If you still have issues afterwards, please send an example.

> Sorry, I encountered another issue while running custom_data.ipynb. I am unable to access huggingface.co, but I have downloaded sam2_hiera_large.pt locally. Can you tell me where sam2_hiera_large.pt should be placed so that SAM2 runs successfully?

I pushed a new update now with instructions on how to use local weights instead of huggingface weights. I added a new parameter in custom_data.ipynb and a few lines in masker_utils.py and the sam2 source code to make it work, so you need to pull the new version of the code.
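The fallback logic can be sketched roughly like this (an illustrative sketch only, not the repository's actual code; the checkpoint path and hub id are assumptions): prefer a local checkpoint file when it exists, otherwise fall back to downloading from the hub.

```python
import os

# Illustrative sketch of a local-weights fallback for SAM2: use a local
# checkpoint when the Hugging Face hub is unreachable. The default path
# and hub id below are assumptions, not the repository's actual values.

def resolve_checkpoint(local_path, hub_id="facebook/sam2-hiera-large"):
    """Return ("local", path) if the file exists, else ("hub", hub_id)."""
    if local_path and os.path.isfile(local_path):
        return ("local", local_path)
    return ("hub", hub_id)

source, ckpt = resolve_checkpoint("checkpoints/sam2_hiera_large.pt")
print(source, ckpt)
```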

anshanYCL commented 3 months ago

Thanks, I understand. Good job!

yanivw12 commented 2 months ago

> Have a nice day! I don't know where segment-anything-2 was applied. Even without installing segment-anything-2, I can still obtain a mesh result from custom data. Doesn't custom data require segment-anything-2?

Update: I just pushed an update with support for automatic masking with GroundingDINO, so you can run the script without the notebook and segment specific objects/remove background. See the main page README for more details.