Lizhinwafu opened this issue 10 months ago
As for exporting the GARField-based labels, this isn't currently supported.
Can you elaborate on what you mean by "instance labels"? (i.e., do you mean cluster labels, or the grouping features?)
Either way, it's definitely possible. Currently, it should be possible to export the gaussians visible in the viewport if you check the "Export Options" box. This code borrows from the gaussian export function in the original nerfstudio's export_utils. The label data can be added to the pointcloud metadata.
The corresponding function is linked here:
Thank you for your reply. I want to export the point cloud of each part after grouping (point clouds with different labels are exported separately).
It should be possible to combine this gaussian export code with `cluster_labels`, which should assign every gaussian a cluster ID, and `state_stack`. Unfortunately, I don't currently have the bandwidth to code this up in the near future, but I'd be happy to review any PRs or contributions!
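As a rough illustration of the per-cluster export idea (the function name and the minimal PLY layout here are simplified stand-ins, not GARField's actual export code; a real export would come from the gaussian means and `cluster_labels`):

```python
# Sketch only: split points into per-cluster ASCII PLY files, assuming we
# already have per-gaussian positions and a cluster ID for each gaussian.
from collections import defaultdict

def export_clusters(points, cluster_labels, prefix="cluster"):
    """Write one ASCII PLY per cluster ID; return the written filenames."""
    groups = defaultdict(list)
    for pt, cid in zip(points, cluster_labels):
        groups[cid].append(pt)
    paths = []
    for cid in sorted(groups):
        pts = groups[cid]
        path = f"{prefix}_{cid}.ply"
        with open(path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write(f"element vertex {len(pts)}\n")
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write("end_header\n")
            for x, y, z in pts:
                f.write(f"{x} {y} {z}\n")
        paths.append(path)
    return paths

# Toy example: 5 points split across 2 clusters.
paths = export_clusters(
    [(0, 0, 0), (0, 0, 1), (5, 5, 5), (5, 5, 6), (5, 6, 5)],
    [0, 0, 1, 1, 1],
)
```

This writes one file per cluster (e.g. `cluster_0.ply`, `cluster_1.ply`), which matches the "export each part separately" use case above.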
This result seems comparable to the ones in the README -- I don't know what the scene RGB looks like, but it seems to be a bunch of flowers on a branch/twig. Given this, it seems like GARField successfully clusters the individual flowers/parts together, as well as the larger structures (table, wall, ...).
I'm not sure what you mean by "how I can adjust it to get as good results as the demo" -- is your question:
1) Why aren't the flowers and the branches clustered together?
In this case, check if SAM-based masks are able to group the branches and the flowers together, at all. It's possible that the twigs are too thin to be grouped together. Since GARField distills 2D groups into 3D, if the masks fail to generate a desired group, it won't emerge in 3D.
Also, the scale seems small (0.0) here?
Also, SAM might return more masks with different parameters (e.g., increasing `crop_n_layers` or `points_per_side`).
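For reference, both knobs live on segment-anything's `SamAutomaticMaskGenerator`; the values below are illustrative only, not tuned (this fragment also assumes you have a downloaded ViT-H checkpoint):

```python
# Illustrative configuration of SAM's automatic ("segment everything")
# mask generator with a denser point grid and an extra crop level, which
# can surface more/smaller masks at the cost of runtime.
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(
    model=sam,
    points_per_side=64,  # default is 32; denser grid of point prompts
    crop_n_layers=1,     # default is 0; also run SAM on zoomed-in crops
)
# masks = mask_generator.generate(image)  # image: HxWx3 uint8 RGB array
```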
2) Why is the ground / left-side of the wall muddled/patchy?
There probably isn't sufficient group supervision there, especially if only a few cameras face this part of the scene.
It does seem to separate the individual branches a bit (at least based on the PCA visualization you provided), but I'm with you in that I'm also not sure how GARField would perform on these plants.
I do think SAM is the bottleneck here, after running SAM's segment-everything mode on your RGB image (the web version). Across multiple views, individual branches will probably be grouped independently at some point (like the bottommost branch in the attached image). However, there's no guarantee all the branches will be grouped independently like this. Segment-everything uses point queries, and the thin structures are adversarial for that.
It seems that GARField relies heavily on the segmentation results of SAM. Is it possible to use box prompts with GARField?
It's not possible. For a group to exist in 3D, it must be generated in 2D.
Also, to clarify, GARField's selection/clustering isn't generating groups using prompts -- it uses 2D masks to supervise the 3D grouping features, which then can be filtered/grouped using their affinities. The "clicking" demo is a simple thresholding of the affinity, not a SAM-like point prompt fed into a decoder.
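A toy sketch of that thresholding idea (cosine similarity is used here as a stand-in affinity; the actual affinity computation in GARField may differ):

```python
import math

# Given per-gaussian grouping features, keep every gaussian whose feature
# is similar enough to the clicked gaussian's feature. This is plain
# thresholding -- no SAM-style decoder or prompt is involved.
def select_by_affinity(features, click_idx, threshold=0.9):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    q = features[click_idx]
    qn = math.sqrt(dot(q, q))
    picked = []
    for i, f in enumerate(features):
        sim = dot(q, f) / (qn * math.sqrt(dot(f, f)) + 1e-8)
        if sim >= threshold:
            picked.append(i)
    return picked

# Toy features: the first two gaussians belong to the same group.
feats = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
selected = select_by_affinity(feats, click_idx=0)
```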
FYI, if you have another segmentation model that can generate these desired instance labels in 2D, you can add it to `img_group_model.py`.
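As a sketch of what such a plug-in might look like (the class and method names below are hypothetical; check the actual interface in `img_group_model.py` before adapting this):

```python
# Hypothetical wrapper: GARField consumes per-image binary masks, so a
# custom 2D instance-segmentation model only needs to map an RGB image to
# a list of HxW masks. Names here are made up for illustration.
class CustomGroupModel:
    def __init__(self, model=None):
        self.model = model  # your trained 2D instance-segmentation model

    def get_masks(self, image):
        """image: HxW grid of RGB values; returns a list of HxW 0/1 masks."""
        if self.model is not None:
            return self.model(image)
        # Fallback stub so the sketch runs: one mask covering everything.
        h, w = len(image), len(image[0])
        return [[[1] * w for _ in range(h)]]

# Toy 3x4 image; a real model would return one mask per instance.
masks = CustomGroupModel().get_masks([[(0, 0, 0)] * 4 for _ in range(3)])
```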
Yeah. I do want to train a 2D instance segmentation model myself to obtain the mask, but I'm not familiar with how to do it.
Hi, how do I call the function `_export_visible_gaussians` from the command line?
This functionality isn't supported on the command line, but you should see an "Export Visible Gaussians" button once you check the "Export Options" checkbox (for `garfield-gauss`).
Yeah, I can see the "Export Options" checkbox (for garfield-gauss). But when I click "Export Visible Gaussians", nothing happens. Is the button supposed to do anything when clicked?
It should work (I believe it writes a ply file to the current directory).
If the export code doesn't work, or there are issues with the code, please feel free to open up a PR!
I have always had a question. Current segmentation methods for NeRF or 3DGS first find masks in 2D, then map those masks to 3D to segment the 3D target.
This introduces occlusion issues from the 2D images, rather than segmenting directly in 3D space as with point clouds.
Many models now combine 2D and 3D. I would like to ask: why not perform segmentation directly on the generated 3D GS model? What's the difficulty?
The generated 3D GS model retains both spatial information and high image-level fidelity. I think segmenting directly in 3D space would truly show its advantages over point clouds.
Hi, have you successfully exported point clouds with semantic labels?
I've been testing other models recently, and haven't revisited this model.
I am also trying to export point clouds with semantic labels; maybe we can discuss this if possible.
Any success with this so far? It would be great to export the semantic labels/IDs to the .ply file for each respective "semantic object" -- e.g., an additional semantic attribute for the complete splat, declared in the .ply header (not just what is visible in the current viewport), so that every XYZ point is assigned a label based on its segmentation.
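A minimal sketch of such a labeled .ply (positions plus one integer label per vertex; a real export would also carry the gaussians' other attributes, and the property name here is just an example):

```python
# Write an ASCII PLY where every vertex carries an integer label, stored
# as an extra per-vertex "property" declared in the header.
def write_labeled_ply(path, points, labels):
    assert len(points) == len(labels)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property int label\n")  # extra semantic/instance attribute
        f.write("end_header\n")
        for (x, y, z), lab in zip(points, labels):
            f.write(f"{x} {y} {z} {lab}\n")

# Toy example: two points, two labels.
write_labeled_ply("labeled.ply", [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)], [0, 1])
```

Most PLY readers will preserve unknown per-vertex properties, so the label survives a round-trip through standard tooling.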
Does anyone have success with this? @chungmin99
Is it possible to export point clouds with semantic labels or instance labels after instance segmentation?