Thank you for your great work. I have several questions regarding your project:

Is there any way to use nerfstudio to render the 2D masks corresponding to the training images?

In the result shown in the uploaded video, I asked the model to segment the "conference table" in the "room" LLFF dataset, but the segmentation results for some views are poor. What might be causing this?

https://github.com/Jumpat/SegmentAnythingin3D/assets/75596632/dcd68c8f-9e95-4518-b63d-3c4bb8a10fbf

We have not tried saving the rendered masks in nerfstudio, but I think it can be done with `ns-render --rendered-output-names=mask_scores` (refer to this link).

In our sa3d-nerfstudio, we use hash grids as the mask grids. Therefore, you can try increasing `log2_hashmap_size` or decreasing `mask_threshold` here.
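For the mask-rendering suggestion in the answer, a minimal sketch of the command might look like the following. The `dataset` subcommand, the `--split` flag, and the config path are assumptions based on recent nerfstudio versions; verify them against `ns-render --help` for your install:

```shell
# Sketch: save the mask_scores output for every training view.
# The config path below is a placeholder from a hypothetical training run;
# confirm the subcommand and flag names with `ns-render --help`.
ns-render dataset \
  --load-config outputs/room/sa3d/2024-01-01_000000/config.yml \
  --rendered-output-names mask_scores \
  --split train \
  --output-path renders/masks
```

This writes one rendered `mask_scores` image per training view under `renders/masks`, which can then be thresholded into binary masks.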
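For the tuning suggestion about the mask hash grid, a hedged sketch of how those two knobs might be passed at training time is below. The dotted field names are hypothetical; nerfstudio exposes model fields through its tyro-generated CLI, so the actual names should be confirmed with `ns-train sa3d --help` in sa3d-nerfstudio:

```shell
# Sketch: retrain with a larger mask hash grid and a lower mask threshold.
# Both dotted flag names are assumptions -- check `ns-train sa3d --help`
# for the real field paths exposed by the sa3d method config.
ns-train sa3d \
  --data data/nerf_llff_data/room \
  --pipeline.model.log2-hashmap-size 21 \
  --pipeline.model.mask-threshold 0.3
```

A larger `log2_hashmap_size` reduces hash collisions in the mask grid (sharper mask boundaries at the cost of memory), while a lower `mask_threshold` makes the per-view segmentation less conservative.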