SWWdz opened 9 months ago
Hi @SWWdz
The DTU scenes only observe the objects from one side and have a background that is almost all white, which results in ambiguous surroundings. We use object masks provided by the dataset to remove these background regions following prior work. After removing the background, you will see the targeted object.
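The masking step can be sketched in a few lines. This is a minimal illustration, assuming the image and its object mask are already loaded as NumPy arrays with matching height and width (the DTU releases used by prior work ship per-view masks, but the file layout varies, so loading is left out); `apply_mask` and `bg_value` are illustrative names, not part of any dataset API.

```python
import numpy as np

def apply_mask(image, mask, bg_value=0):
    """Set pixels outside the binary object mask to bg_value."""
    out = image.copy()
    out[mask == 0] = bg_value
    return out

# Toy example: an all-white 2x2 image with a diagonal object mask
img = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)
print(apply_mask(img, mask)[0, 1])  # background pixel -> [0 0 0]
```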
Thanks for your reply! Can you tell me how to remove these background regions with object masks? Or can you give me some instructions?
Hi @SWWdz
You can refer to the functionality provided by prior work here
OK! Thank you!
Hi @SWWdz I'm sorry to bother you, but how did you solve this problem in the end?
+1
I just figured it out:
1) (optional) I adjusted the bounding sphere to something a little bit smaller, like 0.95: https://github.com/NVlabs/neuralangelo/blob/94390b64683c067c620d9e075224ccfe582647d0/projects/neuralangelo/utils/mesh.py#L137
2) For the cleanup I used this script: https://gist.github.com/Totoro97/43664cfc28110a469d88a158af040014#file-clean_mesh-py
Change

```python
P = cameras['world_mat_{}'.format(i)]
```

to

```python
P = cameras['world_mat_{}'.format(i)] @ cameras['scale_mat_{}'.format(i)]
```

and change

```python
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (101, 101))
```

to

```python
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (25, 25))
```
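The core idea behind the cleanup script is to project every mesh vertex into each view with `P` and keep only vertices that land inside the (dilated) object mask. Here is a hedged sketch of that culling test, assuming a 4x4 projection matrix and a binary mask per view; `visible_in_mask` is an illustrative name, not the script's actual function, and the dilation step is omitted.

```python
import numpy as np

def visible_in_mask(vertices, P, mask):
    """Return a boolean array: True where a vertex projects inside the mask."""
    n = len(vertices)
    pts = np.concatenate([vertices, np.ones((n, 1))], axis=1)  # (n, 4) homogeneous
    proj = (P[:3] @ pts.T).T                                   # (n, 3) image-space
    uv = proj[:, :2] / proj[:, 2:3]                            # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(n, dtype=bool)
    keep[inside] = mask[v[inside], u[inside]] > 0
    return keep

# Toy example: identity projection, 4x4 mask with one "on" pixel
P = np.eye(4)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 2] = 1
verts = np.array([[2.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
print(visible_in_mask(verts, P, mask))  # first vertex hits the mask pixel
```

In the real script the mask is first dilated with the structuring element above, which is why shrinking the kernel from 101 to 25 tightens the cleanup around the object.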
After step 1: (screenshot) After step 2: (screenshot)
Never mind the bad mesh quality; it was a quick training run with a small hashgrid configuration. Hope this helps the next person stumbling upon this post :)
I ran the code on dtu_scan24, but I get a strange mesh out: the object is enclosed in a spherical shell. Can you give me some advice? Thank you!