Hi, firstly thanks for the great work!

I ran a few scenes from the DTU and NeRF datasets and they look great. Now I'm trying to run NeuS2 on custom data, but the quality is low.
I think I've made sure the object is inside the AABB, and the scale is set to 1. (The following image is from ~1k iterations.)
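For anyone who wants to reproduce the check: one way to verify that the cameras converge inside the AABB is to least-squares-intersect the optical axes and test whether the resulting point falls in the box. This is only a sketch, assuming instant-ngp-style `transform_matrix` entries (camera-to-world, OpenGL convention with the camera looking along its -Z axis); the synthetic frames at the bottom are just for illustration, and NeuS2's actual conventions may differ:

```python
import numpy as np

def camera_rays(frames):
    """Camera origin and viewing direction for each frame, assuming
    instant-ngp-style 4x4 camera-to-world `transform_matrix` entries
    (OpenGL convention: the camera looks along its -Z axis)."""
    mats = np.asarray([f["transform_matrix"] for f in frames], dtype=float)
    origins = mats[:, :3, 3]
    dirs = -mats[:, :3, 2]
    return origins, dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

def lookat_point(origins, dirs):
    """Least-squares point closest to every optical axis -- where the
    cameras 'converge'. Solves sum_i (I - d_i d_i^T)(p - o_i) = 0."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

def c2w_lookat(pos, target=(0.0, 0.0, 0.0), up=(0.0, 0.0, 1.0)):
    """Build a synthetic camera-to-world matrix for testing the check."""
    pos, target, up = map(np.asarray, (pos, target, up))
    fwd = target - pos
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    m = np.eye(4)
    m[:3, 0], m[:3, 1] = right, np.cross(right, fwd)
    m[:3, 2], m[:3, 3] = -fwd, pos
    return m

# Four synthetic cameras on a ring of radius 2, all aimed at the origin.
frames = [{"transform_matrix": c2w_lookat((2 * np.cos(t), 2 * np.sin(t), 0.0))}
          for t in np.linspace(0.0, 2 * np.pi, 4, endpoint=False)]
p = lookat_point(*camera_rays(frames))
in_box = bool(np.all(np.abs(p) <= 1.0))  # e.g. an AABB of [-1, 1]^3
```

If the recovered point lands outside the box (or far from where the object should be), the scale/offset of the transforms is the first thing to revisit.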
The data doesn't come with foreground masks, so I tried (1) generating foreground masks myself and (2) the neuspp branch, and I have a few questions.
For the first attempt, I used `dtu.json`, ran for 15k iterations, and got results like this (mesh, predicted frame, and GT frame 0, respectively).
I played around with the `ek_loss` weight as well as other settings in the config file, but didn't see much improvement.
Trying to figure out why the predicted image is so blurred, I took a look at the loss chart and found it somewhat strange: there's an obvious dip between 500 and 1k iterations for both the RGB and ek losses. (Correct me if I'm wrong, but I don't think the mask loss is being used, since `mask_loss_weight` is 0 in `dtu.json`.) So I ran again for only 1k iterations and got a somewhat better mesh, but the predicted frame is still pretty blurry. My first question is: why are the predicted frames so blurry, and why do more iterations make things worse?
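For concreteness, the kind of knobs I was changing look roughly like this (I'm writing the key names from memory of `dtu.json` and the value for `ek_loss_weight` is just an example, so treat this as a sketch rather than the exact file):

```json
{
  "ek_loss_weight": 0.1,
  "mask_loss_weight": 0.0
}
```

With `mask_loss_weight` at 0.0 the mask term should be a no-op, which is why I assume only the RGB and ek losses matter in this run.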
For the neuspp branch attempt, I could get good results on dtu-scan24 using `womask.json`, so I moved on to the same custom data. I assumed that the input images should still have 4 channels, with the alpha channel set to all 255, and got the following results (images from 15k iterations: predicted frame, loss chart, and mesh, respectively). The predicted frame is still blurry, and the mesh doesn't seem to be meaningful.
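For reference, the way I produced those 4-channel inputs was essentially just appending a constant alpha plane (numpy sketch; the assumption that an all-255 alpha is the right way to signal "no mask" is exactly the part I'm unsure about):

```python
import numpy as np

def add_opaque_alpha(rgb):
    """Turn an HxWx3 uint8 RGB image into HxWx4 RGBA with alpha == 255
    everywhere (my assumption for 'no mask' input; hence the question)."""
    assert rgb.ndim == 3 and rgb.shape[2] == 3 and rgb.dtype == np.uint8
    alpha = np.full(rgb.shape[:2] + (1,), 255, dtype=np.uint8)
    return np.concatenate([rgb, alpha], axis=2)

# Tiny random image standing in for a real frame.
rgb = np.random.default_rng(0).integers(0, 256, size=(4, 6, 3), dtype=np.uint8)
rgba = add_opaque_alpha(rgb)
```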
My second question is: why does the neuspp branch fail in this case? Did I make a mistake or miss something?
I noticed that there are predefined weights stored in `utils/mlp_weights.txt` that are used for initialization. I wonder how these weights were generated, and also whether they will (magically) work for all data, or whether we have to come up with a new version if the data distribution changes a lot.
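My guess (please correct me if this is not what `mlp_weights.txt` contains) is that they come from something like the geometric initialization of SAL/IGR, where the SDF MLP's weights are drawn so that the untrained network already approximates a sphere. A single-hidden-layer numpy sketch of that idea, with made-up width and radius:

```python
import numpy as np

def geometric_init(d_in=3, d_hidden=256, radius=0.5, seed=0):
    """Geometric initialization (Atzmon & Lipman, SAL): draw weights so the
    *untrained* MLP approximates the sphere SDF f(x) ~= ||x|| - radius.
    Width and radius here are made up for illustration."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, np.sqrt(2.0 / d_hidden), size=(d_hidden, d_in))
    b1 = np.zeros(d_hidden)                                  # zero hidden biases
    w2 = np.full((1, d_hidden), np.sqrt(np.pi / d_hidden))   # mean of SAL's last-layer init
    b2 = np.array([-radius])                                 # pushes the zero level set out to `radius`
    return w1, b1, w2, b2

def sdf(x, params):
    """Forward pass of the tiny MLP (ReLU here for brevity; SAL/NeuS use softplus)."""
    w1, b1, w2, b2 = params
    h = np.maximum(w1 @ np.asarray(x, dtype=float) + b1, 0.0)
    return float(w2 @ h + b2)

params = geometric_init()
inside = sdf([0.0, 0.0, 0.0], params)   # exactly -radius: the origin is inside
outside = sdf([1.5, 0.0, 0.0], params)  # roughly ||x|| - radius: outside
```

If that's what the file encodes, it would explain why a single set of weights works across scenes: it only bakes in a generic "start from a sphere" prior, not anything data-specific.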
Thanks in advance.