iraj465 opened this issue 1 year ago
@bennyguo Somehow the object is not being scaled inside the sphere. I tried increasing the radius, but a portion of the object is still being cut out. Why is this happening? The nerf_synthetic objects are rendered at the center of mass, but not custom COLMAP datasets.
@iraj465 The camera normalization bug is now fixed. See https://github.com/bennyguo/instant-nsr-pl/issues/55 for more details. In your case, setting dataset.center_est_method=point
should give fair results. Could you try with the latest code and see if the problem is solved?
@bennyguo I tried setting dataset.center_est_method=point, and now more of the object is inside the mesh, but the shape seems pretty off.
Current output on the data posted above:
This used mc with resolution=1024, but the texture looks quite different from the dataset. I have provided all the files in the drive link above for testing. Let me know what the problem might be and I can dig into the code and get back to you.
@bennyguo I tried the above approach on 10 other variants to check the variance. The results are mostly the same across all of them; the findings can be summarised as:
After some experimentation, I found the incorrect color was probably due to the limited viewing angles in your training data. The cameras are all at the same height, which makes it hard to optimize view-dependent colors. This could be problematic in retrieving vertex colors because I currently use the inverse normal direction as the viewing direction for MLP evaluation, and this query direction may not be seen in training data for many positions. For example, the inverse normal direction of the shoe top points downwards, and there's no photo taken in this viewing direction in the training set, leading to weird (uncontrolled) vertex colors. I just pushed an update to support training without view-dependent colors, i.e., assuming diffuse material. Here's some results I got on your data:
# training with view-dependent color
python launch.py --config configs/neus-colmap.yaml --train dataset.root_dir=load/shoe dataset.center_est_method=lookat dataset.up_est_method=camera model.radius=0.3
# training with diffuse color, without masks
python launch.py --config configs/neus-colmap.yaml --train dataset.root_dir=load/shoe dataset.center_est_method=lookat dataset.up_est_method=camera model.radius=0.3 model.texture.name=volume-color model.texture.input_feature_dim='${model.geometry.feature_dim}'
# training with diffuse color, with masks
python launch.py --config configs/neus-colmap.yaml --train dataset.root_dir=load/shoe dataset.center_est_method=lookat dataset.up_est_method=camera model.radius=0.3 model.texture.name=volume-color dataset.apply_mask=true system.loss.lambda_mask=0.1
Note that using dataset.center_est_method=lookat
better suits your data, and you may provide a tighter radius value to reduce the floaters. Hope this helps!
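To illustrate the diffuse-material change described above: the difference between the two texture modes is whether the color MLP receives a viewing direction at all. This is only a minimal PyTorch sketch with hypothetical module names and dimensions, not the repo's actual classes:

```python
import torch
import torch.nn as nn

class ViewDependentTexture(nn.Module):
    """Color depends on geometry features AND a viewing direction.
    At mesh export time, the inverse normal is used as the query
    direction, which may be unseen in training."""
    def __init__(self, feature_dim=13, dir_dim=3, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, features, dirs):
        return self.mlp(torch.cat([features, dirs], dim=-1))

class DiffuseTexture(nn.Module):
    """Color depends on geometry features only (diffuse assumption),
    so vertex colors no longer rely on an unseen query direction."""
    def __init__(self, feature_dim=13, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, features):
        return self.mlp(features)

features = torch.randn(8, 13)  # per-point geometry features
normals = torch.randn(8, 3)
dirs = -normals / normals.norm(dim=-1, keepdim=True)  # inverse normals

rgb_viewdep = ViewDependentTexture()(features, dirs)  # needs a direction
rgb_diffuse = DiffuseTexture()(features)              # direction-free
print(rgb_viewdep.shape, rgb_diffuse.shape)  # torch.Size([8, 3]) twice
```

With cameras all at one height, directions like "looking up at the shoe top" never appear in training, which is why the diffuse variant gives more stable vertex colors here.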
Looks good @bennyguo! Although I'm not able to test this from my side; I'm getting this error in color extraction:
File "/instant-nsr-pl/models/texture.py", line 51, in forward
color = self.network(network_inp).view(*features.shape[:-1], self.n_output_dims).float()
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/tinycudann/modules.py", line 177, in forward
output = _module_function.apply(
File "/opt/conda/lib/python3.8/site-packages/tinycudann/modules.py", line 89, in forward
native_ctx, output = native_tcnn_module.fwd(input, params)
RuntimeError: /tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:87 check failed input.size(1) == n_input_dims()
It was working in the last commit though!
What command are you running? You probably need to train with diffuse color (the last two commands I mentioned above) instead of directly testing with old checkpoints.
Yeah, I was running exactly the above commands, and it's giving an error. I'm training from scratch, no checkpoints. This is where the error is coming from:
File "/instant-nsr-pl/models/texture.py", line 51, in forward
color = self.network(network_inp).view(*features.shape[:-1], self.n_output_dims).float()
That's weird, I could run all three commands without any errors. Could you please confirm that you have pulled the latest code and copied the full command? If it's still not working, please try using model.texture.input_feature_dim=13 instead of model.texture.input_feature_dim='${model.geometry.feature_dim}'.
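For context on the traceback above: tiny-cuda-nn's forward pass guards that the width of the input tensor matches the number of input dims the network was built with, which is exactly the check that fails when the texture config and the geometry feature dim disagree. A plain-Python sketch of that guard (illustrative only, not tcnn's real implementation):

```python
import torch

def check_texture_input(inp: torch.Tensor, n_input_dims: int) -> bool:
    """Mimic the fwd() guard: the input's last dimension must equal
    the dims the network was constructed with."""
    if inp.shape[-1] != n_input_dims:
        raise RuntimeError(
            f"check failed: input.size(1)={inp.shape[-1]} "
            f"!= n_input_dims()={n_input_dims}"
        )
    return True

# texture network built for 13-dim geometry features
ok = check_texture_input(torch.randn(4, 13), 13)  # passes
try:
    check_texture_input(torch.randn(4, 16), 13)   # mismatched config
    mismatch_raised = False
except RuntimeError:
    mismatch_raised = True
```

So the error usually means the configured input_feature_dim (or an old build of the bindings) disagrees with what the geometry network actually outputs.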
@bennyguo It seems to have been an issue with the tinycudann torch bindings; a fresh install solved it.
The meshes look pretty good now, and the alignment issue is also solved. I can confirm this, as I tested it on other datasets too.
There seems to be some jaggedness on the mesh surface, though. Is there any way to refine or smoothen the mesh? I used an mc resolution of 1024.
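The thread doesn't resolve the jaggedness question, but a common post-processing option for stair-stepped marching-cubes surfaces is Laplacian smoothing (e.g. trimesh's smoothing filters on the exported mesh). A minimal self-contained sketch of the idea on a toy 2D ring, so the mechanism is explicit (illustrative only, not the repo's code):

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Move each vertex a fraction `lam` toward the centroid of its
    neighbors, repeated `iterations` times. Reduces high-frequency
    jaggedness at the cost of slight shrinkage."""
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[n].mean(axis=0) for n in neighbors])
        v += lam * (centroids - v)
    return v

# toy "mesh": a noisy ring, each vertex connected to its two neighbors
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
ring = np.stack([np.cos(angles), np.sin(angles)], axis=1)
noisy = ring + 0.1 * rng.standard_normal(ring.shape)
neighbors = [[(i - 1) % 32, (i + 1) % 32] for i in range(32)]

smoothed = laplacian_smooth(noisy, neighbors)

def roughness(v):
    """Deviation of each vertex from its local neighbor average."""
    return np.linalg.norm(v - np.array([v[n].mean(axis=0) for n in neighbors]))

print(roughness(smoothed) < roughness(noisy))  # True
```

On a real triangle mesh the neighbor lists come from the face connectivity; increasing the mc resolution alone won't remove stair-stepping, since it's inherent to the voxel grid.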
I tried running it as you suggested, but I end up with an error like this:
File "/workspace/instant-nsr-pl/systems/neus.py", line 64, in preprocess_data
rgb = self.dataset.all_images[index].view(-1, self.dataset.all_images.shape[-1]).to(0)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
The dataset being used is the dataset provided in the issue.
command run
python launch.py --config configs/neus-colmap.yaml --train dataset.root_dir=/workspace/custom-dataset/SHOGCYNH8DGJGUPJ dataset.center_est_method=lookat dataset.up_est_method=camera model.radius=0.3 model.texture.name=volume-color model.texture.input_feature_dim='${model.geometry.feature_dim}'
This looks like a bug; just change line 35 to this and it should work.
if 'index' in batch: # validation / testing
    index = batch['index'].to(self.dataset.all_images.device)
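The underlying failure is a generic PyTorch one rather than anything repo-specific: indexing a CPU tensor with a CUDA index tensor (or vice versa) raises exactly this RuntimeError. A minimal sketch of the fix, with hypothetical tensor names mirroring the traceback:

```python
import torch

# all_images lives on CPU (large image stacks are kept out of GPU memory)
all_images = torch.rand(10, 64, 64, 3)

# during validation/testing, batch['index'] may arrive on the GPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
index = torch.tensor([2, 5], device=device)

# the fix: move the index to wherever the indexed tensor lives first
index = index.to(all_images.device)
rgb = all_images[index].view(-1, all_images.shape[-1])
print(rgb.shape)  # torch.Size([8192, 3])  (2 images * 64 * 64 pixels)
```

Moving the small index tensor to the images' device is cheaper than moving the image stack to the GPU, which is why the patch above converts `batch['index']` rather than `all_images`.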
Hi, I was trying out the NeuS method using the neus-colmap config to extract a good textured mesh, but currently only a portion of the mesh is generated. I tried changing various thresholds, but got the same result. I have provided the dataset and the results of the NeuS meshing training if you want to test why it's failing. https://drive.google.com/drive/folders/1XK8n3G482rxVGMBeh6GLwIkVK2w5GxxH?usp=sharing
@bennyguo I have tried using different thresholds, resolutions, and chunk sizes, but it's still failing.