Open · Yasaman-Haghighi opened 1 year ago

Hello, thanks for your great code! I am using it to get textured meshes. I get a low PSNR, but the rendered RGB images have very high quality. I was able to get very good colors for my mesh with your previous version, but the colors from the current version seem to be wrong. Could it be related to the alphas that you are calculating? Should I multiply the RGB values by some parameter to get the correct colors (similar to the rendered RGB)?
Hi! Could you provide some images to illustrate this issue?
Hello,
The one on top is what I get with the new code and the one below is what I used to get with the old version. The loss values and rendered RGB images are the same for both versions.
Which specific old version (commit hash) did you use? And how do you manage to get the vertex colors? If the rendered RGB images are the same, the vertex colors should be too.
I'm using the following code for retrieving the color, and the old version is the commit "Automatic calculation of render_step_size".
```python
import torch
import torch.nn.functional as F

b, _ = mesh['v_pos'].shape
sdf, sdf_grad, feature = self.geometry(mesh['v_pos'], with_grad=True, with_feature=True)
normal = F.normalize(sdf_grad, p=2, dim=-1)
t_dirs = torch.tensor([0., 0., -1.], device=mesh['v_pos'].device).repeat(b, 1)  # random direction
color = self.texture(feature, t_dirs, normal)
```
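(Side note: once `color` is computed per vertex, writing it out is straightforward. A minimal sketch, assuming `trimesh` is installed and that the face indices live in `mesh['t_pos_idx']` — the field name may differ in your fork:)

```python
# Sketch: export per-vertex colors to a PLY with trimesh.
import numpy as np
import trimesh

v = mesh['v_pos'].detach().cpu().numpy()
f = mesh['t_pos_idx'].detach().cpu().numpy()  # assumed face-index field
c = (color.clamp(0, 1).detach().cpu().numpy() * 255).astype(np.uint8)

trimesh.Trimesh(vertices=v, faces=f, vertex_colors=c).export('mesh.ply')
```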
Did you re-train the model in the new version, or did you simply restore the old pretrained model?
I re-trained it.
Could you list the major modifications you made to the config file? And could you check whether the output vertex colors lie in the range [0, 1]?
Yes, the values are within [0, 1]. The only thing I changed in the config file is `num_samples_per_ray`. I tested both 1024 and 2048, but the color issue remains the same for both.
This is very weird. Could you make sure that you get correct results if you check out the old version? Please also try different random view directions (like [0, 1, 0] or [0, -1, 0]) and see if the colors change.
I can get correct results with the old version, and changing the view direction doesn't affect the output.
I tested on the Lego scene, and the only artifact I observed was due to the choice of the random direction. Did you try inputting the current facing direction to see whether the faced area looks good? The facing direction can be roughly estimated by turning on the XYZ coordinates in MeshLab.
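(To try a facing direction per vertex rather than a constant, one option is a sketch like the following — `cam_pos` is a made-up example position, not something read from the repo:)

```python
# Sketch: view directions pointing from an assumed camera position toward
# each vertex (ray directions run camera -> surface in the NeRF convention).
import torch
import torch.nn.functional as F

cam_pos = torch.tensor([0., 0., 2.], device=mesh['v_pos'].device)  # assumed position
t_dirs = F.normalize(mesh['v_pos'] - cam_pos, p=2, dim=-1)
color = self.texture(feature, t_dirs, normal)
```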
Thank you for your quick replies. I changed the marching cube threshold to -0.5 and the colors are correct now. Is there a reason for this?
This is probably due to the removal of `model.geometry.sdf_bias` in favor of `model.geometry.mlp_network_config.sphere_init_radius`. Make sure you use the same config file when training and testing.
Thank you, but I am using the same config file during training and testing.
Could you please check that there is no `model.geometry.sdf_activation` in the config file you used (and in the `parsed.yaml` you used for testing)?
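(A quick way to check, as a sketch — this assumes `parsed.yaml` is plain YAML readable with PyYAML:)

```python
# Sketch: look for a leftover sdf_activation entry in parsed.yaml.
import yaml

with open('parsed.yaml') as f:
    cfg = yaml.safe_load(f)

print('sdf_activation' in cfg.get('model', {}).get('geometry', {}))
```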
I don’t have that in my config.
Sorry, I couldn't replicate this problem. Theoretically, if you set the marching cube threshold to -0.5, the geometry could be very wrong (even empty).
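(For context: the threshold is just the iso-level handed to marching cubes, so on a well-behaved SDF anything other than 0 extracts an offset surface. A sketch with `scikit-image`, assuming `sdf_grid` is a dense grid of SDF values:)

```python
# Sketch: the marching cube threshold is the iso-level. On a proper SDF,
# level=0.0 is the surface; level=-0.5 extracts a surface 0.5 units inside it,
# or nothing at all if the SDF never reaches -0.5.
from skimage import measure

verts, faces, normals, values = measure.marching_cubes(sdf_grid, level=-0.5)
```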
I'm trying to implement the approach above (querying `self.geometry` and `self.texture` at the mesh vertices) to get texture data. However, when I run the trainer, I encounter this error:

```
File "/instant-nsr-pl/models/network_utils.py", line 49, in forward
    return self.encoding(x, *args) if not self.include_xyz else torch.cat([x * self.xyz_scale + self.xyz_offset, self.encoding(x, *args)], dim=-1)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_cat)
```
Did you also face this?
@MIMNSI Hi! Could you check whether `mesh['v_pos']` is on CUDA?
Just checked, it is on CPU. Can you guide me as to how to set the device to CUDA?
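(For reference, a sketch of the move — every tensor that feeds the network has to be on the same device, not just `v_pos`; the `t_pos_idx` field name is assumed:)

```python
# Sketch: move the mesh tensors onto the same device as the model.
import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
mesh['v_pos'] = mesh['v_pos'].to(device)
mesh['t_pos_idx'] = mesh['t_pos_idx'].to(device)  # assumed field name
```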
I shifted it to 'cuda:0', the CUDA device on my machine. However, it is now throwing a tensor size mismatch error:
File "/instant-nsr-pl/models/neus.py", line 178, in forward out = self.forward(rays) File "/instant-nsr-pl/models/neus.py", line 146, in forward alpha = self.get_alpha(sdf, normal, t_dirs, dists)[...,None] File "/instant-nsr-pl/models/neus.py", line 101, in get_alpha estimated_next_sdf = sdf[...,None] + iter_cos dists.reshape(-1, 1) 0.5 RuntimeError: The size of tensor a (857156) must match the size of tensor b (81623) at non-singleton dimension 0
Hi, you can now export textured meshes with the latest code! For more information, see https://github.com/bennyguo/instant-nsr-pl/issues/34#issuecomment-1496877662.