[Closed] sunrainyg closed this issue 1 year ago
Hi, maybe this is the cause? https://github.com/NVlabs/tiny-cuda-nn/issues/236
Thanks for your suggestion. I changed the batch_size from 262144 to 1024, but I still get the same error.
I'm also facing the same issue.
Hi, I tried dropping the batch_size as well and am facing the same issue. Has anyone found a solution? Why does this come up only during the evaluation stage? Training seems to be fine.
Thanks!
I think Tom's suggestion is a good thing to try here ( https://github.com/NVlabs/tiny-cuda-nn/issues/236#issuecomment-1376996853 ):
slice your batch into chunks of, say, 1m elements, and compute parameter gradients for each of these chunks separately. Then, simply average those gradients. The resulting values will be the same as if you had computed them from a single large batch. (Ignoring fp32 order-of-addition quirks, which shouldn't be significant here.)
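The chunked-gradient idea above can be sketched in plain PyTorch. This is a generic illustration, not code from this repo: `model`, `loss_fn`, `inputs`, and `targets` are placeholders, and each chunk's loss is scaled by its share of the batch so the accumulated gradients match a single full-batch backward pass (up to fp32 addition order).

```python
import torch

def accumulate_gradients(model, loss_fn, inputs, targets, chunk_size):
    """Compute gradients over a large batch in fixed-size chunks.

    The result matches a single full-batch backward pass, because
    PyTorch accumulates into .grad across backward() calls and each
    chunk's loss is weighted by chunk_size / batch_size.
    """
    model.zero_grad()
    n = inputs.shape[0]
    for start in range(0, n, chunk_size):
        x = inputs[start:start + chunk_size]
        y = targets[start:start + chunk_size]
        # Scale so that summing the per-chunk mean losses reproduces
        # the full-batch mean loss.
        loss = loss_fn(model(x), y) * (x.shape[0] / n)
        loss.backward()  # gradients accumulate into param.grad
```

After the loop, a single `optimizer.step()` applies the averaged gradients, so memory usage is bounded by the chunk size rather than the full batch.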
Maybe there's a way to effectively use one GPU in DDP mode?
Hi,
I found that this error is raised when the input to tiny-cuda-nn has size 0. Adding a check at the beginning of radiance_field.query_density(positions) solves the problem in my case:
```python
def query_density(self, x, return_feat: bool = False):
    if x.shape[0] == 0:
        if return_feat:
            return x.new_zeros(0, 1), x.new_zeros(0, self.geo_feat_dim)
        else:
            return x.new_zeros(0, 1)
    # other code
```
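The guard can be checked in isolation with a standalone sketch. Here `query_density` is a stripped-down stand-in (not the real radiance field), and `GEO_FEAT_DIM = 15` is an assumption based on typical Instant-NGP configs (16 network outputs minus 1 density channel); adjust it to your model.

```python
import torch

GEO_FEAT_DIM = 15  # assumed feature width; matches common Instant-NGP setups

def query_density(x, return_feat=False):
    # Return correctly shaped empty tensors instead of passing a
    # zero-size batch to tiny-cuda-nn, which triggers the error.
    if x.shape[0] == 0:
        if return_feat:
            return x.new_zeros(0, 1), x.new_zeros(0, GEO_FEAT_DIM)
        return x.new_zeros(0, 1)
    # Placeholder for the real network forward pass.
    raise NotImplementedError("network forward pass goes here")

# Empty input (e.g. no samples survive occupancy-grid filtering during eval):
density, feat = query_density(torch.empty(0, 3), return_feat=True)
```

Empty batches typically show up only at evaluation, when occupancy-grid filtering can discard every sample in a ray chunk, which would explain why training appears unaffected.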
Cheers
When I run
python examples/train_ngp_nerf.py --train_split train --scene lego
everything is fine at first, but after 321 s the error appears. PS: it was fine when I ran
python examples/train_mlp_nerf.py --train_split train --scene lego
Could you please help me find the problem? Thank you!