Closed: smontode24 closed this issue 2 years ago
Update: when using separate encoders it seems to work fine, but it would be nice to know how to use the Composite encoding, since the documentation says that using separate encoders and a separate network (instead of tcnn.NetworkWithInputEncoding) leads to worse performance. (A sketch of the Composite route follows the workaround below.)
The current workaround would be to do something like:
import torch
import tinycudann as tcnn

# Model definition (hash_encoder_config, oneblob_encoder_config, mlp_config and n_output_dims defined elsewhere)
spatial_encoding = tcnn.Encoding(3, hash_encoder_config)
dir_encoding = tcnn.Encoding(3, oneblob_encoder_config)
mlp_network = tcnn.Network(spatial_encoding.n_output_dims + dir_encoding.n_output_dims, n_output_dims, mlp_config)

# Forward pass: encode positions and viewing directions separately, then concatenate
out_spatial = spatial_encoding(coords)
out_vdir = dir_encoding(viewing_dir)
encoding = torch.cat([out_spatial, out_vdir], 1)
out = mlp_network(encoding)
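Once the Composite route works, I would expect it to look something like the sketch below: a HashGrid over the 3 spatial dimensions and a OneBlob encoding over the 3 viewing-direction dimensions, fused with the MLP via tcnn.NetworkWithInputEncoding. The concrete values (grid levels, n_bins, MLP size, 4 output dims) are just illustrative, not taken from my actual setup:

import torch
import tinycudann as tcnn

# Composite encoding: HashGrid over the 3 spatial dims, OneBlob over the 3 view-direction dims
composite_encoding_config = {
    "otype": "Composite",
    "nested": [
        {"n_dims_to_encode": 3, "otype": "HashGrid", "n_levels": 16,
         "n_features_per_level": 2, "log2_hashmap_size": 19,
         "base_resolution": 16, "per_level_scale": 2.0},
        {"n_dims_to_encode": 3, "otype": "OneBlob", "n_bins": 16},
    ],
}
mlp_config = {"otype": "FullyFusedMLP", "activation": "ReLU",
              "output_activation": "None", "n_neurons": 64, "n_hidden_layers": 2}

# Encoding and MLP fused into a single module: 6 input dims (coords + viewing_dir), 4 output dims
model = tcnn.NetworkWithInputEncoding(6, 4, composite_encoding_config, mlp_config)

# Forward pass on the concatenated inputs
out = model(torch.cat([coords, viewing_dir], 1))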
Same problem here. I will switch to separate encoders for now, but using tcnn.NetworkWithInputEncoding is supposed to be faster, as claimed in the README.
Fixed on latest master via https://github.com/NVlabs/tiny-cuda-nn/commit/e421b8b2d4a4065e04bf0c724c8ba7652d716239
Many apologies for taking so long to get to this one.
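For the pip-installed bindings, reinstalling from master should pull in the fix; something along these lines (exact flags may vary with your setup):

pip install --upgrade --force-reinstall git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch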
Hi,
I am obtaining an illegal memory access when composing a HashGrid encoding with any other encoding (see the example below). I have observed that the error only happens when composing a HashGrid encoding with another encoding; using the HashGrid alone, or composing a TriangleWave with a OneBlob encoding, works fine. I am using Python and installed the package through pip (pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch). The following code raises an illegal memory access:
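A hypothetical minimal reproduction along these lines (the snippet and hyperparameters here are illustrative, not my exact code; the HashGrid uses default parameters) would be:

import torch
import tinycudann as tcnn

# Composite encoding nesting a HashGrid with another encoding (Frequency here);
# this kind of composition is what triggers the crash
encoding = tcnn.Encoding(6, {
    "otype": "Composite",
    "nested": [
        {"n_dims_to_encode": 3, "otype": "HashGrid"},
        {"n_dims_to_encode": 3, "otype": "Frequency", "n_frequencies": 12},
    ],
})

x = torch.rand(1024, 6, device="cuda")
y = encoding(x)        # the illegal memory access shows up when HashGrid is composed with another encoding
y.sum().backward()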
Output:
Additional information:
sample/mlp_learning_an_image_pytorch.py
There is one closed issue related to this, #57, but since it has not been solved, I am opening this one to see what the problem is. Do you have any idea of what could be causing it, or can you think of any workaround?
Thanks in advance!