NVlabs / tiny-cuda-nn

Lightning fast C++/CUDA neural network framework

Possibility to integrate tiny-cuda-nn with my own custom CUDA kernel? #469

Open bchao1 opened 2 weeks ago


Hi,

First of all, thanks for this amazing library! I was wondering if the following is doable (or how complicated it would be) with the tiny-cuda-nn framework.

I have a PyTorch model that uses a custom CUDA kernel implementing some of its forward/backward passes. The gradients from the CUDA kernel are connected back to PyTorch, so autodiff works with the other modules defined in PyTorch.

Now, I would like to integrate tiny-cuda-nn into my model. The caveat is that the tiny-cuda-nn input is actually computed inside my custom CUDA kernel (for design and efficiency reasons, it is not practical to expose this computation to PyTorch), so I cannot use the PyTorch bindings you already provide. Does this mean I have to initialize a tiny-cuda-nn instance from my own CUDA/C++ code?

From my understanding, what I'll have to do is:

  1. Define the tiny-cuda-nn weights in PyTorch
  2. Pass the weights from the Python process to my custom CUDA extension
  3. Initialize a tiny-cuda-nn instance in my own CUDA/C++ code
  4. Set the tiny-cuda-nn parameters manually from the weights passed in from the Python process
  5. Connect the forward/backward passes of my CUDA kernel with those of tiny-cuda-nn
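For reference, steps 3-5 above would roughly map onto tiny-cuda-nn's C++ API along these lines. This is an uncompiled sketch, not a definitive implementation: the config values are placeholders, `n_input_dims`/`batch_size` and the parameter pointers are assumed to come from the caller, and the exact `set_params` signature has changed between tiny-cuda-nn versions, so it should be checked against the headers of the version actually built against.

```cpp
#include <tiny-cuda-nn/common.h>
#include <tiny-cuda-nn/config.h>
#include <tiny-cuda-nn/network_with_input_encoding.h>

using namespace tcnn;

// Sketch: params/gradients are device pointers backed by the PyTorch
// tensors created in step 1 and passed in via the extension (step 2).
void run_tcnn(cudaStream_t stream,
              uint32_t n_input_dims, uint32_t n_output_dims, uint32_t batch_size,
              network_precision_t* params, network_precision_t* gradients) {
	// Step 3: build the model on the host (not inside a kernel) from a JSON config.
	// Encoding/network options here are placeholders; see DOCUMENTATION.md.
	nlohmann::json encoding = {{"otype", "HashGrid"}};
	nlohmann::json network_cfg = {
		{"otype", "FullyFusedMLP"}, {"activation", "ReLU"},
		{"output_activation", "None"}, {"n_neurons", 64}, {"n_hidden_layers", 2},
	};
	NetworkWithInputEncoding<network_precision_t> network{
		n_input_dims, n_output_dims, encoding, network_cfg};

	// Step 4: point the model at the externally owned weights and gradient
	// buffers instead of letting it allocate its own. NOTE: the signature
	// differs across versions (older ones take a separate backward_params).
	network.set_params(params, params, gradients);

	// Step 5: forward/backward around the custom kernel. The input matrix
	// would be filled by the custom CUDA kernel before this call.
	GPUMatrix<float> input{n_input_dims, batch_size};
	GPUMatrix<network_precision_t> output{n_output_dims, batch_size};
	auto ctx = network.forward(stream, input, &output,
	                           /*use_inference_params=*/false,
	                           /*prepare_input_gradients=*/true);

	// dL_doutput comes from downstream autodiff; dL_dinput flows back into
	// the custom kernel's backward pass, and weight gradients land in
	// `gradients`, i.e. in the PyTorch tensor's storage.
	GPUMatrix<network_precision_t> dL_doutput{n_output_dims, batch_size};
	GPUMatrix<float> dL_dinput{n_input_dims, batch_size};
	network.backward(stream, *ctx, input, output, dL_doutput, &dL_dinput);
}
```

With this wiring, the PyTorch side only ever sees the weight and gradient tensors, so a thin `torch.autograd.Function` around the extension is enough to keep autodiff intact.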

Thank you so much!