
Nvdiffrast - Modular Primitives for High-Performance Differentiable Rendering

nvdiffrast with pycuda #188

Closed 3 months ago by briliantnugraha

briliantnugraha commented 3 months ago

Dear NVidia Labs, thanks for this nvdiffrast repo.

Btw, I have a question: is there any way (or plan) to build nvdiffrast with pycuda/numba, without having to use torch/tensorflow? Currently, we've encountered an issue porting this package to a Jetson device without the pytorch/TF packages. And AFAIK, we can't use pytorch and TensorRT models at the same time. Thanks!

s-laine commented 3 months ago

At the moment, there is no way to do that. However, achieving this probably wouldn't require very much additional code.

The vast majority of nvdiffrast (most of nvdiffrast/common) is implemented in CUDA. This code should be usable as-is, but I have never used pycuda myself, so I don't know how exactly you'd get pycuda to compile and run it.
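For reference, here is a minimal sketch of how pycuda compiles and launches a CUDA kernel through its `SourceModule` API. The kernel below is a trivial stand-in, not one of nvdiffrast's actual kernels; the real ones in nvdiffrast/common have more involved signatures and launch logic.

```python
# Minimal pycuda example: compile a CUDA kernel from source and launch it.
# The kernel is a placeholder, not an nvdiffrast kernel.
import numpy as np
import pycuda.autoinit  # creates and activates a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float* out, const float* in, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * k;
}
""")
scale = mod.get_function("scale")

n = 1024
x = gpuarray.to_gpu(np.random.randn(n).astype(np.float32))
y = gpuarray.empty_like(x)
scale(y, x, np.float32(2.0), np.int32(n),
      block=(256, 1, 1), grid=((n + 255) // 256, 1))
print(y.get()[:4])  # scaled copy of the input
```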

The pytorch bindings (all the .cpp files in nvdiffrast/torch) are fairly simple C++ functions that interface between pytorch and CUDA — allocating the output tensors, acquiring GPU memory pointers for input/output tensors, and launching the necessary CUDA kernel(s). If you wrote similar interfacing functions for pycuda/numba, this should be enough to use nvdiffrast's CUDA kernels.
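As a rough illustration of that interfacing pattern on the pycuda side, such a function might look like the sketch below. The kernel handle `rasterize_kernel`, the function name `rasterize_fwd`, and the argument list are hypothetical placeholders; the actual signatures are defined by the kernels in nvdiffrast/common and the .cpp bindings.

```python
# A rough sketch (not nvdiffrast's actual API) of a pycuda interfacing
# function mirroring what the .cpp bindings do: allocate the output buffer,
# gather device pointers, and launch the kernel.
import numpy as np
import pycuda.gpuarray as gpuarray

def rasterize_fwd(rasterize_kernel, pos, tri, height, width):
    # Allocate the output, as the C++ binding does with torch tensors.
    out = gpuarray.zeros((height, width, 4), dtype=np.float32)
    # Launch the kernel; the block/grid sizes and argument list here are
    # placeholders, not the real kernel's signature.
    block = (16, 16, 1)
    grid = ((width + 15) // 16, (height + 15) // 16)
    rasterize_kernel(out, pos, tri,
                     np.int32(pos.shape[0]), np.int32(tri.shape[0]),
                     np.int32(height), np.int32(width),
                     block=block, grid=grid)
    return out
```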

The pytorch ops are then wrappers that connect these "raw" C++ computation functions to pytorch's automatic differentiation engine. I would imagine you'd need something similar with TensorRT.
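For context, that wrapper pattern on the pytorch side looks roughly like the sketch below. This is the general `torch.autograd.Function` shape, not nvdiffrast's exact ops; `rasterize_fwd_cuda` and `rasterize_grad_cuda` stand in for the raw computation functions.

```python
# A minimal sketch of wrapping raw computation functions into pytorch's
# autograd. rasterize_fwd_cuda / rasterize_grad_cuda are hypothetical
# stand-ins; nvdiffrast's real ops differ in detail.
import torch

class Rasterize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, pos, tri):
        out = rasterize_fwd_cuda(pos, tri)    # raw forward computation
        ctx.save_for_backward(pos, tri, out)  # stash tensors for backward
        return out

    @staticmethod
    def backward(ctx, dy):
        pos, tri, out = ctx.saved_tensors
        g_pos = rasterize_grad_cuda(pos, tri, out, dy)  # raw gradient kernel
        return g_pos, None  # no gradient w.r.t. the triangle indices
```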

briliantnugraha commented 3 months ago

I see, thanks for the insight @s-laine ~