NVlabs / curobo

CUDA Accelerated Robot Library
https://curobo.org

Upload pip package to pypi with pre-compiled cuda kernels. #13

Closed: blooop closed this issue 3 months ago

blooop commented 1 year ago

I made curobo into a pip package using:

python setup.py bdist_wheel

and uploaded it to a local pip registry so that I can now run:

pip install nvidia_curobo

in other repos without needing to compile it each time.

However, when I use the pip-installed curobo wheel, the CUDA kernels are JIT-compiled every time. The output looks like:

kinematics_fused_cu not found, JIT compiling...
geom_cu binary not found, jit compiling...
lbfgs_step_cu not found, JIT compiling...
line_search_cu not found, JIT compiling...
tensor_step_cu not found, jit compiling...
  1. Do you know how to fix this? I have done some basic searching online but have not found anything helpful.
  2. Can you publish curobo on pypi to make it easier for anyone to use curobo?

Thanks
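
The messages above are the usual output of PyTorch's extension loader: when a prebuilt binary module is not found, torch.utils.cpp_extension compiles the .cu sources at import time. A minimal sketch of that JIT pattern, with illustrative module and file names rather than curobo's actual layout:

from torch.utils.cpp_extension import load

# If no prebuilt "kinematics_fused_cu" binary ships with the package, this call
# invokes nvcc at import time, which is what prints "not found, JIT compiling...".
kinematics_fused_cu = load(
    name="kinematics_fused_cu",
    sources=["kinematics_fused.cu"],  # illustrative path
    verbose=True,
)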

balakumar-s commented 1 year ago

Here is how we create a pip package. Use either a Docker image with PyTorch or a Python environment with torch already installed, and run the commands below:

python3 -m pip install build
cd curobo && python3 -m build --no-isolation

This will create the .whl file that you can host in your pip registry.

Let me know if this works. You might have to install venv if it's not already installed.

We are looking at putting it on PyPI.

blooop commented 1 year ago

Thanks. Those steps produce the wheel, but the JIT compile steps don't go away. It's not really that big of a deal though; I can wait for an official pip package, as this temporary solution works well enough.
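
If the build does not declare the CUDA sources as extension modules, the wheel ships only Python files and the kernels are still compiled on first use. One way to bake pre-compiled kernels into the wheel is to list them as CUDA extensions in setup.py; a minimal sketch, assuming an illustrative module name and source layout rather than curobo's actual one:

# setup.py -- ahead-of-time compilation of CUDA kernels into the wheel
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="nvidia_curobo",
    ext_modules=[
        CUDAExtension(
            name="curobolib.kinematics_fused_cu",   # illustrative module name
            sources=["src/kinematics_fused.cpp",    # illustrative paths
                     "src/kinematics_fused_kernel.cu"],
        ),
    ],
    # BuildExtension runs nvcc at wheel-build time, so the resulting .whl
    # already contains the compiled kernels instead of JIT-compiling them later.
    cmdclass={"build_ext": BuildExtension},
)

A wheel built this way with python3 -m build --no-isolation is platform-specific rather than pure Python, so it has to be built against the same torch and CUDA versions used at runtime.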

balakumar-s commented 1 year ago

You can also reduce the compilation time by setting the environment variable export TORCH_CUDA_ARCH_LIST="7.0+PTX", which compiles for only one architecture and relies on PTX for forward compatibility.
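
The same setting can be applied from Python before any kernels are built, which can be handy in a script that drives the build; a minimal sketch:

import os

# Restrict nvcc to compute capability 7.0 and also emit PTX; newer GPUs can
# JIT-translate the PTX, which is the forward compatibility mentioned above.
# This must be set before torch.utils.cpp_extension starts any compilation.
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.0+PTX"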

balakumar-s commented 3 months ago

We are deferring this to a later time.