triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License

Fix building DALI with `TRITON_DALI_SKIP_DOWNLOAD=ON` #222

Open szalpal opened 8 months ago

szalpal commented 8 months ago

This PR fixes building the DALI Backend directly within the Triton container with the `-D TRITON_SKIP_DALI_DOWNLOAD=ON` option passed to CMake.

When this option is passed to CMake, the `get_dali_paths` function is responsible for discovering the path of the DALI installation on the system. It does so by invoking DALI through Python directly. When building inside the Triton container, however, the system-defined Python is not the correct interpreter: the proper one is provided by the conda environment and lives at a different path than the system Python binary. This PR makes it possible to specify which Python binary `get_dali_paths` should invoke.
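To illustrate why the choice of interpreter matters, here is a minimal sketch (not the actual `get_dali_paths` implementation, which is written in CMake) of locating a package's install directory by asking a specific Python binary, so that a conda interpreter and the system interpreter can report different DALI paths:

```python
import os
import subprocess
import sys


def get_package_dir(python_bin: str, module: str) -> str:
    """Ask the given Python interpreter where `module` is installed.

    `python_bin` could be e.g. the conda environment's interpreter rather
    than the system one; each interpreter reports its own site-packages.
    """
    code = (
        f"import {module}, os; "
        f"print(os.path.dirname(os.path.abspath({module}.__file__)))"
    )
    result = subprocess.run(
        [python_bin, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    # In the real build this would query e.g. `nvidia.dali`; `json` is
    # used here only so the sketch runs anywhere.
    print(get_package_dir(sys.executable, "json"))
```

The key point is that the reported path depends entirely on which binary is passed as `python_bin`, which is exactly the ambiguity this PR resolves in the build scripts.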

From now on, when the DALI Backend is built within the Triton docker container, the `TRITON_DALI_BUILD_IN_TRITON` option must be turned ON (it is OFF by default).
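A hedged configuration sketch of the resulting CMake invocation inside the Triton container (the flag names come from this PR; the build directory layout and any other required options are assumptions):

```shell
# Inside the Triton container, from a build directory of dali_backend:
cmake .. \
  -D TRITON_SKIP_DALI_DOWNLOAD=ON \
  -D TRITON_DALI_BUILD_IN_TRITON=ON
cmake --build .
```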

dali-automaton commented 8 months ago

CI MESSAGE: [10619632]: BUILD STARTED

dali-automaton commented 8 months ago

CI MESSAGE: [10619632]: BUILD FAILED