triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License

Change the way DALI is installed inside Triton #245

Closed (szalpal closed this 2 months ago)

szalpal commented 2 months ago

Due to the removal of the Conda dependency from the tritonserver docker image, DALI Backend needs to change how the DALI wheel is installed within this image. We can no longer rely on pip install to do so: Triton's build system uses separate builder and runtime images. DALI Backend is built within the builder image, and the necessary artifacts are copied to the runtime image. Therefore DALI Backend will ship the DALI wheel unpacked (and installed with CMake) together with all of its dependencies. Additionally, Triton's build system will set the PYTHONPATH variable (https://github.com/triton-inference-server/server/pull/7216), so that the Python interpreter can pick up the proper DALI wheel.
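A rough illustration of the mechanism described above: a wheel is just a zip archive, so "installing" one without pip can amount to unpacking it into a directory and putting that directory on the interpreter's path, which is what setting PYTHONPATH in the runtime image achieves. The package name and paths below are hypothetical stand-ins, not the actual DALI wheel or Triton layout.

```python
# Hedged sketch: unpack a wheel and extend the import path by hand.
# "demo_pkg" is a toy stand-in for the DALI wheel; real paths in the
# Triton runtime image will differ.
import os
import sys
import tempfile
import zipfile

workdir = tempfile.mkdtemp()
wheel_path = os.path.join(workdir, "demo_pkg-1.0-py3-none-any.whl")

# Build a toy wheel containing a single module (a wheel is a zip archive).
with zipfile.ZipFile(wheel_path, "w") as whl:
    whl.writestr("demo_pkg/__init__.py", "VERSION = '1.0'\n")

# Unpack the wheel into a target directory and prepend it to sys.path,
# mirroring what PYTHONPATH does for the interpreter in the runtime image.
target = os.path.join(workdir, "unpacked")
with zipfile.ZipFile(wheel_path) as whl:
    whl.extractall(target)
sys.path.insert(0, target)

import demo_pkg  # the interpreter now picks up the unpacked wheel
print(demo_pkg.VERSION)
```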

dali-automaton commented 2 months ago

CI MESSAGE: [15000767]: BUILD STARTED

dali-automaton commented 2 months ago

CI MESSAGE: [15003702]: BUILD STARTED

dali-automaton commented 2 months ago

CI MESSAGE: [15000767]: BUILD FAILED

dali-automaton commented 2 months ago

CI MESSAGE: [15006739]: BUILD STARTED

dali-automaton commented 2 months ago

CI MESSAGE: [15003702]: BUILD FAILED

dali-automaton commented 2 months ago

CI MESSAGE: [15006739]: BUILD PASSED