triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License

Working around incompatiblitiy in CUDA compat layer #206

Closed by szalpal 10 months ago

szalpal commented 10 months ago

When running the setup.sh script with the CUDA compat layer, `docker exec` proves to be problematic. It is better to avoid `docker exec` altogether: run the build inside a single `docker run` invocation and then copy the artifacts out of the container.
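The workaround described above could look roughly like the following sketch. The image name, container name, build command, and artifact path are all placeholders for illustration; the actual values are defined by the repository's setup.sh and are not stated in this issue.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical names; substitute whatever setup.sh actually builds and produces.
IMAGE="dali_backend_builder"
CONTAINER="dali_build_tmp"

# Run the entire build in one `docker run` instead of starting a container
# and attaching to it with `docker exec` (which misbehaves under the
# CUDA compat layer). The container exits when the build finishes.
docker run --name "${CONTAINER}" "${IMAGE}" /bin/bash -c "./build.sh"

# `docker cp` works on stopped containers, so the artifacts can be
# extracted after the build without exec-ing into a running container.
docker cp "${CONTAINER}:/opt/artifacts" ./artifacts

# Clean up the stopped container.
docker rm "${CONTAINER}"
```

Because the build runs to completion inside `docker run`, no second process ever needs to be injected into the container, which is what sidesteps the compat-layer issue.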

dali-automaton commented 10 months ago

CI MESSAGE: [9780938]: BUILD STARTED

dali-automaton commented 10 months ago

CI MESSAGE: [9780938]: BUILD PASSED

dali-automaton commented 10 months ago

CI MESSAGE: [9802944]: BUILD STARTED

dali-automaton commented 10 months ago

CI MESSAGE: [9802944]: BUILD PASSED