triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License

Missing conda env. in 24.04 breaks autoserialization #244

Open mbahri opened 2 months ago

mbahri commented 2 months ago

Hello,

The latest Triton image from NGC (24.04) includes the DALI backend, but the conda-packed environment that used to be shipped with the backend is no longer there.

This breaks the @autoserialize feature.
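For reference, this is the kind of model definition that relies on @autoserialize — a rough sketch, with an illustrative pipeline body, batch size, and input name rather than my actual model:

```python
# dali.py -- placed in the model's version directory in the Triton model repository.
# A minimal sketch of a pipeline using @autoserialize; the pipeline body and the
# input name ("DALI_INPUT_0") are illustrative, not taken from my deployment.
import nvidia.dali as dali
import nvidia.dali.fn as fn
from nvidia.dali.plugin.triton import autoserialize


@autoserialize
@dali.pipeline_def(batch_size=32, num_threads=4, device_id=0)
def pipeline():
    # Feed encoded images from Triton, decode on the GPU, and resize.
    encoded = fn.external_source(device="cpu", name="DALI_INPUT_0")
    images = fn.decoders.image(encoded, device="mixed")
    return fn.resize(images, resize_x=224, resize_y=224)
```

With the conda-packed environment gone from the 24.04 image, the backend presumably no longer has a Python environment with DALI available to serialize this at model-load time, which is where the feature falls over.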

Running already-serialized models works fine.
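For anyone hitting the same thing, the workaround is to serialize the pipeline ahead of time outside the container and ship the resulting `model.dali` instead of a `dali.py`. A sketch, reusing the same illustrative pipeline as above and DALI's `Pipeline.serialize()`:

```python
# serialize_model.py -- run in any environment that has DALI installed, then copy
# the resulting model.dali into the model's version directory. The pipeline below
# mirrors the illustrative one above; adjust it to the real pre-processing.
import nvidia.dali as dali
import nvidia.dali.fn as fn


@dali.pipeline_def(batch_size=32, num_threads=4, device_id=0)
def pipeline():
    encoded = fn.external_source(device="cpu", name="DALI_INPUT_0")
    images = fn.decoders.image(encoded, device="mixed")
    return fn.resize(images, resize_x=224, resize_y=224)


if __name__ == "__main__":
    # Writes the serialized pipeline to disk; Triton's DALI backend loads this file
    # directly (model.dali by default), so no serialization happens in the container.
    pipeline().serialize(filename="model.dali")
```

This avoids the autoserialization path entirely, but it would still be good to know whether dropping the conda environment from 24.04 was intentional.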