triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API (a minimal pipeline sketch follows below).
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License
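For context, pipelines served by this backend are defined with DALI's Python API and serialized into the Triton model repository. The sketch below is a minimal, illustrative example only; the operator choices, the input name `DALI_INPUT_0`, the batch size, and the output filename `model.dali` are assumptions rather than anything prescribed by this repository.

```python
# Minimal sketch of a DALI pre-processing pipeline for dali_backend.
# Input name, image size, normalization constants, and output path are
# illustrative assumptions.
import nvidia.dali as dali
import nvidia.dali.fn as fn
import nvidia.dali.types as types


@dali.pipeline_def(batch_size=32, num_threads=4, device_id=0)
def preprocessing_pipeline():
    # Triton feeds encoded image bytes through an external source.
    images = fn.external_source(device="cpu", name="DALI_INPUT_0")
    # Decode on the GPU ("mixed" = CPU input, GPU output), then resize and normalize.
    images = fn.decoders.image(images, device="mixed", output_type=types.RGB)
    images = fn.resize(images, resize_x=224, resize_y=224)
    return fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )


if __name__ == "__main__":
    # Serialize so the pipeline can be placed in the model repository,
    # e.g. model_repository/dali_preprocess/1/model.dali.
    preprocessing_pipeline().serialize(filename="model.dali")
```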

Enable dynamic batching when auto-completing model config #154

Closed · banasraf closed this 1 year ago

banasraf commented 2 years ago

This PR extends auto-config to automatically enable dynamic batching in DALI models (an illustrative config is sketched below).

Signed-off-by: Rafal Banas <Banas.Rafal97@gmail.com>
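To make the change concrete: when Triton is allowed to auto-complete model configurations, a DALI model whose `config.pbtxt` omits the `dynamic_batching` stanza would now have it enabled by the backend. The snippet below is a hypothetical, hand-written equivalent of such an auto-completed config; the model name, tensor names, shapes, and data types are illustrative assumptions, not values taken from this PR.

```
# Hypothetical config.pbtxt for a DALI model, written out the way the
# backend's auto-complete could effectively fill it in. Names, shapes,
# and dtypes are illustrative assumptions.
name: "dali_preprocess"
backend: "dali"
max_batch_size: 32

input [
  {
    name: "DALI_INPUT_0"
    data_type: TYPE_UINT8
    dims: [ -1 ]            # variable-length encoded image bytes
  }
]

output [
  {
    name: "DALI_OUTPUT_0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]

# The stanza this PR enables during auto-complete: with it present, Triton's
# scheduler may combine individual requests into larger batches before
# handing them to the DALI pipeline.
dynamic_batching { }
```

With dynamic batching on, requests arriving within a short window can be grouped, which typically improves GPU utilization for batched pre-processing.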

dali-automaton commented 1 year ago

CI MESSAGE: [6202993]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [6202993]: BUILD PASSED