triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License

Add ProtoBuf txt parsing to read max_batch_size #169

Closed. banasraf closed this 1 year ago.

banasraf commented 1 year ago

When the config is read by tritonserver, there is no way to determine whether max_batch_size was set to 0 by the user or simply not provided. To work around this, I manually load the config text file (if it exists) and parse it (to a limited extent) to read the max_batch_size field.

Signed-off-by: Rafal rbanas@nvidia.com
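
For illustration, here is a minimal sketch of such a limited parse. This is not the PR's actual code; the function name, the regex approach, and the use of `std::optional` to distinguish "set to 0" from "absent" are assumptions for the example.

```cpp
// Hypothetical sketch: a limited text-format parse that only extracts
// the max_batch_size field, without a full protobuf text-format parser.
#include <fstream>
#include <optional>
#include <regex>
#include <sstream>
#include <string>

// Returns the value of max_batch_size if the field appears in the
// config.pbtxt, or std::nullopt if the file is missing or the field
// was never written. This lets the caller tell an explicit 0 apart
// from a field that was simply not provided.
std::optional<int> ReadMaxBatchSize(const std::string& config_path) {
  std::ifstream file(config_path);
  if (!file) return std::nullopt;
  std::stringstream buffer;
  buffer << file.rdbuf();
  std::string contents = buffer.str();
  // Protobuf text format writes scalar fields as `name: value`.
  // Note: a regex this simple would also match the field inside nested
  // blocks; a real implementation would need to track brace depth.
  std::regex pattern(R"(max_batch_size\s*:\s*(-?\d+))");
  std::smatch match;
  if (std::regex_search(contents, match, pattern)) {
    return std::stoi(match[1].str());
  }
  return std::nullopt;
}
```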

dali-automaton commented 1 year ago

CI MESSAGE: [7034658]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [7034658]: BUILD PASSED

banasraf commented 1 year ago

> Why not use RapidJSON or some other JSON parser?

This format is not JSON; it is Protocol Buffer Text Format.
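
For context, Triton model configs are written in protobuf text format, which uses unquoted `field: value` pairs and braces rather than JSON's quoted keys and commas. An illustrative `config.pbtxt` (the values here are made up):

```protobuf
# Illustrative snippet, not a real model config.
name: "dali_pipeline"
backend: "dali"
max_batch_size: 0
input {
  name: "INPUT_0"
  data_type: TYPE_UINT8
  dims: [ -1 ]
}
```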

dali-automaton commented 1 year ago

CI MESSAGE: [7052665]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [7052665]: BUILD FAILED

dali-automaton commented 1 year ago

CI MESSAGE: [7052665]: BUILD PASSED