triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License

Fix tmp filename generation #185

Closed banasraf closed 1 year ago

banasraf commented 1 year ago

Autoserialize was not working correctly due to a race condition in tmp file naming: the filename used a timestamp with a resolution of one second, which caused conflicts between models.

To fix this, I changed the timestamp to the maximal resolution of a monotonic clock (std::chrono::steady_clock).
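
A minimal sketch of the idea, not the backend's actual code (the function name and file layout here are assumptions): derive the temporary filename from the steady clock's full-resolution tick count instead of a second-resolution wall-clock timestamp.

```cpp
// Sketch only: name a temporary file from a steady-clock tick count at its
// finest resolution, so two models serialized within the same second no
// longer produce the same name. Naming scheme is hypothetical.
#include <chrono>
#include <sstream>
#include <string>

std::string MakeTmpFilename(const std::string &model_name) {
  // steady_clock is monotonic and ticks at the finest resolution the
  // implementation supports (typically nanoseconds).
  auto ticks = std::chrono::steady_clock::now().time_since_epoch().count();
  std::ostringstream name;
  name << "/tmp/" << model_name << "_autoserialized_" << ticks << ".dali";
  return name.str();
}
```

Using a monotonic clock also avoids surprises from wall-clock adjustments; the higher resolution is what removes the per-second collisions between models.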

dali-automaton commented 1 year ago

CI MESSAGE: [7949237]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [7950467]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [7949237]: BUILD PASSED

dali-automaton commented 1 year ago

CI MESSAGE: [7950467]: BUILD PASSED

dali-automaton commented 1 year ago

CI MESSAGE: [7959308]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [7959308]: BUILD FAILED

dali-automaton commented 1 year ago

CI MESSAGE: [7960270]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [7960882]: BUILD STARTED

dali-automaton commented 1 year ago

CI MESSAGE: [7960882]: BUILD PASSED