Closed ntadimeti closed 4 years ago
From the analysis here: https://confluence.nvidia.com/display/GEN/AtacWorks+Inference+Profiling this issue is deprioritized, as it only appears to be a bottleneck on specific platforms. Closing for now; we can pick it up in the future as needed.
There is a slowdown in populating the queue as the dataset size increases, and this causes a massive slowdown in writing during inference. In some cases, the queue remains empty until inference has completed. This makes writing and inference effectively serial operations, even though the writer thread keeps running in parallel the whole time.
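For context, the intended behavior is a producer/consumer hand-off: inference pushes each finished batch onto a queue, and a writer thread drains it concurrently. The sketch below is a minimal toy model of that pattern (not AtacWorks code); `run_pipeline`, the sleep-based "work", and the `None` sentinel are all hypothetical stand-ins. When batches reach the queue promptly, writing overlaps with inference; the bug described above is the case where the queue stays empty until inference finishes, collapsing this overlap.

```python
import queue
import threading
import time


def run_pipeline(n_batches, maxsize=4):
    """Toy inference -> writer hand-off via a bounded queue.

    A bounded queue keeps memory in check while letting the writer
    consume batches as soon as they are produced.
    """
    q = queue.Queue(maxsize=maxsize)
    written = []

    def writer():
        while True:
            item = q.get()
            if item is None:        # sentinel: producer is finished
                break
            time.sleep(0.001)       # simulate writing the batch to disk
            written.append(item)

    t = threading.Thread(target=writer)
    t.start()
    for i in range(n_batches):
        time.sleep(0.001)           # simulate inference on one batch
        q.put(i)                    # hand the batch to the writer immediately
    q.put(None)                     # signal the writer to stop
    t.join()
    return written


print(run_pipeline(10))
```

In this model the writer stays busy throughout because each batch is enqueued as soon as it is produced; the reported slowdown corresponds to the `q.put` calls effectively not happening until the producer loop has ended.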