aaditya-srivathsan opened this issue 4 months ago
1) By default, the BFCArena (ORT's memory pool implementation) is used to allocate the weights (initializers), and it can grow quite a bit during the weights' allocation. Usually this is not so bad, as the "unused" memory in the pool will be used to service Run() requests. But you have a case where you are hosting multiple models on the same server, and depending on the memory usage of Run() for each of those models, some portion of the pool might be wasted per model. To cut down on this, I suggest bypassing the memory arena for weights (usage example: https://github.com/microsoft/onnxruntime/blob/d30c81d270894f41ccce7b102b1d4aedd9e628b1/onnxruntime/test/shared_lib/test_inference.cc#L3065). This ensures the weights are not allocated through the memory pool (and hence don't grow it during the weights' allocation), so the pool's growth becomes a function of the memory usage during Run() alone. Keep in mind that the first Run() might be a tad slower with this option, since that is the Run() where the memory pool actually grows (as opposed to growing during session initialization). This is one way to ensure minimal memory wastage while hosting multiple models on the same server.
2) The second thing to try is to tweak the arena's extension strategy. The default strategy (kNextPowerOfTwo) may be sub-optimal for your scenario. Try changing it to kSameAsRequested to be more economical with respect to memory growth.
(1), (2), or both might help in your usage scenario.
Thanks @hariharans29. Let me try one of these two approaches and see if that helps.
@hariharans29 so despite enabling the two options, I still see the exact same error. In my config.pbtxt file, I am passing the two as parameters:
```
parameters { key: "arena_extend_strategy" value: { string_value: "1" } }
parameters { key: "use_device_allocator_for_initializers" value: { string_value: "1" } }
```
Any idea how to further debug this?
Is the config.pbtxt file the way to specify ORT options to tritonserver? If so, I am not sure whether support for these options has been enabled in tritonserver. Please check with the relevant folks on this.
I would suggest trying these options in the standalone ORT setup you have and studying the differences against the baseline. They should make some difference in the amount of memory allocated; how subtle or marked that difference is, I don't know.
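One way to study that difference in a standalone setup is to sample GPU memory at each stage and diff the numbers (a sketch, assuming `nvidia-smi` is on PATH; the `model.onnx` path, `so`, `providers`, and `feed` names are placeholders, not from the thread):

```python
import subprocess

def parse_mem_mib(nvidia_smi_output: bytes) -> int:
    # nvidia-smi prints one "memory.used" value per GPU; take the first GPU's.
    return int(nvidia_smi_output.split()[0])

def gpu_mem_used_mib() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"]
    )
    return parse_mem_mib(out)

# Usage sketch: sample before session creation, after creation, and after the
# first Run() -- once with the arena options and once without -- then compare:
# before = gpu_mem_used_mib()
# sess = ort.InferenceSession("model.onnx", sess_options=so, providers=providers)
# after_init = gpu_mem_used_mib()
# sess.run(None, feed)           # first Run() is where the arena grows
# after_first_run = gpu_mem_used_mib()
```

With use_device_allocator_for_initializers enabled you would expect the growth to shift from the session-creation step to the first Run().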
This is the list of options supported by Triton's ORT backend: https://github.com/triton-inference-server/onnxruntime_backend?tab=readme-ov-file#model-config-options.
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
So I was trying to deploy a custom model on tritonserver (23.08) with the onnxruntime_backend (onnxruntime version 1.15.1). But while doing so, we are facing this issue:
There are 7 other models hosted on the same server, and those work fine (even under stress), but things break once this new model is added. Any idea why this might be happening? The server is hosted on a T4 GPU, and these are our current stats:
Separately, while testing things out without the tritonserver setup and using an onnxruntime session directly, we saw that despite the onnx file being 350 MB and the input shape being [3, 1280, 1280], GPU memory consumption jumps up to 9 GB after a single request (this is with FP32; reducing to FP16 still shows 5 GB usage for a single batch). That is with batch_size = 1; for reference, the actual batch size used for inference is 8.
Any help on understanding why this might be happening and how to fix it would be appreciated. Thanks!