microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai

How to release GPU memory when using onnxruntime with FastAPI #22899

Open · SZ-ing opened this issue 5 days ago

SZ-ing commented 5 days ago

This is probably a recurring question, but I still haven't found a solution.

I used FastAPI to build the API endpoints and onnxruntime to load the models. Because GPU memory is limited relative to the number of models, I would like to release the GPU memory a model occupies once each endpoint call completes. But it seems I can never fully release it: some of the models keep occupying GPU memory.
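For concreteness, here is a minimal sketch of the pattern I mean (the model path, endpoint, and input shape/dtype are placeholders, not my actual code): the session is created inside the request handler and explicitly dropped afterwards, yet the process still holds GPU memory.

```python
import gc

import numpy as np
import onnxruntime as ort
from fastapi import FastAPI

app = FastAPI()

MODEL_PATH = "model.onnx"  # placeholder path


@app.post("/infer")
def infer(payload: dict):
    # Create the session inside the handler so it can be dropped afterwards.
    sess = ort.InferenceSession(
        MODEL_PATH,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    input_name = sess.get_inputs()[0].name
    data = np.asarray(payload["data"], dtype=np.float32)  # placeholder dtype
    outputs = sess.run(None, {input_name: data})

    # Attempted cleanup: delete the session and force garbage collection.
    # nvidia-smi still shows GPU memory held by the process after this.
    del sess
    gc.collect()
    return {"result": outputs[0].tolist()}
```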

Is there any way to fully release this memory from Python?
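One mechanism that looks relevant is onnxruntime's memory-arena shrinkage run option; a minimal sketch, assuming GPU device 0 and a placeholder model and input shape, is below. As I understand it, this only returns unused arena chunks after a run; the CUDA context and the arena's initial allocation still stay resident, which may be the residual usage I am seeing.

```python
import numpy as np
import onnxruntime as ort

# Configure the CUDA provider to allocate only what each run requests
# instead of growing the arena geometrically.
providers = [
    ("CUDAExecutionProvider", {"arena_extend_strategy": "kSameAsRequested"}),
    "CPUExecutionProvider",
]
sess = ort.InferenceSession("model.onnx", providers=providers)  # placeholder model

# Ask the GPU arena on device 0 to shrink after this run,
# returning unused chunks to the system allocator.
run_options = ort.RunOptions()
run_options.add_run_config_entry("memory.enable_memory_arena_shrinkage", "gpu:0")

input_name = sess.get_inputs()[0].name
data = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input shape
outputs = sess.run(None, {input_name: data}, run_options)
```

Is this the intended way to reduce per-call GPU usage, or is there a more complete way to release the memory from Python?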