Open · Zerocheng001 opened 2 hours ago
👋 Hello @Zerocheng001, thank you for reaching out and for your interest in Ultralytics 🚀! This is an automated response, but don't worry, an Ultralytics engineer will be here to assist you soon.
To help us diagnose the issue you're experiencing with memory release after exporting your model to ONNX, please ensure you've included a minimum reproducible example demonstrating the behavior. This will greatly assist in understanding the problem.
In the meantime, ensure your setup is up-to-date. You can upgrade to the latest ultralytics package using the following command:
pip install -U ultralytics
Also, make sure that your environment meets the minimum Python and PyTorch requirements. If you're looking to run YOLO in a verified environment with pre-installed dependencies, consider the options described in the Ultralytics docs, such as the official Docker image or the free GPU notebooks.
Lastly, engage with our community if you want real-time discussions or need additional support. You can join our Discord 🎧, participate in discussions on Discourse, or share insights on our Subreddit.
Stay tuned for further assistance from our team!
How are you checking whether it's freed? Even if Python frees the memory, that doesn't mean it will show up as available when you look at free memory; this is a consequence of how CPython works.
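For reference, here is a minimal sketch of how to check what is actually held on the GPU from inside the process (assuming a CUDA build of PyTorch); note that nvidia-smi and similar tools report PyTorch's reserved cache as "used" memory even after the tensors themselves have been freed:

import torch

# Memory occupied by live tensors on the current CUDA device.
allocated_mib = torch.cuda.memory_allocated() / 1024**2
# Memory held by PyTorch's caching allocator; this is what nvidia-smi shows as
# "used", and it only shrinks after torch.cuda.empty_cache() is called.
reserved_mib = torch.cuda.memory_reserved() / 1024**2
print(f"allocated: {allocated_mib:.1f} MiB, reserved: {reserved_mib:.1f} MiB")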
Search before asking
Ultralytics YOLO Component
No response
Bug
I want to release the model's memory after training it and saving the ONNX export. How should I release it? I've tried gc.collect() and torch.cuda.empty_cache(), but neither of them frees the memory.
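For context, this is roughly the release pattern being described, as a minimal sketch with an explicit del added, assuming model is the only remaining reference to the YOLO object:

import gc
import torch

del model                 # drop the last Python reference to the model
gc.collect()              # collect any reference cycles still holding tensors
torch.cuda.empty_cache()  # return cached CUDA blocks to the driver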
Environment
Ultralytics YOLOv8.2.85 🚀 Python-3.10.0 torch-2.3.1+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
Minimal Reproducible Example
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO(pre_model_name)  # pre_model_name: path to the pretrained weights
    # train the model ...
    # do other things afterwards ...
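For completeness, a self-contained sketch of the train-then-export step that precedes the release attempt; the weights file, dataset, and epoch count below are placeholder assumptions, not values from the report:

from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('yolov8n.pt')                # placeholder for pre_model_name
    model.train(data='coco8.yaml', epochs=1)  # placeholder training settings
    model.export(format='onnx')               # saves an .onnx copy of the model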
Additional
Are you willing to submit a PR?