HFeliks opened 2 weeks ago
Hello HFeliks,
Thank you for reaching out. It seems like the GPU acceleration is not providing significant improvement in inference speed compared to CPU usage. One possible reason could be that you are loading too many images at once.
I recommend reducing the number of images loaded per task. For example, try limiting each task to around 2,000 images or fewer. This should help alleviate some of the strain on the GPU and potentially improve performance.
Additionally, it might be helpful to test the inference speed with a smaller batch size, such as processing only 100 images initially. This will give you an idea of how much improvement you can expect when using GPU acceleration.
Please let us know if these suggestions help resolve your issue. I'm here to assist you further if needed.
Thanks for your reply! I tried processing only 100 images, and the speed difference between GPU and CPU is still not significant. Below are the results of running check.py. Could there be an issue with my environment setup?
Application Information: {'App name': 'X-AnyLabeling', 'App version': '2.4.4', 'Device': 'GPU'}
System Information: {'CPU': 'Intel64 Family 6 Model 183 Stepping 1, GenuineIntel', 'CUDA': 'V12.0.140', 'GPU': '0, NVIDIA GeForce RTX 4060 Laptop GPU, 8188', 'Operating System': 'Windows-10-10.0.22621-SP0', 'Python Version': '3.9.0'}
Package Information: {'ONNX Runtime GPU Version': '1.19.2', 'ONNX Runtime Version': None, 'ONNX Version': '1.17.0', 'OpenCV Contrib Python Headless Version': '4.10.0.84', 'PyQt5 Version': '5.15.7'}
To assist you better, could you please provide the following information:
- the specific model you are using, and
- the console/log output from a run.
Please include these details in your response so we can proceed with a more accurate assessment of the situation.
Sure! I use the model YOLO11s-Det-BoT-SORT. Here is the output:
2024-11-07 17:27:38,224 | INFO | app:main:159 - 🚀 X-AnyLabeling v2.4.4 launched!
2024-11-07 17:27:38,224 | INFO | app:main:162 - ⭐ If you like it, give us a star: https://github.com/CVHub520/X-AnyLabeling
2024-11-07 17:27:38,256 | INFO | config:get_config:83 - 🔧️ Initializing config from local file: C:\Users\27700.xanylabelingrc
2024-11-07 17:27:53,295 | INFO | model_manager:_load_model:1640 - ✅ Model loaded successfully: yolo11_det_track
2024-11-07 17:28:12,704 | INFO | label_widget:run_all_images:5981 - Start running all images...
2024-11-07 18:35:42,840 | INFO | label_widget:run_all_images:5981 - Start running all images...
Thanks for sharing your logs! Could you also try running a test with the YOLOv5s model to see whether the behavior is model-specific?
Search before asking
Question
I have enabled GPU acceleration, but the improvement is not significant: the inference speed is similar to running on the CPU. When I checked GPU memory usage, I found it is low, although the GPU is being utilized. How can I resolve this issue? I am using an RTX 4060.
Additional
No response