WongKinYiu / YOLO

An MIT rewrite of YOLOv9
MIT License

Query regarding GPU and CPU usage #43

Open Hemanth-TS opened 1 month ago

Hemanth-TS commented 1 month ago

When I ran the command below with CUDA, around 20% of the GPU was used and roughly 200% CPU, but inference on a video was still very slow.

python yolo/lazy.py task=inference \ # default is inference
    name=AnyNameYouWant \ # AnyNameYouWant
    device=cuda \ # hardware cuda, cpu, mps
    model=v9-s \ # model version: v9-c, m, s
    task.nms.min_confidence=0.1 \ # nms config
    task.fast_inference=onnx \ # onnx, trt, deploy
    task.data.source=data/toy/images/train \ # file, dir, webcam
    +quite=True \ # Quite Output

Can someone guide me on how to run video inference properly at a higher FPS while using the GPU efficiently?
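
As a side note, a minimal sanity check (not part of this repo, just plain PyTorch) can confirm the GPU is visible and that a CUDA kernel actually runs; the tensor size here is arbitrary and unrelated to the model:

```python
# Minimal sanity check (not part of the repo): confirm PyTorch sees the GPU
# and that a CUDA kernel actually executes. Tensor size is arbitrary.
import torch

print(torch.__version__, torch.version.cuda)          # installed Torch / CUDA build
print("CUDA available:", torch.cuda.is_available())   # False -> CPU-only wheel or driver issue

if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")
    y = x @ x                                          # force a GPU kernel launch
    torch.cuda.synchronize()                           # wait for the kernel to finish
    print("GPU used:", torch.cuda.get_device_name(0))
```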

henrytsui000 commented 1 month ago

Hi,

Could you please provide your system details, command line, and Git commit version? For example, my setup is:

python yolo/lazy.py task=inference name=AnyNameYouWant device=cuda model=v9-c task.data.source=MOT20-05-raw.mp4 task.fast_inference=deploy

This configuration achieves 20 FPS without using ONNX or TensorRT.
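
For anyone comparing numbers, a rough way to measure end-to-end FPS on a video is to time the decode-plus-inference loop. This is a generic sketch, not the repo's CLI; it assumes OpenCV for decoding, and `run_model` is a placeholder for whatever inference call you actually use:

```python
# Generic FPS measurement sketch (assumptions: OpenCV for decoding,
# `run_model` is a placeholder for your actual per-frame inference call).
import time
import cv2

def measure_fps(video_path, run_model, warmup=10):
    cap = cv2.VideoCapture(video_path)
    frames, start = 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        run_model(frame)                 # placeholder: model inference on one frame
        frames += 1
        if frames == warmup:             # skip warm-up frames (CUDA init, allocator)
            start = time.perf_counter()
    cap.release()
    elapsed = time.perf_counter() - start
    return (frames - warmup) / elapsed   # steady-state frames per second

# Example: print(measure_fps("MOT20-05-raw.mp4", lambda frame: None))
```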

Best regards,
Henry Tsui