Izzatullokh24 opened 3 months ago
Hi everyone, thanks to the @yolo team for making YOLOv9.
I am using Ubuntu 22.04.
I have trained YOLO models on Google Colab most of the time. However, now that I have started my first job, I am working on a traffic light detection project at my new company.
```
%cd {HOME}/
!CUDA_VISIBLE_DEVICES=0,1 python train_dual.py \
    --batch 32 --epochs 5 --img 512 --device 0,1 \
    --data {HOME}/yolov9/Dataset/data.yaml \
    --weights {HOME}/weights/yolov9-m.pt \
    --cfg models/detect/yolov9-m.yaml \
    --hyp hyp.scratch-high.yaml
```
This is my training command; I am using the yolov9-m pre-trained model. There are two GPUs (2080 Ti), and training is very slow.
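For reference, the YOLOv9 README launches multi-GPU training with a torch.distributed launcher (one process per GPU) rather than a single process, which avoids the slower DataParallel path. A sketch adapted to the command above, keeping the same paths and total batch; `--workers 8`, `--sync-bn`, and the `--master_port` value are assumptions to adjust:

```
# Hedged sketch: DDP launch with one process per GPU instead of a single-process run.
# Paths and batch size are taken from the command above; other flags are assumptions.
!python -m torch.distributed.launch --nproc_per_node 2 --master_port 9527 train_dual.py \
    --workers 8 --device 0,1 --sync-bn \
    --batch 32 --epochs 5 --img 512 \
    --data {HOME}/yolov9/Dataset/data.yaml \
    --weights {HOME}/weights/yolov9-m.pt \
    --cfg models/detect/yolov9-m.yaml \
    --hyp hyp.scratch-high.yaml
```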
The dataset has 150k images; this is a company requirement. Image sizes in the dataset are 1280x780, 1920x1080, and 1920x1200.
We have another PC, which has three GPUs (3090).
Increasing the batch size beyond 32 causes a CUDA out-of-memory error.
What can I do to decrease training time?
Should I use the other PC, which has three GPUs?
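On the 3x 3090 machine, the same DDP-style launch would just use three processes and devices. A sketch under the same assumptions as above; the total `--batch 48` (16 per GPU) is an assumption, and the 24 GB cards may allow more, so it needs to be verified against actual memory usage:

```
# Hedged sketch for the three-GPU (3090) machine: one DDP process per GPU.
# --batch is the total batch split across GPUs; 48 here is an assumption to tune.
!python -m torch.distributed.launch --nproc_per_node 3 --master_port 9527 train_dual.py \
    --workers 8 --device 0,1,2 --sync-bn \
    --batch 48 --epochs 5 --img 512 \
    --data {HOME}/yolov9/Dataset/data.yaml \
    --weights {HOME}/weights/yolov9-m.pt \
    --cfg models/detect/yolov9-m.yaml \
    --hyp hyp.scratch-high.yaml
```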
Did you solve this problem? I have the same issue. Even when I tried DDP training, it didn't work.