Open ghost opened 5 years ago
I think you should turn down batch_size. You can set batch_size to 1 at first and increase it slowly. In my experience, a batch_size of 6 may be the best for a GTX 1060 when img_h and img_w equal 416.
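As a concrete illustration of that tuning advice, a hypothetical excerpt from `training/params.py` might look like this (the variable names are taken from this thread, not verified against the repo; check your actual file):

```python
# Hypothetical excerpt from training/params.py (names assumed from the thread).
# Start small and raise batch_size step by step until you hit an OOM error.
batch_size = 1      # try 1 -> 2 -> 4 -> 6 on a 6 GB GTX 1060
img_h = 416         # input height
img_w = 416         # input width
```

Halving the input resolution (e.g. 416 to 208) also cuts activation memory roughly fourfold, at the cost of detection accuracy on small objects.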
In my experience, batch size should be set to 16 if your GPU has 12 GB of memory (GTX 1080 Ti).
Remove the parallels config in params.py and the related code in main.py.
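The intent behind removing the parallels config, as I understand it, is to skip the multi-GPU wrapper so a single-GPU machine doesn't pay its memory overhead. A minimal sketch of the idea (the `parallels` name comes from the comment above; the helper and the exact wiring in main.py are assumptions, not the repo's actual code):

```python
# Sketch only: in the real main.py the multi-GPU path would call
# torch.nn.DataParallel(model, device_ids=parallels).
parallels = []  # was e.g. [0, 1] in params.py; empty means single-GPU

def build_model(model, parallels):
    """Wrap the model for multi-GPU only when more than one device is listed."""
    if len(parallels) > 1:
        return ("DataParallel", model)  # stand-in for the real wrapper
    return model  # single GPU: use the bare model

print(build_model("net", parallels))  # -> net
```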
I've gotten image recognition working at multiple frames per second on a GTX 1060 with 6 GB of memory. Now I'm trying to train a custom classifier, but I keep running out of memory. With the darknet implementation, I can train using the yolov3-tiny.cfg file but not the yolov3.cfg file, which is probably expected behavior given my hardware limitations. Now I'm trying to train with this implementation.
What parameters could I tweak in `training/params.py` to reduce my memory consumption? Is there an equivalent param in this implementation for `subdivisions` in the darknet implementation?
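For anyone landing here later: I don't know of a documented `subdivisions` equivalent in this repo, but the same effect can be had with gradient accumulation, i.e. splitting one logical batch into micro-batches and averaging their gradients before a single weight update. A pure-Python sketch of just the arithmetic (a real training loop would do this with PyTorch's `loss.backward()` over micro-batches before one `optimizer.step()`):

```python
# Gradient accumulation mimics darknet's `subdivisions`: only one micro-batch
# is resident on the GPU at a time, but the optimizer sees the full batch.

def accumulate_gradients(micro_batch_grads):
    """Average per-parameter gradients across micro-batches.

    Each inner list holds one micro-batch's gradient for every parameter;
    the result equals the gradient of the full (effective) batch.
    """
    n = len(micro_batch_grads)
    return [sum(per_param) / n for per_param in zip(*micro_batch_grads)]

# An effective batch of 16 split into 4 micro-batches (2 parameters shown):
grads = [[1.0, 2.0], [3.0, 2.0], [1.0, 0.0], [3.0, 4.0]]
print(accumulate_gradients(grads))  # -> [2.0, 2.0]
```

With `batch_size = 16` and 4 accumulation steps, memory use is close to that of `batch_size = 4`, which is the same trade darknet's `subdivisions=4` makes.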