llSourcell / YOLO_Object_Detection

This is the code for "YOLO Object Detection" by Siraj Raval on YouTube
GNU General Public License v3.0

Can't train with GPU on Windows 10 #6

Closed: FeederDiver closed this 6 years ago

FeederDiver commented 6 years ago

I've used YOLO detection with a pre-trained model on my GPU (an NVIDIA GTX 1060 3 GB), and everything worked fine.

Now I am trying to train my own model with the --gpu 1.0 option. TensorFlow can see my GPU, judging by these messages at startup: "name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705" "totalMemory: 3.00GiB freeMemory: 2.43GiB"

However, later on, once the program has loaded the data and tries to start training, I get the following error: "failed to allocate 832.51M (872952320 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY"
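If I understand darkflow correctly, the --gpu value is handed to TensorFlow 1.x as a per-process GPU memory fraction, so --gpu 1.0 asks TensorFlow to reserve essentially the whole card up front. A minimal sketch of my understanding (not the repo's exact code):

```python
import tensorflow as tf  # TF 1.x API

# A fraction near 1.0 tells TensorFlow to grab (nearly) all device memory
# at session creation; on a 3 GB card with only ~2.43 GiB actually free,
# that up-front allocation can fail with CUDA_ERROR_OUT_OF_MEMORY before
# any training step runs. A smaller fraction leaves headroom.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
```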

I've checked whether it is trying to use my other GPU (the integrated Intel 630) instead, but it isn't.

When I run the training process without the --gpu option it works fine, just slowly. (I've also tried --gpu 0.8, --gpu 0.4, and so on.)
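For reference, this is the shape of the command I'm running, assuming this repo follows darkflow's flow CLI (the model, weights, and data paths below are placeholders for my own files):

```
flow --model cfg/my-yolo.cfg --load bin/yolo.weights --train --annotation train/annotations --dataset train/images --gpu 0.8
```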

Any idea how to fix it?

FeederDiver commented 6 years ago

Problem solved. Changing the batch size, image size, subdivisions, and other values in the .cfg file didn't work; somehow they were not picked up correctly. Instead I changed them in the defaults.py file, and now my GPU can handle the training. A sketch of the kind of edit is below.
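For anyone hitting the same wall, the snippet below is a simplified stand-in for darkflow's argHandler in defaults.py, just to show where the effective training defaults live; the stock values shown are assumptions and may differ between versions:

```python
# Simplified stand-in for darkflow's defaults.py argHandler, showing the
# kind of edit described above. Stock values are assumptions.

class argHandler(dict):
    """Holds the CLI defaults that the flow script falls back to."""
    __getattr__ = dict.get

    def define(self, argName, default, description):
        self[argName] = default  # the real class also stores the help text

    def setDefaults(self):
        # The change described above: shrink the batch so a training step
        # fits in 3 GB of VRAM, and keep the memory fraction below 1.0.
        self.define('batch', 8, 'batch size')                      # stock default: 16
        self.define('gpu', 0.7, 'how much gpu (from 0.0 to 1.0)')  # stock default: 0.0

FLAGS = argHandler()
FLAGS.setDefaults()
print(FLAGS.batch, FLAGS.gpu)  # -> 8 0.7
```

Lowering these defaults directly worked where the .cfg edits did not, since the training code reads its batch size from these flags rather than from the .cfg file.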