Closed: dipam7 closed this issue 4 years ago.
Please reduce the batch size until the error stops occurring. If the error occurs only once or twice across the whole dataset, that is fine. If it occurs too often, reduce the batch size used during preprocessing.
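A minimal sketch of what "reduce the batch size while preprocessing" amounts to, assuming the script runs a face detector over video frames in batches; `detector` and `frames` below are placeholders, not the repo's actual API:

```python
# Hedged sketch: run face detection over frames in smaller batches so
# only `batch_size` frames sit on the GPU at any one time.
def detect_in_batches(detector, frames, batch_size=4):
    results = []
    for i in range(0, len(frames), batch_size):
        batch = frames[i:i + batch_size]
        results.extend(detector(batch))  # one small batch per forward pass
    return results
```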
I reduced the batch size to 4 and it worked for a few videos. However, for one particular video the process just prints "Killed". Is that because the video is long and high resolution? I've tried a batch size of 2 as well, but the same thing happens. Why is this happening, and do you have any suggestions for overcoming it? Thanks
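A plain "Killed" with no Python traceback typically means the Linux out-of-memory killer terminated the process because system RAM (not GPU memory) ran out, which can happen if a long 1080p video is loaded into memory all at once. A hedged sketch of one workaround, assuming OpenCV is available, is to stream frames in fixed-size chunks instead:

```python
import cv2

# Hedged sketch: yield frames from a long video in fixed-size chunks
# instead of holding every decoded frame in RAM at once.
def frame_chunks(video_path, chunk_size=25):
    cap = cv2.VideoCapture(video_path)
    chunk = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        chunk.append(frame)
        if len(chunk) == chunk_size:
            yield chunk  # caller processes and discards this chunk
            chunk = []
    if chunk:
        yield chunk
    cap.release()
```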
Ensure you are training on face resolutions of 96x96 only, to start with. Also, ensure the temporal window is 5 frames only.
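A hedged sketch of what that training setup looks like in code; `face_frames` and the function name are illustrative placeholders, not the repo's actual dataset code:

```python
import cv2
import numpy as np

IMG_SIZE = 96  # suggested face crop resolution
WINDOW = 5     # suggested temporal window

# Hedged sketch: build one training sample of 5 consecutive face crops,
# each resized to 96x96.
def make_window(face_frames, start):
    window = face_frames[start:start + WINDOW]
    window = [cv2.resize(f, (IMG_SIZE, IMG_SIZE)) for f in window]
    return np.stack(window)  # shape: (5, 96, 96, 3)
```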
Hey, I haven't reached the training stage yet; I am still preprocessing the data. Do I have to ensure the things you mentioned during preprocessing as well? If yes, how do I do that?
No, you can just preprocess with a lower batch size to avoid memory errors.
I'm already using a batch size of 2. Is it possible that this is because my videos are long (> 5 minutes) and high resolution (1080p)? Should I break them down into smaller chunks?
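In case chunking is worth trying anyway, a hedged sketch using ffmpeg's segment muxer to split a long clip into roughly 60-second pieces without re-encoding; the paths and chunk length are placeholders:

```python
import subprocess

# Hedged sketch: split a video into ~60 s segments via stream copy
# (fast, no quality loss; cuts land on keyframes, so lengths are
# approximate).
def split_video(src, out_pattern="chunk_%03d.mp4", seconds=60):
    subprocess.run([
        "ffmpeg", "-i", src,
        "-c", "copy",
        "-map", "0",
        "-segment_time", str(seconds),
        "-f", "segment",
        "-reset_timestamps", "1",
        out_pattern,
    ], check=True)
```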
No, I do not think long videos are the reason for a GPU memory error. Batch size 2 should work. There must be some other mistake causing it to fail.
I am trying to train a model with my own data. I have the following directory structure:
I've changed the line in preprocess.py from
to
for my directory structure. However, when I run the command given in the readme, I get the following error for every video:
My videos are all 1080p.
I'm using Paperspace with a P5000 GPU, 8 CPUs, and 30 GB of RAM. Can you specify what computing power you used for training, and how I can use what I have available to train my own model? Thanks