alexlee-gk / video_prediction

Stochastic Adversarial Video Prediction
https://alexlee-gk.github.io/video_prediction/
MIT License

ResourceExhaustedError OOM even my batch_size equal 1? #6

Closed Sander-houqi closed 6 years ago

Sander-houqi commented 6 years ago

Hi Alex,

Thanks for releasing all the code. I am working on fitting a new dataset into your model. I resize the (height, width) to (240, 320), the same as the UCF101 dataset, and I set context_frames to 2 in order to predict 8 frames with the model. When I train the model with batch_size 1, everything starts fine for the first few steps, but after a while it fails with a ResourceExhaustedError (OOM). I don't know why this is happening.

Do you have any suggestions on this issue?

Thanks a lot.

Best, Sander

alexlee-gk commented 6 years ago

Hi Sander, the default model is not fully convolutional and, as a consequence, is quite expensive (and has too many weights) for large images. In particular, this is likely caused by the CDNA transformations. I'd suggest trying the dna or flow transformations instead. This can be done by passing this flag to the train.py script: --model_hparams transformation=dna. For the flow transformations, I'd also use some regularization on the flows, e.g. --model_hparams tv_weight=0.001,transformation=flow.
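For reference, full invocations might look something like the following. Only the --model_hparams values come from the suggestion above; the other flags and the angle-bracket paths are placeholders you'd substitute for your own setup:

```shell
# DNA transformations (placeholder paths; adjust for your dataset):
python train.py --input_dir <your_dataset_dir> --output_dir <log_dir> \
  --model_hparams transformation=dna

# Flow transformations, with total-variation regularization on the flows:
python train.py --input_dir <your_dataset_dir> --output_dir <log_dir> \
  --model_hparams tv_weight=0.001,transformation=flow
```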

With these hyperparameters, a batch_size of 1 should not give OOM errors. To train with a batch_size of 16, you'd need to do multi-GPU training with 4 or 8 GPUs, or reduce the number of expensive layers (e.g. use larger strides, use fewer layers, only use ConvRNN layers once the feature maps are small enough, etc.).
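A back-of-the-envelope sketch (with made-up layer sizes, not the repo's actual architecture) of why a model that isn't fully convolutional blows up on large images: if transformation kernels are generated by a dense layer from a flattened feature map (CDNA-style), the weight count scales with H * W, whereas a convolution's weights do not depend on spatial resolution.

```python
def dense_kernel_generator_params(height, width, channels,
                                  num_kernels=10, kernel_size=5):
    # CDNA-style: a fully-connected layer maps the flattened feature map to
    # the transformation kernels, so the weight count scales with H * W.
    flat_features = height * width * channels
    return flat_features * num_kernels * kernel_size * kernel_size

def conv_layer_params(in_channels, out_channels, kernel_size=5):
    # A convolution's weight count is independent of spatial resolution.
    return in_channels * out_channels * kernel_size * kernel_size + out_channels

small = dense_kernel_generator_params(64, 64, 16)     # e.g. 64x64 inputs
large = dense_kernel_generator_params(240, 320, 16)   # 240x320 inputs
print(large / small)  # 18.75: ~19x more weights from the resolution change alone
```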

Sander-houqi commented 6 years ago

Hi Alex,

Thanks for your timely reply. I set --model_hparams transformation=dna and batch_size to 1, but I still get an OOM error; it just runs longer before failing than with transformation=cdna. So I'd like to know why GPU memory increases while training the model. If there is no good resolution, I will reduce the number of layers, etc. Extra information: my GPU is a GeForce GTX 1080 with 11GB of memory.

I look forward to your reply. Thank you.

Best, Sander

alexlee-gk commented 6 years ago

Can you try the flow transformation instead? DNA is still somewhat computationally and memory intensive, given that there is no native efficient implementation of locally connected layers.
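To illustrate why that matters, here is a naive locally connected (DNA-style) transform sketched in NumPy. This is not the repo's implementation (the function name and edge padding are assumptions): each output pixel has its own k x k kernel, so a straightforward implementation has to loop over, or materialize, H * W separate patches rather than reusing one shared kernel as a convolution would.

```python
import numpy as np

def dna_transform(image, kernels):
    # image:   (H, W) single-channel frame
    # kernels: (H, W, k, k) -- one kernel PER output pixel (locally connected)
    H, W = image.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    for i in range(H):
        for j in range(W):
            # Each pixel applies its own kernel to its neighborhood; a
            # vectorized version would materialize all H*W*k*k patch values.
            patch = padded[i:i + k, j:j + k]
            out[i, j] = (patch * kernels[i, j]).sum()
    return out
```

With identity kernels (1 at the center, 0 elsewhere), the transform returns the input frame unchanged, which is a handy sanity check.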

Sander-houqi commented 6 years ago

Thanks a lot. When I used smaller images and reduced the number of feature maps, the problem was solved. Closing this issue.