Closed dagongji10 closed 5 years ago
Please try to reduce batch_size in
https://github.com/HYPJUDY/Decouple-SSAD/blob/8af4bfd5840c6403f7cc4b6cea2ea328db30498e/config.py#L28
This should reduce the memory cost, but you may need to tune learning_rate
and training_epochs
as well to achieve the best performance.
https://github.com/HYPJUDY/Decouple-SSAD/blob/8af4bfd5840c6403f7cc4b6cea2ea328db30498e/config.py#L41-L42
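When shrinking the batch size, a common heuristic is to scale the learning rate linearly with it. A minimal sketch (the default values 48 and 1e-4 are assumptions here, not confirmed from config.py; `scale_for_memory` is a hypothetical helper):

```python
# Hypothetical helper: linearly rescale the learning rate when shrinking
# the batch size to fit a smaller GPU (the "linear scaling" heuristic).
def scale_for_memory(base_batch_size, base_lr, new_batch_size):
    """Return a learning rate scaled proportionally to the batch size."""
    return base_lr * new_batch_size / base_batch_size

# Assumed defaults; check the linked lines of config.py for the real values.
base_batch_size = 48
base_lr = 1e-4

new_batch_size = 32  # may fit an 8 GB GPU, needs testing
new_lr = scale_for_memory(base_batch_size, base_lr, new_batch_size)
print(new_batch_size, new_lr)
```

This is only a starting point; as noted above, training_epochs may also need retuning.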
I've tried a batch size of 32 before; the performance was still comparable, though perhaps not as good as with 48. I didn't carefully tune the other parameters at that time.
If you find parameters that work well on a small-memory GPU, please share them with us.
Thanks and good luck!
@HYPJUDY Thanks for your nice work. I have tested decouple-ssad without any problem, but when I try to run train mode, I get an OOM error. I saw you said we need 'one or more GPU with 12G memory'; does decouple-ssad use all of the 12 GB? My GPU only has 8 GB. Can I change some settings in config.py so that training runs normally?