Closed snapfinger closed 6 years ago
@snapfinger Have you figured out what's going on?
@barathbheeman Yep. They use a batch size of 100, which is quite large and requires a lot of memory and compute. Reducing the batch_size
variable in the training code to a smaller value like 10 works on my machine.
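As a rough illustration of why this helps (a sketch only; the actual variable name and layer shapes depend on the repo's training script, and the dimensions below are made-up examples), activation memory scales linearly with batch size, so dropping from 100 to 10 cuts it by about 10x:

```python
def batch_activation_bytes(batch, h, w, c, bytes_per_elem=4):
    """Approximate float32 memory for one layer's activations, in bytes."""
    return batch * h * w * c * bytes_per_elem

# Hypothetical layer shape: 256x256 feature maps with 64 channels.
big = batch_activation_bytes(100, 256, 256, 64)   # batch_size = 100 (original)
small = batch_activation_bytes(10, 256, 256, 64)  # batch_size = 10 (reduced)
print(big // small)  # 10
```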
Hi Míriam, thanks for your code. I tried running it on a computer with a Tesla K40 GPU, which has 12 GB of video RAM, but still ran into the "cannot allocate memory" error someone posted about before. This is pretty surprising to me. Is the computation really that heavy? Did you ever record the maximum RAM the training process needs?