jeremy-myers opened 6 years ago
In the code:

```python
def train(model, data, batch_size=128, learning_rate=FLAGS.learning_rate,
          log_dir='./log', checkpoint_dir='./checkpoint', num_epochs=-1):
```
it seems `batch_size` is never taken from the command-line argument `--batch_size=N`; the hard-coded default of 128 is used instead. So, I recommend you change this in the `train` function.
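A minimal sketch of one possible fix, using `argparse` in place of `tf.app.flags` to illustrate the same principle (the flag names here mirror the ones in the issue, but the exact flag setup in the repo may differ): resolve the flag values inside `train` at call time, rather than freezing them as `def`-time defaults. Note that `learning_rate=FLAGS.learning_rate` in the original signature is evaluated once when the function is defined, which has the same pitfall.

```python
import argparse

# Hypothetical flag definitions mirroring the project's FLAGS.
parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', type=int, default=128)
parser.add_argument('--learning_rate', type=float, default=0.001)
FLAGS, _ = parser.parse_known_args()

def train(model, data, batch_size=None, learning_rate=None,
          log_dir='./log', checkpoint_dir='./checkpoint', num_epochs=-1):
    # Resolve flags at call time: a default like batch_size=128 (or
    # learning_rate=FLAGS.learning_rate) is baked in when the function
    # is defined, so later command-line values would be ignored.
    batch_size = FLAGS.batch_size if batch_size is None else batch_size
    learning_rate = FLAGS.learning_rate if learning_rate is None else learning_rate
    return batch_size, learning_rate  # stand-in for the real training loop
```

With this pattern, `python main.py --batch_size=32` reaches `train` even though the caller never passes `batch_size` explicitly.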
When I evaluate the sample, I pass the argument `--batch_size=N` for a range of N, but I always receive this error:

```
2018-05-01 22:46:06.579010: W tensorflow/core/framework/op_kernel.cc:1158] Resource exhausted: OOM when allocating tensor with shape[3,3,128,256]
```
The code does not seem to respond to my argument at all. What is the correct way to change the batch size for BNN_cifar10?