Open · younfor opened this issue 7 years ago
Interesting. I remember seeing this error sometimes, IIRC we installed newer drivers to fix it, but this seems like a pretty benign fix. I can kinda see why this might allocate more memory. In any case we shouldn't be running an eval just to get a bunch of zeros. Good catch, wanna roll this into the other PR?
Did you see a performance change when you set allow_growth = True? And do you have a cluster, or is it a bunch of cores on a local machine?
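A minimal sketch of the pattern the comment above seems to describe (assuming the leak came from building and evaluating fresh zero tensors on every call; the class and method names below are hypothetical, not this repository's actual code):

```python
import numpy as np
import tensorflow as tf

class Model(object):
    def __init__(self, session, var_shapes):
        self.session = session
        self.var_shapes = var_shapes

    def reset_gradients_leaky(self):
        # Creates new tf.zeros ops on every call, so the graph (and the
        # memory tracking it) grows without bound across epochs.
        with self.session.as_default():
            return [tf.zeros(shape).eval() for shape in self.var_shapes]

    def reset_gradients_fixed(self):
        # Plain numpy zeros: no new graph nodes, no eval, constant memory.
        return [np.zeros(shape, dtype=np.float32) for shape in self.var_shapes]
```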
For example: I changed the files to load my own pictures with shape [None, 32, 32, 3]. Everything is OK, but the problem appears when I set partition=2 or 4, 8, ... My machine is a GTX 1070 on Ubuntu 14.04 with 8GB of RAM. I also changed the model init code to:

```python
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.allocator_type = 'BFC'
config.gpu_options.per_process_gpu_memory_fraction = 0.2
```
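For completeness, a minimal sketch of how a config like this is usually passed when the session is created in TensorFlow 1.x (the variable names here are illustrative, not the repository's actual init code):

```python
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                    # allocate GPU memory on demand
config.gpu_options.per_process_gpu_memory_fraction = 0.2  # cap this process at ~20% of the GPU

# The options only take effect for sessions created with this config:
session = tf.Session(config=config)
```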
The above lets several processes share one GPU. The bug: after the program runs for some epochs, nvidia-smi shows GPU memory growing without stop, from 800MB to 2G, 4G, 8G..., until it finally fails with a CUDA OOM error. My way to solve it: after checking, I found the function that leads to the GPU memory leak:

```python
def reset_gradients(self):
    with self.session.as_default():
```

Though I don't know the details of why this change works, it did.
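One way to confirm this kind of leak (a debugging sketch, not something from the original report) is to finalize the graph after construction; TensorFlow 1.x then raises an error the first time any code path tries to add new ops, which points directly at the leaking function:

```python
import tensorflow as tf

# Build the whole graph first, then lock it. Any later attempt to create
# ops (e.g. tf.zeros(...) inside the training loop) raises a RuntimeError
# instead of silently growing graph/GPU memory.
sess = tf.Session()
sess.graph.finalize()

try:
    tf.zeros([4])  # simulates an op accidentally created per step
except RuntimeError as e:
    print("graph is growing at runtime:", e)
```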
email: younfor@yeah.net