Closed myalos closed 2 years ago
Hello, it seems to me that TensorFlow uses all available GPU memory, even when it is not needed. By contrast, PyTorch allocates memory only as needed.
You can try to use the option tf.config.experimental.set_memory_growth
https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
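A minimal sketch of that option, assuming the TF 2.x API from the linked guide (it must be called before any GPUs are initialized, and it only takes effect on a machine with a visible GPU):

```python
import tensorflow as tf

# Ask TensorFlow to grow its GPU allocation on demand instead of
# reserving nearly all device memory up front.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

With this set, `nvidia-smi` should show usage starting small and growing as tensors are allocated, similar to PyTorch's behavior.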
Thanks for the reply. I chose this approach, which is closest to the original code:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.16
with tf.Session(config=config) as sess:
    ...

This caps GPU memory at 3477 MiB, which is close to the memory consumption of the PyTorch version.
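For context, `per_process_gpu_memory_fraction` caps the process at that fraction of the card's total memory, so the observed 3477 MiB implies a total that can be back-calculated. A quick sanity check (the implied total here is inferred arithmetic, not a figure stated in the thread):

```python
# per_process_gpu_memory_fraction caps usage at fraction * total_memory,
# so the total can be inferred from the observed allocation.
fraction = 0.16
observed_mib = 3477
implied_total_mib = round(observed_mib / fraction)
print(implied_total_mib)  # ~21731 MiB, i.e. roughly a 22 GB card
```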
Thanks for sharing! I ran the TensorFlow version and it consumed 9 GB of GPU memory, while this code consumed only 3 GB. Why is that?