FrancescoRusticali opened 9 months ago
To add more comments: setting AllowGrowth to true, or changing the value of PerProcessGpuMemoryFraction on GPUOptions, does not seem to help, nor does using the method tf.config.set_memory_growth().
All of these work fine in Python.
How is it possible to manage GPU memory usage in TensorFlow.NET?
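For comparison, the Python equivalents of these settings (which do behave as expected there) look roughly like this. This is a minimal sketch assuming TF 2.x with the compat.v1 API available; the 0.5 fraction is an arbitrary example value, not something from this thread:

```python
import tensorflow as tf

# TF 2.x style: grow GPU memory on demand instead of reserving it all upfront.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# TF1-style ConfigProto, the counterpart of the GPUOptions / AllowGrowth /
# PerProcessGpuMemoryFraction settings exposed in TensorFlow.NET:
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # example value
sess = tf.compat.v1.Session(config=config)
```

In Python both paths take effect because the options reach the runtime before the first device allocation happens; the question in this thread is effectively whether TensorFlow.NET applies them at the same point.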
Thank you for the suggestion. I'm afraid it's not helping either. Whatever I do, the memory limit stays the same: I always see the same memory occupation (around 75% of the total dedicated GPU memory), and there seems to be no way to increase it if needed, or to reduce it when more processes need to run in parallel. I also tried running the exact same code from the GpuLeakByCNN example above, but I get the same behaviour.
Hi,
I tried to investigate further.
Even calling c_api.TFE_ContextOptionsSetConfig directly does not change the situation. I also tried passing the serialized config directly, following for example this.
There's probably something I'm not understanding. How should these config options be applied?
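For what it's worth, the serialized-config route can be sanity-checked without TensorFlow at all: ConfigProto is an ordinary protobuf, so the byte string for gpu_options can be hand-encoded. The field numbers below are my reading of tensorflow/core/protobuf/config.proto (ConfigProto.gpu_options = 6, GPUOptions.per_process_gpu_memory_fraction = 1, GPUOptions.allow_growth = 4); treat them as an assumption to verify against your TF version. A sketch:

```python
import struct

def serialized_gpu_config(allow_growth=True, fraction=None):
    """Hand-encode a TF ConfigProto containing only gpu_options.

    Assumed field numbers (from config.proto):
      ConfigProto.gpu_options = 6 (message)
      GPUOptions.per_process_gpu_memory_fraction = 1 (double)
      GPUOptions.allow_growth = 4 (bool)
    """
    inner = b""
    if fraction is not None:
        # field 1, wire type 1 (64-bit): tag = (1 << 3) | 1 = 0x09
        inner += b"\x09" + struct.pack("<d", fraction)
    if allow_growth:
        # field 4, wire type 0 (varint): tag = (4 << 3) | 0 = 0x20, value 1
        inner += b"\x20\x01"
    # field 6, wire type 2 (length-delimited): tag = (6 << 3) | 2 = 0x32
    return b"\x32" + bytes([len(inner)]) + inner

# The allow_growth-only config is the well-known four bytes 32 02 20 01.
print(serialized_gpu_config().hex())  # -> '32022001'
```

These are the bytes one would hand to TFE_ContextOptionsSetConfig (it takes a proto buffer and its length); if the memory cap still does not move, the options are presumably being applied after the eager context (and its allocator) already exists.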
Description
Hi all, is there any way to set a specific limit on GPU memory usage (different from the TensorFlow default)?
I'm looking for something similar to this: https://www.tensorflow.org/api_docs/python/tf/config/set_logical_device_configuration
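In Python, the API linked above is used roughly like this; a sketch assuming TF 2.x and at least one visible GPU, with 1024 MB as an arbitrary example limit:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Cap TensorFlow's allocation on the first GPU at 1024 MB instead of
    # letting it reserve most of the device memory. Must be called before
    # the logical devices are initialized (i.e. before any GPU op runs).
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
```

Whether TensorFlow.NET exposes an equivalent that runs early enough is exactly what this issue is asking.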
Alternatives
No response