Open shivSD opened 5 years ago
Resolved by inserting the following at the beginning:

```python
import tensorflow as tf

gpu_fraction = 0.05  # controls the fraction of GPU memory TensorFlow may use
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = gpu_fraction
session = tf.Session(config=config)
```
You will still get the 'ran out of memory' warning, but checking GPU memory usage will show the process is only using as much as you allocated.
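For reference, `per_process_gpu_memory_fraction` caps the allocation at a fraction of the card's total memory. A minimal sketch of that arithmetic, assuming a hypothetical 8 GB GPU (adjust `total_gpu_mb` for your card):

```python
# Hypothetical helper: how much memory a given gpu_fraction allows
# TensorFlow to allocate, assuming an 8 GB (8192 MB) GPU.
def memory_cap_mb(gpu_fraction, total_gpu_mb=8192):
    """Approximate memory (MB) TensorFlow may allocate for this process."""
    return gpu_fraction * total_gpu_mb

print(memory_cap_mb(0.05))  # 0.05 of 8 GB -> ~410 MB
```

So with `gpu_fraction = 0.05`, TensorFlow is limited to roughly 410 MB instead of grabbing nearly the whole card.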
I trained a YOLOv2 architecture on custom images using darknet. After freezing the graph with darkflow, the model weight (.pb) file is 268 MB, but once we load it onto the GPU for inference it consumes 7.93 GB. I know TensorFlow allocates buffers for the output data of each stage up front. Can somebody please explain why TensorFlow uses so much memory?
Btw, running the same model with the darknet framework takes 2.9 GB of RAM.
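Worth noting: by default, TensorFlow 1.x reserves most of the free GPU memory at session start regardless of what the graph actually needs, so the 7.93 GB figure mostly reflects that allocation strategy rather than real buffer demand. A rough back-of-the-envelope sketch of the per-layer activation buffers (the layer shapes below are illustrative approximations of early YOLOv2 stages at a 608x608 input, not the exact darkflow graph):

```python
# Rough, hypothetical estimate: float32 activation buffers for a handful of
# YOLOv2-like feature maps. The shapes are illustrative assumptions.
BYTES_PER_FLOAT32 = 4

def activation_mb(h, w, c):
    """Memory (MB) for one float32 feature map of shape (h, w, c)."""
    return h * w * c * BYTES_PER_FLOAT32 / 1024**2

layers = [
    (608, 608, 32),   # first conv output
    (304, 304, 64),   # after pooling
    (152, 152, 128),
    (76, 76, 256),
    (38, 38, 512),
    (19, 19, 1024),   # final feature map
]

total = sum(activation_mb(*shape) for shape in layers)
print(round(total, 1))  # -> 88.8
```

Even generously overcounting, the activations are on the order of hundreds of MB, nowhere near 7.93 GB, which points to the default allocator (plus cuDNN workspace) rather than the model itself. That matches darknet's 2.9 GB figure being much lower.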