Closed neisserBOT closed 3 years ago
Hey, I did not encounter this issue before, but does this behavior occur when you execute the command `python main.py test -d DATA -p PATH`, where `PATH` represents a folder with images? If not, how exactly do you initialize the model and process the images?
This is the relevant portion of code. Every time the function processes an image, memory usage rises. I don't know if it is caused by TensorFlow 1.13.1 or by another library.
```python
def saliency_model(img):
    #model_name = "model_%s_%s.pb" % (dataset, device)
    model_name = "model_salicon_cpu.pb"
    with tf.gfile.Open(model_name, "rb") as file:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(file.read())

    ## define the placeholder for image
    input_img = tf.placeholder(tf.float32, (None, None, None, 3))

    ## define the input and output of the graph
    [predicted_maps] = tf.import_graph_def(graph_def,
                                           input_map={"input": input_img},
                                           return_elements=["output:0"])

    ## read any image
    img_input = cv2.resize(img, (320, 240))
    img_input = cv2.cvtColor(img_input, cv2.COLOR_BGR2RGB)
    img_input = img_input[np.newaxis, :, :, :]  ## reshape image (1, 240, 320, 3)

    ## run session
    with tf.Session() as sess:
        ## send image to the graph
        saliency = sess.run(predicted_maps,
                            feed_dict={input_img: img_input})
        ## reshape image (240, 320, 3)
        saliency = cv2.cvtColor(saliency.squeeze(),
                                cv2.COLOR_GRAY2BGR)
        saliency = np.uint8(saliency * 255)
        saliency = cv2.resize(saliency, (400, 300))

    tf.keras.backend.clear_session()
    gc.collect()
    return saliency
```
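For context, one likely mechanism for the growth is that in TF 1.x, `tf.import_graph_def` adds the model's ops to the global default graph, so each call to the function above appends a fresh copy of the whole model. The shape of that bug can be illustrated with a plain-Python stand-in for the default graph (`DEFAULT_GRAPH` and `import_graph_def` below are illustrative, not TensorFlow APIs):

```python
DEFAULT_GRAPH = []  # stand-in for TF 1.x's global default graph

def import_graph_def(ops):
    # Like tf.import_graph_def: adds ops to the global graph,
    # it does not replace the ops that are already there.
    DEFAULT_GRAPH.extend(ops)

def process_image():
    # Rebuilding the model on every call re-imports all its ops.
    import_graph_def(["conv1", "conv2", "output"])

for _ in range(100):
    process_image()

print(len(DEFAULT_GRAPH))  # 300 ops after 100 calls: unbounded growth
```

The fix is therefore to move the graph construction out of the per-image code path entirely, rather than trying to free memory after the fact.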
This is the memory usage (output of `free`) over successive calls:
```
              total        used        free      shared  buff/cache   available
Mem:           3933        1030         804           2        2098        2620
Swap:             0           0           0

              total        used        free      shared  buff/cache   available
Mem:           3933        1183         651           2        2098        2468
Swap:             0           0           0

              total        used        free      shared  buff/cache   available
Mem:           3933        1204         630           2        2098        2446
Swap:             0           0           0
```
What should I do? Thank you in advance.
It looks like you call the function `saliency_model()` every time you want to process an image, which repeatedly loads the model, creates a session, etc. I can imagine that this could lead to the memory leak you observe.
Try to instead load the model once, create a single session, and within that session iterate over your images. For example:
```python
graph_def = tf.GraphDef()
with tf.gfile.Open("model_salicon_gpu.pb", "rb") as file:
    graph_def.ParseFromString(file.read())

input_plhd = tf.placeholder(tf.float32, (None, None, None, 3))
[predicted_maps] = tf.import_graph_def(graph_def,
                                       input_map={"input": input_plhd},
                                       return_elements=["output:0"])

with tf.Session() as sess:
    for img in img_list:
        img = cv2.resize(img, (320, 240))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = img[np.newaxis, :, :, :]
        saliency = sess.run(predicted_maps,
                            feed_dict={input_plhd: img})
        saliency = cv2.cvtColor(saliency.squeeze(),
                                cv2.COLOR_GRAY2BGR)
        saliency = np.uint8(saliency * 255)
        saliency = cv2.resize(saliency, (400, 300))
```
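If the images arrive one at a time and a single loop over `img_list` isn't practical, the same idea can still be applied by caching the loaded model so the expensive setup runs only once. A minimal sketch of that caching pattern in plain Python, where the hypothetical `get_model()` stands in for the TensorFlow loading code (reading the `.pb`, importing the graph, opening the session):

```python
import functools

@functools.lru_cache(maxsize=None)
def get_model():
    # Expensive setup (loading the .pb, importing the graph,
    # creating a tf.Session) runs only on the first call; later
    # calls return the cached object instead of rebuilding it.
    print("loading model once")
    return {"name": "model_salicon_cpu.pb"}  # stand-in for (graph, session)

first = get_model()
second = get_model()
print(first is second)  # True: the heavy setup ran exactly once
```

Each per-image call then does only the cheap preprocessing and the `sess.run`, so the default graph stops growing between images.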
Great, thank you very much!
First of all, great job.
I have this issue each time I process an image: the memory keeps rising until it fills up. I tried using `session.close()` and `tf.keras.backend.clear_session()`, but without results. Is there any way to stop the growth, or to free the memory each time the algorithm runs?
Thank you in advance.