anupamja-kore opened 2 years ago
@tombstone @pkulzc @jch1 I am experiencing a similar issue. I am using TensorFlow 2.6 to run an object detection model trained with TensorFlow 1.14 in backward-compatibility mode. When I run inference on a stream of images, CPU memory grows over time and after a while the process crashes. When I use TensorFlow 1.14 for inference instead, it runs without any problem for days and months. Any suggestions?
I am a beginner in TensorFlow. I used transfer learning to create a custom object detection model from the "ssd_resnet101_v1_fpn_keras" pre-trained model.
I followed the documentation below for custom training:
I observed one issue: when I use the model for detection, it takes a lot of RAM and does not release it.
I am sharing the code snippet where it takes a lot of RAM and never releases it.
Memory profiler info:
Note: I am facing this issue with CPU memory, not with GPU memory.
As you can see, it takes 191.8 MB of RAM and does not release it after the process completes.
I tried gc.collect() and tf.keras.backend.clear_session() to release the memory.
Neither works for me.
Can anyone please help me solve this problem?
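Until the leak itself is found, one common workaround for long-running inference services is to run the detection in a short-lived worker process: whatever the worker leaks is returned to the OS when it exits. A minimal stdlib sketch of the pattern — the worker body is a hypothetical placeholder where the real model loading and detection would go:

```python
import json
import subprocess
import sys

# Worker script run in a fresh interpreter. In a real setup it would load the
# model and run detection; all memory it allocates dies with the process.
WORKER = """
import json, sys
batch = json.load(sys.stdin)
# hypothetical stand-in for detection: report the size of each "image"
print(json.dumps([len(s) for s in batch]))
"""

def detect_in_subprocess(image_batch):
    # Launch a child Python, send the batch on stdin, read results from stdout.
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=json.dumps(image_batch),
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

print(detect_in_subprocess(["fake-image-1", "fake-image-2"]))  # [12, 12]
```

Restarting a worker per batch (or every N batches) costs model-load time, so it only pays off when the leak would otherwise crash the process; it bounds memory rather than fixing the underlying bug.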