RidgeRun / gst-inference

A GStreamer Deep Learning Inference Framework
GNU Lesser General Public License v2.1

Small memory leak when using only src pad #320

Open dsteger opened 3 years ago

dsteger commented 3 years ago

There seems to be a small memory leak of roughly 100 KB/s when running the pipeline below. I'm currently looking into this, but thought I would post the issue ahead of discovery. The one delta in this pipeline is our custom backend, but we didn't notice a leak with the 0.6 release of the plugin.

gst-launch-1.0 videotestsrc ! \
video/x-raw, width=1920, height=1080, format=RGB, framerate=60/1 ! tee name=t1 \
t1. ! queue max-size-buffers=3 leaky=no ! videoscale ! videoconvert ! \
video/x-raw, width=416, height=416, format=RGBx, framerate=60/1 ! \
net1.sink_model \
tinyyolov2 name=net1 model-location=$MODEL_LOCATION backend=$BACKEND \
net1.src_model ! videoconvert ! perf ! queue ! fakesink -v

dsteger commented 3 years ago

==1800== 59,360 (14,840 direct, 44,520 indirect) bytes in 371 blocks are definitely lost in loss record 3,065 of 3,070
==1800==    at 0x48481E0: malloc (vg_replace_malloc.c:309)
==1800==    by 0x4A97B6F: g_malloc (in /usr/lib/libglib-2.0.so.0.6000.7)
==1800==    by 0x4AB0383: g_slice_alloc (in /usr/lib/libglib-2.0.so.0.6000.7)
==1800==    by 0x4AB0913: g_slice_alloc0 (in /usr/lib/libglib-2.0.so.0.6000.7)
==1800==    by 0x4A9B757: g_node_new (in /usr/lib/libglib-2.0.so.0.6000.7)
==1800==    by 0x5A12B57: prediction_reset (gstinferenceprediction.c:419)
==1800==    by 0x5A12B57: gst_inference_prediction_new (gstinferenceprediction.c:114)
==1800==    by 0x5A12BB7: prediction_copy (gstinferenceprediction.c:179)
==1800==    by 0x5A12C9F: prediction_scale (gstinferenceprediction.c:498)
==1800==    by 0x5A12C9F: node_scale (gstinferenceprediction.c:552)
==1800==    by 0x4A9C23F: g_node_copy_deep (in /usr/lib/libglib-2.0.so.0.6000.7)
==1800==    by 0x5A1349F: gst_inference_prediction_scale (gstinferenceprediction.c:586)
==1800==    by 0x5A0B4FB: gst_inference_meta_transform_new_meta (gstinferencemeta.c:294)
==1800==    by 0x5A0B4FB: gst_inference_meta_transform (gstinferencemeta.c:318)
==1800==    by 0x5A07B27: video_inference_transform_meta (gstvideoinference.c:829)

rrcarlosrodriguez commented 3 years ago

Hi @dsteger, I'm not able to reproduce this issue using TensorFlow backend. Are you using the latest 0.10 release?

dsteger commented 3 years ago

Hi @rrcarlosrodriguez Thank you for trying to reproduce this. The memory leak is quite slow and takes some time to notice. Did you try running the pipeline with valgrind? "valgrind --tool=memcheck --leak-check=full gst-launch-1.0 mypipeline". I'm curious if you have any loss reported.

I'm using vanilla v0.10 gst-inference and an empty backend based on v8.0 that just creates prediction data and returns it.