RidgeRun / gst-interpipe

GStreamer plug-in for interpipeline communication

Apparent memory leak when taking a single snapshot #152

Closed · carlosduran-visiontir closed this 1 year ago

carlosduran-visiontir commented 1 year ago

Greetings,

In my Linux system, I am running gstd 0.15 as a daemon, plus two processes acting as server and client, interconnected with gst-interpipe 1.18. The server takes video from a video device and sends it to an interpipesink. The pipeline is similar to the following one:

Server pipeline:
v4l2src ! videorate rate=30 ! video/x-raw,width=1920,height=1080 ! queue ! interpipesink name=video

Then, from another process, I intend to connect to this interpipesink and take a single snapshot into my file system with the following pipeline:

Client pipeline:
interpipesrc num-buffers=1 max-bytes=1 format=time listen-to=video ! videoconvert ! video/x-raw,width=1920,height=1080 ! avenc_tiff ! queue ! filesink location=/media/card/snapshots/snap.tiff

For some reason, every time this snapshot pipeline runs, the gstd daemon's memory usage seems to increase by ~10 MB, which is not deallocated afterwards (even after deleting the pipeline). Am I missing something?
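
For reference, one way to observe the growth is to poll the daemon's resident memory from a shell while reproducing the issue (a rough sketch; it assumes the daemon process is named gstd and that pidof, ps and watch are available):

    # Print gstd's RSS and VSZ (in KB) once per second
    watch -n 1 'ps -o rss=,vsz= -p $(pidof gstd)'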

How to reproduce:

  1. Start the gstd daemon
  2. Open gst-client
  3. Create the server pipeline and set it to play
  4. Create the client pipeline and set it to play (see the command sketch after this list)
  5. You'll observe the mentioned memory leak (~10 MB/s), even after the pipeline itself is done.
  6. Manually stop and delete the pipeline. The memory growth stops, but the leaked memory is not recovered.
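
A minimal gst-client sequence for these steps could look like the following (a sketch built from the pipelines above; gstd is assumed to be running already):

    # 3. Create and play the server pipeline
    gst-client-1.0 pipeline_create server v4l2src ! videorate rate=30 ! video/x-raw,width=1920,height=1080 ! queue ! interpipesink name=video
    gst-client-1.0 pipeline_play server

    # 4. Create and play the snapshot (client) pipeline
    gst-client-1.0 pipeline_create client interpipesrc num-buffers=1 max-bytes=1 format=time listen-to=video ! videoconvert ! video/x-raw,width=1920,height=1080 ! avenc_tiff ! queue ! filesink location=/media/card/snapshots/snap.tiff
    gst-client-1.0 pipeline_play client

    # 6. Stop and delete the client pipeline once the snapshot is written
    gst-client-1.0 pipeline_stop client
    gst-client-1.0 pipeline_delete client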

Other pipelines I've tried (with similar or worse results):

  - interpipesrc listen-to=video num-buffers=1 is-live=true allow-renegotiation=true stream-sync=passthrough-ts ! queue max-size-buffers=3 leaky=downstream ! jpegenc ! multifilesink location=/tmp/snapshot%d.jpg (example taken from the gst-interpipe docs: https://developer.ridgerun.com/wiki/index.php/GstInterpipe-_Example_2:_Digital_Camera)
  - interpipesrc format=time block=false max-bytes=1 listen-to=video ! fakesink (doesn't create a memory leak, but is useless for taking a snapshot)
  - interpipesrc block=false num-buffers=1 listen-to=video ! videoconvert ! video/x-raw, width=1920,height=1080 ! avenc_tiff ! queue ! filesink location=/media/card/snapshots/a.tiff

Any help would be great!

Greetings, Carlos

michaelgruner commented 1 year ago

Hey Carlos,

I think this is not a leak, but an effect of the appsrc (inside the interpipesrc) accumulating buffers internally after the snapshot has been taken. Can you set block=true on the interpipesrc? This will make the interpipesrc block instead of accumulating more buffers once the max-bytes threshold has been reached.
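
For reference, a sketch of the same snapshot pipeline with the suggested property added (using the gst-client commands from the report):

    # Recreate the client pipeline with block=true so the internal appsrc blocks
    # instead of queuing more buffers once max-bytes is reached
    gst-client-1.0 pipeline_create client interpipesrc block=true num-buffers=1 max-bytes=1 format=time listen-to=video ! videoconvert ! video/x-raw,width=1920,height=1080 ! avenc_tiff ! queue ! filesink location=/media/card/snapshots/snap.tiff
    gst-client-1.0 pipeline_play client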

carlosduran-visiontir commented 1 year ago

Hello Michael, thanks for the quick response!

I tried what you mentioned, but it seems to aggravate the problem: once the pipeline is set to play, the RAM usage of gstd increases rapidly (100-200 MB/s), stalling the system until the Linux OOM killer steps in and kills gstd.

The client pipeline that I tried is the following:
interpipesrc num-buffers=1 max-bytes=1 block=true format=time listen-to=video ! videoconvert ! video/x-raw,width=1920,height=1080 ! avenc_tiff ! queue ! filesink location=/media/card/snapshots/snap.tiff

michaelgruner commented 1 year ago

@cjarguedas do you mind looking into this?

cjarguedas commented 1 year ago

Hi Carlos, could you provide the script you are using for testing and, if possible, a capture of the actual memory leak using tools such as massif?
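
For example, something along these lines could produce a massif profile (a sketch; it assumes gstd can be kept in the foreground so that valgrind wraps the actual daemon process — check gstd --help for the exact options on your build):

    # Run the daemon under massif; the heap profile is written when gstd exits
    valgrind --tool=massif --massif-out-file=gstd-massif.out gstd

    # Reproduce the snapshot scenario from another terminal, stop gstd cleanly,
    # then inspect the recorded heap snapshots
    ms_print gstd-massif.out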

carlosduran-visiontir commented 1 year ago

Hi Christian,

Here is a bash script to reproduce the scenario:

#!/bin/bash
# Reproduction script: creates the capture (server) pipeline feeding an interpipesink
# and the snapshot (client) pipeline reading from it through an interpipesrc.

createServer()
{
    gst-client-1.0 pipeline_create server v4l2src device=/dev/video1 ! videorate rate=30 max-rate=30 drop-only=true ! video/x-raw,width=1920,height=1080 ! queue ! interpipesink name=video
    sleep 1
    gst-client-1.0 pipeline_play server
}

createClient()
{
    gst-client-1.0 pipeline_create client interpipesrc num-buffers=1 max-bytes=1 format=time listen-to=video ! videoconvert ! video/x-raw,width=1920,height=1080 ! avenc_tiff ! queue ! filesink location=/media/card/snapshots/snap.tiff
    sleep 1
    gst-client-1.0 pipeline_play client
    sleep 3
}

echo "Please make sure to run gstd before running this program!"
echo 'Select an option:'
echo '1) Take a single snapshot'
echo '2) Take a snapshot every 3 seconds'
read -rsn1 option

createServer
sleep 3
gst-client-1.0 pipeline_play server

createClient
sleep 3

case $option in
    1)
        echo "Taking single snapshot"
        gst-client-1.0 pipeline_play client
        sleep 3 # Give time to take snap
        gst-client-1.0 pipeline_stop client
        gst-client-1.0 pipeline_delete client
        exit 0
        ;;
    2)
        while true
        do
            echo "Taking snapshot"
            gst-client-1.0 pipeline_play client
            sleep 3 # Give time to take snap

            echo 'Press any key to take another (esc to exit): '
            read -rsn1 key
            if [[ $key == $'\e' ]]; then
                echo "Exiting..."
                exit 0
            fi
            gst-client-1.0 pipeline_stop client
            gst-client-1.0 pipeline_delete client
            createClient
            sleep 3
        done
        ;;
    *)
        exit 0
        ;;
esac

In my system, just making the client pipeline transition to the ready state (the first pipeline_play client) already makes gstd's RAM usage grow significantly. The snapshots are taken properly anyway (at first, at least).

I tried valgrind with both memcheck and massif, but unfortunately the OOM killer steps in and kills gstd before I can get a valid report.

If you notice a misconfiguration in my pipelines please let me know!

Thanks!

cjarguedas commented 1 year ago

Hi Carlos, sorry for the delay. I tested your code and the memory leak clearly happens, but when I added block=true and monitored the memory with a small tool, I obtained the following results.

Memory map samples for the gstd process (PID 198285), one line per poll of the monitoring tool:

    sample(s)   mapped      writeable/private   shared
    1-8         128404K     11456K              28K
    9           206188K     20444K              28K
    10-12       353652K     38088K              28K
    13          495148K     66368K              28K
    14-41       717980K     94352K              156K

As you can see, the memory usage settles at the last value. I'll do further testing with pipeline tuning, but if possible, could you share the specs of your camera with the following command?

v4l2-ctl -d /dev/video1 --list-formats-ext

Just to be sure.

michaelgruner commented 1 year ago

I get the same results as @cjarguedas. Without block=true I do see an increase in memory usage, which stops if I set block=true.

carlosduran-visiontir commented 1 year ago

Hi again! Thank you for your answers, and sorry for the late reply; my workload has shifted to another project.

Setting the block property to true does indeed seem to fix the issue. Thank you!

michaelgruner commented 1 year ago

Thanks for the confirmation

carlosduran-visiontir commented 1 year ago

Hello,

After testing the solution in our product, the problem unfortunately seems to persist.

The pipeline descriptions are as follows:

Server:
v4l2src device=/dev/video1 ! videorate rate=30 max-rate=30 drop-only=true ! video/x-raw,width=1920,height=1080 ! queue ! interpipesink name=color

Client:
interpipesrc block=true num-buffers=1 max-bytes=1 format=time listen-to=color ! videoconvert ! video/x-raw, format=RGB, width=1920,height=1080 ! avenc_tiff ! queue ! filesink location=/media/card/snapshots/test-2023_07_19T11_53_59-Vis.tiff

I still get a memory leak of around 30 MB per snapshot. Additionally, with the new block=true property, it seems that stopping the pipeline (gstc_pipeline_stop) eventually freezes.

Answering @cjarguedas:

The output of v4l2-ctl -d /dev/video1 --list-formats-ext:

ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'YUYV' (YUYV 4:2:2)
                Size: Stepwise 176x144 - 1920x1080 with step 16/8
        [1]: 'NV12' (Y/CbCr 4:2:0)
                Size: Stepwise 176x144 - 1920x1080 with step 16/8
        [2]: 'NV16' (Y/CbCr 4:2:2)
                Size: Stepwise 176x144 - 1920x1080 with step 16/8
        [3]: 'BA12' (12-bit Bayer GRGR/BGBG)
                Size: Stepwise 176x144 - 1920x1080 with step 16/8