BrettRD / ros-gst-bridge

a bidirectional ros to gstreamer bridge and utilities for dynamic pipelines

Handling asynchronous sources #42

Open sandman opened 2 years ago

sandman commented 2 years ago

Hi @BrettRD

I believe this issue may be related to #33 but I'm still creating a new Issue as the scenario is slightly different:

I'm trying to stream ROS topics over the network with a sender pipeline like so: rosimagesrc --> videoencode --> rtp --> udpsink. My receiver is the reverse: udpsrc --> rtp --> videodecode --> appsink. Now, the issue that I face is that sometimes when the input to the sender side stops and is restarted, the sender pipeline does not push the traffic to the UDP port.
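For concreteness, a sketch of the two pipelines with placeholder element names (x264enc/avdec_h264 and the port stand in for whatever encoder/decoder I actually use):

    # sender (sketch)
    gst-launch-1.0 rosimagesrc ! videoconvert ! x264enc tune=zerolatency \
        ! rtph264pay pt=96 ! udpsink host=192.168.1.10 port=5000

    # receiver (sketch)
    gst-launch-1.0 udpsrc port=5000 \
        caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
        ! rtph264depay ! avdec_h264 ! videoconvert ! appsink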

I'm looking to build a debug version of ros-gst-bridge so I can put breakpoints and trace the execution path. It's tricky because the GStreamer main loop and the ROS threads interact with each other... any suggestions would be much appreciated!

sandman commented 2 years ago

I just noticed that rosimagesrc is exposed as a ROS node. This is interesting and potentially the reason why my pipeline does not work...

I have a standalone C++ application/ROS node that runs a customized GStreamer pipeline which is essentially the sender pipeline mentioned above. Now with rosimagesrc, my GStreamer pipeline is split across two ROS nodes as the rest of the elements that rosimagesrc links with belong to a different ROS node...

BrettRD commented 2 years ago

The default build for gst-bridge is release with debug info. I've had a decent time running gdb over a whole pipeline with the optimiser left on; hunting through the bucket of threads was tedious though. https://github.com/BrettRD/ros-gst-bridge/blob/ros2/gst_bridge/CMakeLists.txt#L4 If your breakpoint behaviour is really bad, you can enable the commented-out #set(CMAKE_BUILD_TYPE Debug)
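Concretely, the relevant lines near the top of gst_bridge/CMakeLists.txt look roughly like this (a sketch; your checkout may differ):

    # release with debug info is the default; gdb works, optimiser stays on
    set(CMAKE_BUILD_TYPE RelWithDebInfo)
    # uncomment for a full debug build with reliable breakpoints
    # set(CMAKE_BUILD_TYPE Debug)

You can also override it from the command line without editing the file:

    colcon build --packages-select gst_bridge --cmake-args -DCMAKE_BUILD_TYPE=Debug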

Exposing ROS nodes from within the pipeline was one of the main design goals of gst-bridge, that way I can shim ROS compatibility into any program that supports a gstreamer pipeline. The only limitation I've found with this design pattern is that you can't use ComposableNodes to get shared memory transport, the GScam (appsrc/appsink) approach has a small but definite performance advantage at the cost of flexibility.

I think the problem you're having is probably more about encoder and decoder state; running resilient compressed video pipelines is really tricky.

I use a pipeline like this one on a raspi zero that recovers from basically everything:

      descr: 'rpicamsrc bitrate=10000000 preview=0 ! video/x-h264,width=640,height=480,framerate=10/1,profile=high ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host={dest_url} port={dest_port}'

The config-interval of rtph264pay tells the payloader to regularly include metadata that lets the decoder recover from lost or corrupted frames, or interrupted transmission. Without that repeated metadata, the decoder times-out the stream and waits for a fresh definition of the stream before trying to decode again.
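A matching receiver for that pipeline might look like this (a sketch; the caps and latency values are assumptions):

    gst-launch-1.0 udpsrc port=5000 \
        caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
        ! rtpjitterbuffer latency=100 ! rtph264depay ! h264parse \
        ! avdec_h264 ! videoconvert ! autovideosink

The rtpjitterbuffer absorbs network jitter, and h264parse reassembles the stream from the SPS/PPS headers that config-interval=1 keeps re-sending, which is what lets the decoder pick the stream back up after an interruption.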

It could also be an input latency problem. GStreamer tries very hard to run a pipeline with properly synchronised streams, which means that if a packet arrives with a timestamp from the recent past, GStreamer will drop it. You can sometimes use sync=false at the render end of the pipeline to relax that constraint.
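For example (sketch):

    gst-launch-1.0 udpsrc port=5000 ... ! avdec_h264 ! videoconvert ! autovideosink sync=false

With sync=false the sink renders buffers as soon as they arrive instead of holding them against the pipeline clock, so late-but-valid frames still get displayed.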

I'd check the encoder state using GST_DEBUG=2,videoencode:6 gst-launch-1.0 ..., then check outbound UDP traffic with Wireshark before firing up breakpoints; it might simply be an encoder/decoder configuration issue.
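For example, substituting your actual encoder element name for x264enc (an assumption here):

    # log the encoder element verbosely, everything else at warning level
    GST_DEBUG=2,x264enc:6 gst-launch-1.0 rosimagesrc ! ... 2> encoder.log

    # or watch the outbound RTP packets directly with tcpdump
    sudo tcpdump -i any -n udp port 5000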

I'm interested to hear your findings.