Nebred opened 5 years ago
We ran into a similar issue, but found that everything works just fine if we use MJPEG input instead of an H264 stream.
However, what we could not figure out so far is where the actual decoder responsible for decoding our incoming RTSP stream is selected, which prevents us from digging further into this.
@j1elo: Can you give us a hint here?
@mneundorfer I had a look at the PlayerEndpoint code to refresh my memory. It seems that the actual decoding is still done inside the `uridecodebin` GStreamer element, which is what gets used as the main "all-in-one" decoding element for the PlayerEndpoint.
The `uridecodebin` element by default decodes up to `video/x-raw`. This is controlled with the `caps` property. The only case where we touch that property is when PlayerEndpoint is created with the `useEncodedMedia()` modifier enabled, in order to just depayload the RTP packets and otherwise pass the encoded video as-is to the Kurento pipeline.
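As a rough illustration (not taken from the Kurento sources), the difference can be sketched with two plain `gst-launch-1.0` pipelines; the URL and caps values are assumptions for the example:

```shell
# Default behaviour: uridecodebin decodes everything down to raw media.
gst-launch-1.0 uridecodebin uri="rtsp://camera/stream" ! videoconvert ! autovideosink

# Rough equivalent of useEncodedMedia(): restrict the caps property so the
# stream is only depayloaded, not decoded, and handle decoding downstream.
gst-launch-1.0 uridecodebin uri="rtsp://camera/stream" caps="video/x-h264" \
    ! h264parse ! avdec_h264 ! autovideosink
```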
The only thing I can think of is that you edit `kmsagnosticcaps.h` and change

```c
#define KMS_AGNOSTIC_RAW_VIDEO_CAPS \
    "video/x-raw;"
```

to

```c
#define KMS_AGNOSTIC_RAW_VIDEO_CAPS \
    "video/x-raw(ANY);"
```

This should match both the old `video/x-raw` and the new `video/x-raw(memory:VASurface)` in places such as the agnosticbin element.
Note that `video/x-raw` also appears hardcoded in other files, such as `kmsenctreebin.c` (1, 2), so you should grep inside kms-omni-build and make sure to check all places.
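For example, a quick search from the root of a kms-omni-build checkout (the path and file filters are just an illustration) would be:

```shell
# List every C source or header that still hardcodes the raw caps string:
grep -rn "video/x-raw" . --include="*.c" --include="*.h"
```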
Thanks for the quick reply @j1elo!
Sadly, this did not help: still no video playing when receiving H264 as input from the camera. I could not make any progress on that so far, but will give feedback as soon as I find something new... For anyone else stumbling across this: a workaround is to grab a JPEG stream from the camera.
In addition, I just noticed interesting behavior when using `vaapih264enc` in conjunction with dynamically adjusting the resolution (the initial problem being that, as opposed to `openh264enc`, `vaapih264enc` does not seem to do that automatically).
Inside `kmsenctreebin.c::kms_enc_tree_bin_set_target_bitrate`, I went ahead and inserted a code snippet that, based on the currently measured bitrate, sets a new width and height, i.e. using a smaller resolution in case the bitrate drops under a certain threshold (by using `g_object_set` to update the `capsfilter`). This results in fun image errors, such as the following.
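For reference, the experiment looked roughly like the following sketch; the `filter` variable and the 640x360 values are illustrative, not the actual patch:

```c
/* Sketch: shrink the output resolution when the measured bitrate drops
 * below a threshold, by replacing the caps on an existing capsfilter.
 * "filter" is assumed to be the capsfilter element in the encoder branch. */
GstCaps *caps = gst_caps_new_simple ("video/x-raw",
    "width", G_TYPE_INT, 640,
    "height", G_TYPE_INT, 360,
    NULL);
g_object_set (filter, "caps", caps, NULL);
gst_caps_unref (caps);
```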
The root cause, from what I could observe so far, seems to be that the resolution change is not synced correctly across the pipeline; at least the resulting stream in the browser switches between the old and the new resolution. Since I cannot reproduce this behavior when setting up a similar pipeline from scratch, I suspect there is something I am missing about how Kurento works.
Meanwhile, I am pretty sure that the root cause is that somewhere in the pipeline, the originally set resolution (i.e. the initial `capsfilter` setting) is still present. I observe that any newly applied resolution works just fine, except that there are some frames in between to which the old resolution seems to be applied, resulting in the green/grey images.
I will provide a patch for `kmsenctreebin.c`, in case anyone is interested and wants to take a look...
We figured it out: the problem is that `vaapih264enc` produces `stream-format: avc`, whereas the software encoder produces `stream-format: byte-stream`. When adding a `capsfilter` behind the `vaapih264enc` which enforces `byte-stream`, everything works just fine.
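In plain GStreamer terms the fix looks something like this sketch; the surrounding elements are illustrative, and only the caps string after the encoder is the actual point:

```shell
# Force vaapih264enc to negotiate byte-stream output instead of avc by
# placing a capsfilter right behind it:
gst-launch-1.0 videotestsrc ! vaapih264enc \
    ! "video/x-h264,stream-format=byte-stream" \
    ! h264parse ! avdec_h264 ! autovideosink
```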
Since this does not happen in our small pipeline built from scratch, I suspect it has something to do with how `vaapih264enc` and the next element in the pipeline negotiate their format.
After a few more weeks of testing, I think we have got this working for `vaapih264enc`. @j1elo, are you in general interested in a PR which integrates VAAPI encoding for H264?
Of course, we would love to offer hardware-accelerated transcoding in Kurento.
Hi @mneundorfer, did you get any new insights on the decoding issue?
In https://github.com/Kurento/kms-elements/pull/22 you mention that:
> As an FYI: There does not seem to be an issue with `vaapidecode` in general, at least everything works fine when
>
> - the `uridecodebin` is replaced by a dedicated pipeline consisting of `rtspsrc` and `vaapih264dec`
> - `uridecodebin` is used from a plain GStreamer pipeline (e.g. `gst-launch-1.0 uridecodebin uri="rtsp://..." ! vaapih264enc ! avdec_h264 ! autovideosink`)
What you say is interesting, because the decoding issues in Kurento seem to be caused exclusively by a problem inside the `uridecodebin` itself. I say this based on the first point you raise:
> the `uridecodebin` is replaced by a dedicated pipeline consisting of `rtspsrc` and `vaapih264dec`
What I understand from this is that you have manually changed the Kurento source code to remove the `uridecodebin` and instead configure a pair of (`rtspsrc`, `vaapih264dec`), and this made the Kurento pipeline work correctly. However, if you revert the changes and leave the `uridecodebin`, then the problem happens again.
We need to use `uridecodebin` because PlayerEndpoint is able to get its input from a variety of sources, such as HTTP, RTSP, local files, and actually anything that `uridecodebin` supports.
Even though the Kurento pipelines are pretty long and sometimes overwhelming, they are conceptually simple: the most complex thing done in this case is decoding the input stream and re-encoding it before passing it to the WebRTC elements.
Here I'm attaching a couple of Graphviz dot files that show a PlayerEndpoint-to-WebRtcEndpoint pipeline, which might help you understand what is going on inside Kurento. You can open these with the xdot tool:
playerinternal-raw-transcoding shows the internal pipeline that PlayerEndpoint uses to create an `uridecodebin` element and decode all media, which is the default thing to do. If you enable the useEncodedMedia property in PlayerEndpoint, then it won't try to decode the media. The final result of this process is passed on to an `appsink` element.
Then, player-raw-transcoding shows the rest of the pipeline, getting its data from the leftmost `appsrc` and processing it all the way towards the WebRTC part at the right.
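If you don't have xdot at hand, the graphs can also be rendered to static images with plain Graphviz; the file name here assumes the attachments above:

```shell
# Interactive viewer:
xdot player-raw-transcoding.dot

# Or render to a static image:
dot -Tpng player-raw-transcoding.dot -o player-raw-transcoding.png
```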
The pair `appsink`/`appsrc` is used because all incoming buffers are checked for valid timestamps in the process_sample function, so that's why a `gst-launch` command cannot be exactly equivalent to the pipeline inside Kurento. However, I'd be surprised if this function were really causing the decoding problems you observed.
Hey @j1elo,
Sorry no, I didn't.
> what I understand from this is that you have manually changed the Kurento source code to remove the `uridecodebin` and instead configure a pair of (`rtspsrc`, `vaapih264dec`), and this made the Kurento pipeline work correctly. However, if you revert the changes and leave the `uridecodebin`, then the problem happens again.
That is precisely it; this is exactly what happened.
> We need to use `uridecodebin` because PlayerEndpoint is able to get inputs from a variety of resources, such as HTTP, RTSP, local files, and actually anything that `uridecodebin` supports.
Of course, removing `uridecodebin` in favor of `rtspsrc` isn't an option.
> Here I'm attaching a couple of Graphviz dot files that show a PlayerEndpoint-to-WebRtcEndpoint pipeline, which might help you understand what is going on inside Kurento.
Yeah, generating dot graphs for the internal state of Kurento is a great idea, I already used that before to get more insights. However, it does not really help me in understanding the root issue here. Might be my very limited GStreamer knowledge.
Anyway, I attached another dot file (please rename the extension after download; GitHub does not allow `dot` files...) showing the pipeline with the `rtspsrc`/`vaapih264dec` combination in place. Maybe it is of some use to you.
Hi there, any news about this one? I'm actually more interested in NVIDIA acceleration (since Intel servers usually don't come with a GPU), but I guess it should be the same idea.
Description: I try to activate hardware acceleration in the Kurento experimental branch. Unfortunately the pipeline seems to hang somewhere and I am not able to see how I could fix it.
Hardware setup:
KMS Version:
Ubuntu Version: 18.04
Other libraries versions:
What steps will reproduce the problem?
What is the expected result? Kurento is running fine
What happens instead? When restarting Kurento (Step 5) it is no longer working (Video not displayed).
Please provide any additional information below
I will attach logs of the two Kurento runs. If you compare them, most of the startup process is identical. The first major difference I could identify is this:
As you can see, the hardware-accelerated version uses an NV12 stream format, and it seems that Kurento is not able to handle this stream correctly.
The hardware-accelerated `uridecodebin` itself is working fine; I used it in a `gst-launch` pipeline and received video data.
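A minimal standalone check along those lines might look like this (assuming GStreamer with the VAAPI plugins installed; the URL is illustrative):

```shell
# Decode via uridecodebin (which picks the VAAPI decoder when available)
# and display the result:
gst-launch-1.0 uridecodebin uri="rtsp://camera/stream" ! videoconvert ! autovideosink
```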
Do you have any idea if and how we could make the VAAPI elements work in Kurento?
Log with hardware acceleration
Log without hardware acceleration