System Information
------------------
Raspberry Pi 4 Model B Rev 1.1
The Pi4 doesn't support the firmware GLES/EGL interface, therefore the `egl_render` component is disabled. Otherwise it would fail when enabling the component, so better to fail early.
Someone needs to look again at disentangling the GStreamer use of `egl_render` within `omxh264dec`, but the other approach is to use `v4l2h264dec` instead.
> The Pi4 doesn't support the firmware GLES/EGL interface, therefore the `egl_render` component is disabled. Otherwise it would fail when enabling the component, so better to fail early.
Okay, that makes sense. Still, `OMX_ErrorInsufficientResources` seems like a kinda weird error to return in that case. I think `OMX_ErrorComponentNotFound` would be more fitting, even though it's not that precise either.
`v4l2h264dec` works as expected! Thanks!
> Okay, that makes sense. Still, `OMX_ErrorInsufficientResources` seems like a kinda weird error to return in that case.
The component module is there, but it deliberately fails the create as it checks the platform. The return value from create within the Broadcom framework is a pointer to the component, or NULL; NULL is translated to `OMX_ErrorInsufficientResources`.
The module choice is done through a DLL-loading mechanism, so there is no central list that we can nobble to drop `egl_render` from on the Pi4. I also don't like messing with core code unless really necessary, particularly when it comes to IL!
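For illustration, the translation being described has roughly this shape. The names below are hypothetical stand-ins, since the Broadcom IL core itself is not public:

```c
#include <stddef.h>

/* Hypothetical stand-ins for the closed-source Broadcom IL core, only to
 * illustrate the NULL -> error translation described above. */
typedef struct component component_t;
typedef enum {
    OMX_ErrorNone                  = 0,
    OMX_ErrorInsufficientResources = 0x80001000  /* the real IL value */
} OMX_ERRORTYPE;

/* The per-module create; on a Pi 4 the egl_render module's platform check
 * makes this return NULL. */
extern component_t *module_create(const char *name);

OMX_ERRORTYPE get_handle(component_t **out, const char *name)
{
    component_t *comp = module_create(name);
    if (comp == NULL)  /* "create failed" carries no reason... */
        return OMX_ErrorInsufficientResources;  /* ...so this is all callers see */
    *out = comp;
    return OMX_ErrorNone;
}
```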
@6by9 What is the replacement for the `egl_render` component on a Pi 4? The `egl_render` component also handled YUV -> RGB conversion before, it seems. There must be a faster way of rendering into a GL texture / EGL image than manual conversion followed by `glTexImage2D`?
Btw, it seems GStreamer forgets to do that YUV -> RGB conversion as well, since the colors are really weird and not at all what they should be when using `glimagesink`.
Import the buffers into EGL using dmabufs and `eglCreateImageKHR`. The Pi4 has a texture formatter hardware block that can take a YUV image in and create a texture from it.
It may have bitrotted, but https://github.com/6by9/drm_mmal/blob/x11_export/drm_mmal.c or https://github.com/6by9/drm_mmal/blob/x11/drm_mmal.c should work as very basic examples of merging MMAL and Mesa EGL (originally written by Eric). The source of the dmabuf is largely irrelevant, whether it is a dma-heap, a V4L2 buffer, vcsm-cma, DRM, or something else. EGL shouldn't even require it to be contiguous, as V3D has an IOMMU, but I wouldn't put money on that.
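A minimal sketch of that import path, assuming an NV12 dmabuf with chroma directly after luma, and that `EGL_EXT_image_dma_buf_import` / `GL_OES_EGL_image_external` are present (the helper name, the plane layout, and the missing error handling are all simplifications):

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>

/* Hypothetical helper: wrap an NV12 dmabuf in an EGLImage and bind it to an
 * external texture. fd/width/height/stride come from whatever produced the
 * buffer (V4L2, vcsm-cma, DRM, ...). */
GLuint texture_from_nv12_dmabuf(EGLDisplay dpy, int fd,
                                EGLint width, EGLint height, EGLint stride)
{
    const EGLint attrs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_NV12,
        /* Luma plane at the start of the buffer... */
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        /* ...interleaved chroma right after it (assumed layout; the real
         * offset depends on the producer). */
        EGL_DMA_BUF_PLANE1_FD_EXT, fd,
        EGL_DMA_BUF_PLANE1_OFFSET_EXT, stride * height,
        EGL_DMA_BUF_PLANE1_PITCH_EXT, stride,
        EGL_NONE
    };

    /* Both entry points are extensions, so look them up at runtime. */
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                     EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    image_target_texture(GL_TEXTURE_EXTERNAL_OES, image);
    return tex;
}
```

Sampled through a `samplerExternalOES` in the shader, the texture comes back as RGB, so no manual YUV -> RGB pass or `glTexImage2D` upload is needed.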
Thank you very much! That helped me resolve my issue.

Though I didn't do it exactly the way you suggested: instead of rendering the video to a texture, I implemented hardware plane support so I can render "around" the video output of omxplayer. HW plane support is basically impossible to achieve with the framework I'm using, but thanks to some hacks involving EGLImages I was able to work around that. Your comment pointed me in the right direction.
Btw, is it normal that I have to reflect the y-axis of the DRM overlay plane to get it to draw correctly? It's somehow scanned out upside down with the default settings. Also, is there some reason the number of available overlay planes is so limited?
What is it that you're rendering? GL output? In which case, yes, as GL works with bottom left as the origin, whilst DRM uses top left.
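For a GL buffer scanned out on a DRM plane, one way to compensate is the plane's "rotation" property, if the driver exposes one. A hedged libdrm sketch (hypothetical helper; `fd` and `plane_id` come from the usual plane enumeration):

```c
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Walk the plane's properties looking for "rotation" and, if found, set a
 * vertical reflection to compensate for GL's bottom-left origin. */
int set_y_reflect(int fd, uint32_t plane_id)
{
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
    int ret = -1;

    for (uint32_t i = 0; props && i < props->count_props; i++) {
        drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);
        if (prop && strcmp(prop->name, "rotation") == 0) {
            ret = drmModeObjectSetProperty(fd, plane_id, DRM_MODE_OBJECT_PLANE,
                                           prop->prop_id,
                                           DRM_MODE_ROTATE_0 | DRM_MODE_REFLECT_Y);
        }
        drmModeFreeProperty(prop);
    }
    drmModeFreeObjectProperties(props);
    return ret;
}
```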
If you look at the master branch of drm_mmal, it takes MMAL buffers from the video_decode component and pokes them straight into DRM. No y-reflect required there.
The number of overlays can be increased, but there are some limits. DRM has a limit of 32 planes total because it uses a u32 bitmask to handle them. You have to have a primary and cursor plane per crtc. Other planes can be left as available for any crtc and this is what the beta full KMS driver does (it has 6 crtcs, so otherwise hits that limit very fast).
FKMS does take it to the minimum and only adds one overlay, but that would only require a quick loop at https://github.com/raspberrypi/linux/blob/rpi-5.4.y/drivers/gpu/drm/vc4/vc4_firmware_kms.c#L1742 to create multiple overlay planes per crtc. You'd need to pass in a zpos or overlay number to `vc4_fkms_plane_init` so that it can set up `default_zpos` sensibly. I'm open to a PR for that, otherwise it'll get added to my long list of jobs. The firmware side of FKMS allows for up to 16 planes spread over both displays, although that could be increased if really needed (it's only a define).
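As a quick check of what a given driver actually exposes, planes can be enumerated with plain libdrm (a hypothetical helper, not from the thread):

```c
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* List the planes the driver exposes on an open DRM fd. Without
 * DRM_CLIENT_CAP_UNIVERSAL_PLANES only overlay planes are reported; with it,
 * primary and cursor planes show up as well. */
void dump_planes(int fd)
{
    drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

    drmModePlaneRes *res = drmModeGetPlaneResources(fd);
    if (!res)
        return;

    printf("%u planes\n", res->count_planes);
    for (uint32_t i = 0; i < res->count_planes; i++) {
        drmModePlane *p = drmModeGetPlane(fd, res->planes[i]);
        if (!p)
            continue;
        /* possible_crtcs is a bitmask of which crtcs may use this plane. */
        printf("plane %u: possible_crtcs=0x%08x, %u formats\n",
               p->plane_id, p->possible_crtcs, p->count_formats);
        drmModeFreePlane(p);
    }
    drmModeFreePlaneResources(res);
}
```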
> What is it that you're rendering? GL output? In which case, yes, as GL works with bottom left as the origin, whilst DRM uses top left.
Okay, that makes sense. Yeah, it's GL output.
> FKMS does take it to the minimum and only adds one overlay, but that would only require a quick loop at https://github.com/raspberrypi/linux/blob/rpi-5.4.y/drivers/gpu/drm/vc4/vc4_firmware_kms.c#L1742 to create multiple overlay planes per crtc. You'd need to pass in a zpos or overlay number to `vc4_fkms_plane_init` so that it can set up `default_zpos` sensibly. I'm open to a PR for that, otherwise it'll get added to my long list of jobs.
Okay. I think multiple overlay planes would be nice to have, though they're not strictly necessary for me right now. If I find a use for them at some point, I'll write a PR.
> The firmware side of FKMS allows for up to 16 planes spread over both displays, although that could be increased if really needed (it's only a define).
Is it also the same with the DSI display? (I assume you meant the HDMI displays.) Does this 16-plane firmware-side limitation also affect the planes used by OpenMAX or MMAL? In other words, could I make use of 16 DRM planes and at the same time use omxplayer to display some video?
16 planes over whichever 2 displays have been initialised. DSI+HDMI0, DPI+HDMI0, or HDMI0+HDMI1.
IL, MMAL, and DispmanX APIs all create their own client instance of DispmanX, so you can add as many layers as you can get the hardware to handle via that route. This limit of 16 planes is only through the mailbox API that FKMS uses.
We will be looking at shifting the main focus towards the pure ARM KMS driver (full KMS) at some point, at which point IL, MMAL, and DispmanX stop working, and DRM is the only route to putting images on the screen.
> 16 planes over whichever 2 displays have been initialised. DSI+HDMI0, DPI+HDMI0, or HDMI0+HDMI1.
>
> IL, MMAL, and DispmanX APIs all create their own client instance of DispmanX, so you can add as many layers as you can get the hardware to handle via that route. This limit of 16 planes is only through the mailbox API that FKMS uses.
Okay, that makes sense. Thanks for explaining it.
> We will be looking at shifting the main focus towards the pure ARM KMS driver (full KMS) at some point, at which point IL, MMAL, and DispmanX stop working, and DRM is the only route to putting images on the screen.
I'm looking forward to full KMS!
Description
-----------
Creating the `egl_render` component fails with `OMX_ErrorInsufficientResources`.
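For illustration, a direct IL call of the kind that fails; a hedged sketch using the standard Broadcom headers from `/opt/vc` (callbacks are left empty since the call fails before they are used, and the usual `/opt/vc` include paths plus `-lopenmaxil -lbcm_host` are assumed for building):

```c
#include <stdio.h>
#include <bcm_host.h>
#include <IL/OMX_Core.h>

int main(void)
{
    bcm_host_init();
    OMX_Init();

    OMX_HANDLETYPE handle = NULL;
    OMX_CALLBACKTYPE callbacks = { 0 };
    OMX_ERRORTYPE err = OMX_GetHandle(&handle,
                                      (OMX_STRING)"OMX.broadcom.egl_render",
                                      NULL, &callbacks);

    /* On a Pi 4 this is expected to print 0x80001000, i.e.
     * OMX_ErrorInsufficientResources, as the component create fails. */
    printf("OMX_GetHandle(egl_render) = 0x%08x\n", (unsigned)err);

    if (err == OMX_ErrorNone)
        OMX_FreeHandle(handle);
    OMX_Deinit();
    bcm_host_deinit();
    return 0;
}
```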
To Reproduce
------------
I'm afraid I can't give a reduced example for this one. I encountered this when trying to play the `/opt/vc/src/hello_pi/hello_video/test.h264` video (without the desktop) with GStreamer's `omxh264dec`. This needs `gstreamer1.0-tools` and `gstreamer1.0-omx` to be installed.

Expected behaviour
------------------
Expected is a return value of `OMX_ErrorNone`, since the GPU should have enough memory. I tried allocating 16, 128, 256 and 512 MB to the GPU (I'm not sure the Pi 4 cares about how much memory is allocated to its GPU), all with the same error. I also tried using the legacy graphics driver, with exactly the same result.

Actual behaviour
----------------
In reality, `OMX_ErrorInsufficientResources` is returned. I'm not sure where exactly the failed allocation occurs, but it looks like it's in this procedure.

Also, some `vcdbg` outputs before and after running gstreamer:

- `vcdbg reloc stats`, before executing gstreamer and after
- `vcdbg reloc small`, before and after
- `vcdbg malloc`, before and after
- `vcdbg log msg`
- `vcdbg log assert`
- `vcdbg log ex`
Additional context
------------------
If it helps, the line calling `OMX_GetHandle` inside GStreamer is this one, which in turn is invoked by this line.

Some more logs from gstreamer, if needed: