Closed — hbiyik closed this issue 5 months ago
If I understand correctly, the pipeline is:
rkmppdec -> drm_prime -> drmModeSetting -> VOP2 -> hdmi/dp -> screen
Here's an interesting commit: https://github.com/Joshua-Riek/rockchip-kernel/commit/558c82b92a0893fa15862c7434b7a2734d6803eb
And these two are worth a look: https://github.com/Joshua-Riek/rockchip-kernel/blob/develop-6.1/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c#L153 https://github.com/Joshua-Riek/rockchip-kernel/blob/develop-6.1/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
Yes, the pipeline is correct.
[rant on] But no, please not another new modifier, as if there were enough support for the existing ones. [rant off]
I think there is a serious problem; see the log below:
Each plane that VOP2 supports has at most 4K input support, so how is that supposed to render 8K input/output? This is very problematic.
│ ├───"INPUT_WIDTH" (immutable): range [0, 4096] = 0
│ ├───"INPUT_HEIGHT" (immutable): range [0, 4320] = 0
The kernel also complains when 8K data is submitted:
Jan 04 15:41:09 alarm kernel: [drm:vop2_plane_atomic_check] *ERROR* Invalid source: 7680x4320. max input: 4096x4320
Jan 04 15:41:09 alarm kernel: [drm:vop2_plane_atomic_check] *ERROR* Invalid source: 7680x4320. max input: 4096x4320
I read somewhere that 8K output needs to be configured in the device tree before use.
Ah I see, most likely this one: https://github.com/radxa/kernel/blob/linux-5.10-gen-rkr4.1/arch/arm64/boot/dts/rockchip/overlay/rock-5b-hdmi1-8k.dts
I think max_input and output are hardcoded, but if the connected width is > 4096 (4K) then it allows double the width (8192). This might suffice for the width, but the height will still be less than the standard 8K resolution: 8192x4096 < 7680x4320.
So I am not sure if it is required to connect an 8K device to get the input plane to support 8K, but it seems so. Even if so, I don't really get why: power consumption? Heat dissipation? Who knows. Even if it worked that way, I am still not sure it would satisfy the height of 4320. So Rockchip, thanks for making things unnecessarily complicated again.
This might suffice for the width, but the height will still be less than the standard 8K resolution: 8192x4096 < 7680x4320.
.max_input = { 4096, 4320 },
max_input_w = vop2_data->max_input.width; // 4096
max_input_h = vop2_data->max_input.height; // 4320
max_input_w <<= 1; // 8192
// isn't it 8192x4320?
Perhaps it is because 8k is not natively supported by a single hardware unit, but is achieved through multiple hardware units using the "splice mode" of rk3588.
Also, 8k is far less commonly used than 4k, and enabling it even requires overclocking VOP2, so the default is not 8k. RK dev @andyshrk should know more about this.
Yeah, I mixed up w and h; it makes sense now. But for compatibility reasons, having max_input support up to 8K is very reasonable, even though the output is not 8K, so that drm planes can be used without issues.
Andy is using the same w&h values in upstream linux. But as for how to support 8k, it has not yet been finalized.
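For illustration, the limit logic discussed above can be sketched as follows (the helper name is hypothetical, not the actual kernel function; the 4096/4320 limits and the width doubling come from the code and error messages quoted earlier):

```c
#include <stdbool.h>

/* Hypothetical sketch of the vop2 input-size check: the table entry
 * is .max_input = { 4096, 4320 }, and the width limit is doubled
 * when the display mode is wider than 4096 (splice mode). */
static bool vop2_input_ok(int src_w, int src_h, int mode_w)
{
    int max_w = 4096, max_h = 4320;

    if (mode_w > 4096)
        max_w <<= 1; /* 8192 in splice mode */

    return src_w <= max_w && src_h <= max_h;
}
```

With these numbers a 7680x4320 source is rejected on a 4K mode (matching the "Invalid source: 7680x4320. max input: 4096x4320" error above) but fits the spliced 8192x4320 limit.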
https://github.com/nyanmisaka/ffmpeg-rockchip/pull/4/commits/4487c02f5897bc7e92de086e8bb6c3f0675164b7 with this commit all broken images are gone.
I also raised an mpp bug about it https://github.com/rockchip-linux/mpp/issues/509
Tested AFBC mode with NV12, NV16, NV15, NV20 on h264, hevc, vp9 and av1 up to 8K. All work flawlessly, no hiccups whatsoever.
So there are 2 issues left.
The 1st is the 8K scaling issue in the drm plane; I seriously think this is a bug, since even the rk3288 has 8K plane input support. The 2nd is that the video is drawn on top of the OSD controls; I think that's a Kodi issue.
https://github.com/nyanmisaka/ffmpeg-rockchip/commit/4487c02f5897bc7e92de086e8bb6c3f0675164b7 with this commit all broken images are gone.
Well... another magic number from Rockchip. That's where the hor_stride comes from.
Hi:
On 2024-01-05 02:00:15, "Hüseyin BIYIK" @.***> wrote:
Yeah, I mixed up w and h; it makes sense now. But for compatibility reasons, having max_input support up to 8K is very reasonable, even though the output is not 8K, so that drm planes can be used without issues.
To support an 8K plane, vop2 needs 2 hardware planes working in splice mode. Even for an 8K-input, 4K-output mode we still need two hardware planes; one hardware plane does not have the ability/performance to scale an 8K input down to a 4K output.
What's more, for an 8K output we also need two video ports working in splice mode.
Hello @andyshrk, thanks for the clarification.
we also need two video ports working in splice mode.
But for both VPs to work in splice mode (on the 3588, vp0+vp1), do you need the attached adapter to have > 4096 px width? Without really knowing the bits and bytes of the internals, that sounds weird to me from a HW point of view. And it is actually even a downgrade compared to the rk3288.
You should still be able to input 8K to 2 different planes in splice mode, even if the attached device is negotiated to 4K, right?
But for both VPs to work in splice mode (on the 3588, vp0+vp1), do you need the attached adapter to have > 4096 px width?
When you say "the attached adapter", do you mean a monitor? A monitor can work in any display mode it supports; it can be 1080p, 4K, or 8K. We switch splice mode dynamically according to the width of the display mode[0].
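So the decision described here reduces to a width check; a trivial sketch (hypothetical helper; the threshold is the per-VP 4096-pixel limit discussed above):

```c
#include <stdbool.h>

/* splice mode is switched on dynamically when the display mode is
 * wider than a single VP's 4096-pixel limit */
static bool need_splice(int mode_width)
{
    return mode_width > 4096;
}
```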
You should still be able to input 8K to 2 different planes in splice mode, even if the attached device is negotiated to 4K, right?
Yes, but it is really very difficult for a low-level driver to grab a plane that may be used by another userspace application.
@andyshrk Thanks again, this is very helpful.
When rock5b is set to 8k mode, dts is configured as
vp0: hdmi0
vp1: <no endpoint>
vp2: edp/hdmi1
or
vp0: hdmi1
vp1: <no endpoint>
vp2: edp/hdmi0
when it is in 4k mode dts is:
vp0: hdmi0
vp1: hdmi1
vp2: edp
On the opi5 it is similar, since it has only 1 HDMI port. So what I am trying to say is: the display mode is the result of whether the dts configuration allows a connection >4K and whether the connected monitor allows >4K.
If the connected monitor can do >4K then the logic works, but if not, we lose the scaling functionality (8K->4K) which may be available according to the dts config.
Instead of dynamically deciding splicing according to the mode-set width, I think it should be possible to check the dts first: if the splice-capable VPs are connected to only 1 display interface, then splice should be activated, because in the above cases the vp1s are always unused.
This might work for the 3588, but in case it creates problems for other VOP2 variants, maybe it is an even better idea to express the splice capability in a DTS property and have the driver act according to this property instead of dynamically checking the mode-set resolution.
I hope i make sense.
@nyanmisaka I think the offset in the drm descriptor is not as simple as pix * stride for AFBC, because that calculation points to a byte offset into the AFBC-compressed frame, whereas the actual offset is the y pixel offset after AFBC decompression is done. I do not think it is possible to point to a byte offset before decompressing the frame, i.e. to where the nth row of y pixels starts, since this varies with the compression, and there is also an AFBC header at the beginning of the frame.
Instead, I think the offset may be represented as pixels in the AFBC descriptor, but then the renderer must offset the plane, not the framebuffer. I am also not sure this can be achieved on a per-frame basis, because we know that in the AV1 case we have a dynamic offset on each frame. So this is another challenge to tackle.
@hbiyik As per drm_framebuffer.h, these offsets are intended to be used in linear mode.
/**
* @offsets: Offset from buffer start to the actual pixel data in bytes,
* per buffer. For userspace created object this is copied from
* drm_mode_fb_cmd2.
*
* Note that this is a linear offset and does not take into account
* tiling or buffer layout per @modifier. It is meant to be used when
* the actual pixel data for this framebuffer plane starts at an offset,
* e.g. when multiple planes are allocated within the same backing
* storage buffer object. For tiled layouts this generally means its
* @offsets must at least be tile-size aligned, but hardware often has
* stricter requirements.
*
* This should not be used to specifiy x/y pixel offsets into the buffer
* data (even for linear buffers). Specifying an x/y pixel offset is
* instead done through the source rectangle in &struct drm_plane_state.
*/
unsigned int offsets[DRM_FORMAT_MAX_PLANES];
Instead, such x/y pixel offsets should be used in drm_plane.h->drm_plane_state
/**
* @src_y: upper position of visible portion of plane within plane (in
* 16.16 fixed point).
*/
uint32_t src_y;
Therefore, the existing AVDRMFrameDescriptor is not capable of carrying such pixel offset information from the MPP decoder.
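For reference, the SRC_X/SRC_Y plane properties take 16.16 fixed-point values, so a pixel offset would be passed through the atomic API roughly like this (a sketch; the conversion helper is invented, drmModeAtomicAddProperty is the standard libdrm call, and the IDs are placeholders):

```c
#include <stdint.h>

/* KMS SRC_X/SRC_Y plane properties are 16.16 fixed point */
static uint64_t to_src_fixed(uint32_t pixels)
{
    return (uint64_t)pixels << 16;
}

/* e.g. cropping 4 rows off the top of the buffer would be done with
 * something like:
 *   drmModeAtomicAddProperty(req, plane_id, src_y_prop_id,
 *                            to_src_fixed(4));
 * rather than by encoding a byte offset in AVDRMPlaneDescriptor */
```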
Ugh, I was referring to AVDRMPlaneDescriptor or AVDRMFrameDescriptor, but yeah, it would definitely be a hack. It is at least clear to me that AVDRMPlaneDescriptor must be updated, which is an API change in ffmpeg, but maybe that is necessary: first the ptr, and now the src_x & src_y offsets. Maybe we should revisit this PR at some point and get involved with the FFmpeg people. The last change was 6/7 years ago...
Maybe you can try asking in FFmpeg IRC or ffmpeg-devel.
The author of hwcontext_drm is still active too: https://github.com/fhvwy. There have been no use cases for AFBC before.
Thanks for the hint. Let me figure out one last thing: whether it is really impossible to point to the exact byte offset of an AFBC frame. I think those offsets come from the decoder's alignment requirements, but the importing drm device might not have those requirements. My favorite guy icecream95 has some ninja code to decode some parts of AFBC.
Edit: Quickly disproved myself. Tiles are 16px * 16px in AFBC, so a pixel offset of <16 is already inside an existing tile. It may be possible to find the byte offset of a multiple of 16 pixels, but less than 16 is not possible. So it does not help in the rkmpp case.
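The tile argument above can be made concrete with a small sketch (hypothetical helper; the 16x16 superblock size is from the AFBC format):

```c
#include <stdbool.h>

/* AFBC stores pixels in 16x16 superblocks, so a vertical crop only
 * lands on a superblock boundary when it is a multiple of 16 rows;
 * decoder alignment offsets of e.g. 4 or 8 rows fall inside a
 * superblock and cannot be expressed as a byte offset into the
 * compressed buffer */
static bool afbc_row_offset_on_tile_boundary(int y_offset)
{
    return (y_offset % 16) == 0;
}
```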
Instead of dynamically deciding splicing according to the mode-set width, I think it should be possible to check the dts first: if the splice-capable VPs are connected to only 1 display interface, then splice should be activated, because in the above cases the vp1s are always unused.
This might work for the 3588, but in case it creates problems for other VOP2 variants, maybe it is an even better idea to express the splice capability in a DTS property and have the driver act according to this property instead of dynamically checking the mode-set resolution.
Yes, from the information I have obtained from my communication with the IC team, the splice function will be reworked in future SoCs. I think your suggestion makes sense, and we will try to do it in Q1. Due to our heavy workload it may not be that soon, but we will have a try.
@andyshrk Thanks for the great news
the splice function will be reworked in future SoCs ... we will try to do it in Q1
I am confused about one thing: is having splice enabled only when the mode is > 4K a driver restriction or a VOP2 hardware limitation? I had the impression that this was a driver limitation, but that is just me guessing.
If this is a hardware limitation, then I guess it would mean that existing SoCs won't receive such an improvement.
From the hardware side, each plane/window -> CRTC/VP only supports max 4K input -> output. We support 8K input/output by splicing two planes and two CRTCs. This is done in software, and it works because VP0 and VP1 support splice in the hardware design.
In splice mode (mode > 4K), for example Cluster0 + Cluster1 splicing for an 8K plane, Cluster0 must be attached to VP0 and Cluster1 to VP1.
When the mode is < 4K, VP1 is not in use, so if we want to use Cluster0 + Cluster1 for splice we would have to move Cluster1 from VP1, but it is very difficult to move a plane from one CRTC to another on rk356x/rk3588 due to the hardware design. So this is a little different from splice when the mode is > 4K.
And from the drm side, it is rare to see a low-level driver grab a plane from one CRTC to another; binding which plane to which CRTC is always done by userspace.
So this is a software thing, but it is also a hardware limitation.
Anyway, we will have a try: try to give VP0 an 8K input if VP1 is disabled, when it has a free plane.
I think I somehow got it: the hardware actually expects >4K to splice the VPs, but what you will try is to maybe manually activate splicing when <4K, organize the planes manually in the driver, feed them to the spliced VP0+VP1, and maybe expose another plane to userspace to get the actual 8K plane input. Or something like this; that's why you are saying that to really address the issue the VOP2 core actually needs to be updated.
@andyshrk I have noticed another problem:
I am rendering on 1 Primary Plane with X/ARGB2101010 and zpos = 2; the buffer is an OSD layer with controls, and the rest is transparent. And 1 Cursor Plane with any format, let's say NV12, with zpos = 1; the buffer is the video plane.
So the primary plane is on top of the cursor plane by zpos. The rest of the planes are disabled: CRTC_ID=0, FB_ID=0.
In this case, the transparent parts of A/XRGB2101010 are not blended and are shown as a black area. When I set the format to A/XRGB8888, the transparency blends and I can see the background video layer.
I have tested with the rkr4.1 branch, rendering directly with KMS; this is the case with Kodi. Is this a known issue, or has it been fixed in any later version?
PS: The same issue happens when I use 2 primary planes and no cursor planes as well, i.e. foreground plane: zpos=2, primary, X/ARGB2101010; background plane: zpos=1, primary, YU08 with AFBC. The transparency of X/ARGB2101010 does not blend and gives a black screen.
please show the output of :
/sys/kernel/debug/dri/0/summary
and if you can write your XRGB2101010 data to a file, please upload it as well.
@andyshrk
below is the NV12 video + AR30 osd on top
Video Port0: DISABLED
Video Port1: ACTIVE
Connector: HDMI-A-2
bus_format[2025]: YUV8_1X24
overlay_mode[1] output_mode[f] color_space[3], eotf:0
Display mode: 1920x1080p60
clk[148500] real_clk[148500] type[48] flag[5]
H: 1920 2008 2052 2200
V: 1080 1084 1089 1125
Cluster0-win0: ACTIVE
win_id: 0
format: AR30 little-endian (0x30335241)[AFBC] SDR[0] color_space[0] glb_alpha[0xff]
rotate: xmirror: 0 ymirror: 0 rotate_90: 0 rotate_270: 0
csc: y2r[0] r2y[1] csc mode[1]
zpos: 1
src: pos[0, 0] rect[1920 x 1080]
dst: pos[0, 0] rect[1920 x 1080]
buf[0]: addr: 0x0000000001012000 pitch: 7680 offset: 0
Esmart1-win0: ACTIVE
win_id: 10
format: NV12 little-endian (0x3231564e) SDR[0] color_space[0] glb_alpha[0xff]
rotate: xmirror: 0 ymirror: 0 rotate_90: 0 rotate_270: 0
csc: y2r[0] r2y[0] csc mode[0]
zpos: 0
src: pos[0, 0] rect[720 x 480]
dst: pos[240, 0] rect[1440 x 1080]
buf[0]: addr: 0x00000000029bd000 pitch: 720 offset: 0
buf[1]: addr: 0x00000000029bd000 pitch: 720 offset: 345600
Video Port2: DISABLED
Video Port3: DISABLED
this is the YU08 AFBC video and AR30 osd on top
Video Port0: DISABLED
Video Port1: ACTIVE
Connector: HDMI-A-2
bus_format[2025]: YUV8_1X24
overlay_mode[1] output_mode[f] color_space[3], eotf:0
Display mode: 1920x1080p60
clk[148500] real_clk[148500] type[48] flag[5]
H: 1920 2008 2052 2200
V: 1080 1084 1089 1125
Cluster0-win0: ACTIVE
win_id: 0
format: YU08 little-endian (0x38305559)[AFBC] SDR[0] color_space[0] glb_alpha[0xff]
rotate: xmirror: 0 ymirror: 0 rotate_90: 0 rotate_270: 0
csc: y2r[0] r2y[0] csc mode[0]
zpos: 0
src: pos[0, 4] rect[1920 x 1080]
dst: pos[0, 0] rect[1920 x 1080]
buf[0]: addr: 0x000000000288d000 pitch: 2880 offset: 0
Cluster1-win0: ACTIVE
win_id: 2
format: AR30 little-endian (0x30335241)[AFBC] SDR[0] color_space[0] glb_alpha[0xff]
rotate: xmirror: 0 ymirror: 0 rotate_90: 0 rotate_270: 0
csc: y2r[0] r2y[1] csc mode[1]
zpos: 1
src: pos[0, 0] rect[1920 x 1080]
dst: pos[0, 0] rect[1920 x 1080]
buf[0]: addr: 0x0000000001012000 pitch: 7680 offset: 0
Video Port2: DISABLED
Video Port3: DISABLED
I have tried to dump the AR30 buffer but couldn't find a way to do it. For your information, I'm testing this using mesa-panfork with Kodi's GBM interface, so mesa should be irrelevant, I hope.
Our BSP driver has an entry to do the dump. Enable ROCKCHIP_DRM_DEBUG by setting: CONFIG_ROCKCHIP_DRM_DEBUG=y
Run this command at the point you display AR30 + video :
echo dump > /sys/kernel/debug/dri/0/video_port0/vop_dump/dump
the file will be written to the data/vop_buf/ directory (see the kernel dmesg)
And be careful: CONFIG_ROCKCHIP_DRM_DEBUG is for debugging only; you should disable it in a real product.
Sorry, I remembered this issue when talking to my colleague: alpha is not supported with AR30 on rk3588. We should only report the XR30 format to user space.
And another thing: XR30 is only supported in AFBC format, not linear.
And another thing: XR30 is only supported in AFBC format, not linear.
That's interesting, but with these plane settings there is actually no AFBC in AR30, yet it displays correctly. Only the alpha channel was missing; the rest of the image displays well on the plane.
From the dri/summary you dumped, the AR30 is in AFBC format: AR30 little-endian (0x30335241)[AFBC] SDR[0] color_space[0] glb_alpha[0xff]
Hmm, that's also weird, because it cannot be: those graphics are OSD/GUI graphics generated by the Kodi application, and Kodi cannot generate AFBC textures. I will check that.
You can dump the plane data as I said before, and I can check whether the data is AFBC or not.
@andyshrk I think I understood what's going on. Kodi uses EGL to create textures according to the plane's supported formats and modifiers. Luckily its logic fell into the area where it matched XR30 with the AFBC modifier, so it created the texture with AFBC. So it makes sense.
However, I am confused about one thing:
And another thing: XR30 is only supported in AFBC format, not linear.
Does that mean both XR30 and AR30 are not supported in Esmart windows? Esmart does not have AFBC modifier support on the 3588.
alpha is not supported with AR30 on rk3588
Is this the case for both Esmart and Cluster windows, i.e. only XR30 is supported on both ESMART and CLUSTER?
@andyshrk I have also noticed, both in the vop2 driver code and in the 3588 TRM, that there is a performance bottleneck when scaling down spliced Cluster planes: a scale factor of at most 1.2 is supported for scaling down. So enabling splice below 4K to benefit from the increased MAX_INPUT of the planes is not a good idea, because the plane output would then be <4K and more downscaling would be needed. Performance would simply suffer.
I do not know how conventional this idea is, but maybe you could leverage the RGA3 cores for scaling to help VOP2 overcome this. Of course this would introduce more delay and a lot more complexity in the driver, and I am not sure it is even possible.
There is no X/ARGB30 in Esmart format list
Yes, a large scale-down often runs into performance issues. We often use the RGA or GPU to handle such large scaling, but this is not done in the vop driver; it is done in userspace (using the librga API or GLES) before committing a plane to drm.
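To put a number on it (a sketch; the ~1.2x per-plane downscale limit is the TRM figure quoted earlier in the thread):

```c
/* required downscale factor for a given source/destination width;
 * 8K -> 4K is 2.0x, well beyond the ~1.2x a spliced Cluster plane
 * can do, hence the userspace RGA/GLES fallback described above */
static double downscale_factor(int src, int dst)
{
    return (double)src / (double)dst;
}
```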
@nyanmisaka This PR is tested and ready to be merged from my POV. I changed the functionality to be more generic and cleaner; tested to be working fine.
If you think it is ok, this can go in.
Merged in 99ea69d
This helps video players which do not support AVOptions (i.e. Kodi) to use AFBC mode.
With this change and this PR in Kodi, I initially got AFBC output. However, I think there are some problems:
Output of AV1:![20240104_142344](https://github.com/nyanmisaka/ffmpeg-rockchip/assets/24766436/51c45958-a00c-41d0-9c0f-dcb70e1b0b6e)
Output of H264 & Hevc![20240104_142328](https://github.com/nyanmisaka/ffmpeg-rockchip/assets/24766436/2b128850-e0b5-4750-a9ee-bf68e90ea998)
Output of VP9:![20240104_142127](https://github.com/nyanmisaka/ffmpeg-rockchip/assets/24766436/aeec4ff7-733f-4e9d-bff4-c2e9a807e3a8)
With VP9 you can see it is almost correct except for some stride issue; I think it should be divided by 4. However, with H264, HEVC and AV1 there seem to be other issues; I suspect some of those modifiers might differ from decoder to decoder.
As always please do not merge yet :)