hbiyik opened 1 year ago
I also noticed corrupted output with that flag. But the performance boost looks promising.
Does MPP claim to support async encoding internally? For comparison, the QSV encoder on Intel does support async encoding; mfxVideoParam.AsyncDepth can be passed to the runtime for doing this.
yes, from rk_mpi.h. but i think they mean async in terms of backlog management only, not core management.
I will go on a vacation for the next week; when i return, i will dig into it. it should not be so hard to achieve multithreading in ffmpeg, the code of rkmpp_* is fairly solid.
/**
* @brief send video frame to encoder only, async interface
* @param[in] ctx The context of mpp, created by mpp_create() and initiated
* by mpp_init().
* @param[in] frame The input video data, its usage can refer mpp_frame.h.
* @return 0 and positive for success, negative for failure. The return
* value is an error code. For details, please refer mpp_err.h.
*/
MPP_RET (*encode_put_frame)(MppCtx ctx, MppFrame frame);
/**
* @brief get encoded video packet from encoder only, async interface
* @param[in] ctx The context of mpp, created by mpp_create() and initiated
* by mpp_init().
* @param[out] packet The output compressed data, its usage can refer mpp_packet.h.
* @return 0 and positive for success, negative for failure. The return
* value is an error code. For details, please refer mpp_err.h.
*/
MPP_RET (*encode_get_packet)(MppCtx ctx, MppPacket *packet);
Things should be easier than expected. The multi-threaded flag messes up the prediction frames; you can get normal output after turning prediction off (-g:v 1 -keyint_min:v 1) but have to tolerate the side effect of I-frames only.
Also, when scaling is used, RGA fails. It works fine when only format conversion (10->8) is enabled.
[hevc_rkmpp_encoder @ 0xaaaac7f8cf70] Reconfigured with w=1920, h=1080, format=nv12.s speed=N/A
[hevc_rkmpp_encoder @ 0xaaaac7f565b0] Reconfigured with w=1920, h=1080, format=nv12.
RgaBlit(1465) RGA_BLIT fail: Invalid argument
RgaBlit(1466) RGA_BLIT fail: Invalid argument
handl-fd-vir-phy-hnd-format[0, 53, (nil), (nil), 0, 8192]
rect[0, 0, 3840, 2160, 4800, 2160, 8192, 0]
f-blend-size-rotation-col-log-mmu[8192, 0, 0, 0, 0, 0, 1]
handl-fd-vir-phy-hnd-format[0, 20, (nil), (nil), 0, 2560]
rect[0, 0, 3840, 2160, 3840, 2160, 2560, 0]
f-blend-size-rotation-col-log-mmu[2560, 0, 0, 0, 0, 0, 1]
This outputs the user parameters when the RGA blit call fails.
Now we can get 8k30/4k120/1080p480 transcoding speed.
[hevc_rkmpp_encoder @ 0xaaaac13a28a0] Reconfigured with w=3840, h=2160, format=nv12.
frame= 2655 fps=130 q=-0.0 size= 53504kB time=00:00:22.13 bitrate=19798.1kbits/s speed=1.08x
Have a good time :p
Because each thread has its own encoder and they don't know each other's frames, setting gop=1 is a smart workaround :)
I think i have a better idea. I don't know why no one thought of this before.
https://github.com/JeffyCN/libv4l-rkmpp
This thing wraps mpp to the libv4l m2m interface. Considering ffmpeg already has v4l2m2m based codec interfaces, using this should already provide ffmpeg support even with vanilla ffmpeg; just select the *_v4l2m2m codec variants. If this works i think there might not be a reason for this fork to exist..
If this works, that code can be separated from v4lutils, and can be implemented as a standalone service binary using libfuse to intercept ioctl calls, which is a cleaner approach than patching v4lutils
Then you go down the rabbit hole. There's a whole bunch of obscure v4l2m2m patches downstream you have to maintain. Guess why I broke up with my RPi4.
Do you know why they are not upstreamed?
They talked about upstreaming:
You are right that I should get round to trying to do that, but as it stands there isn't a hope in hell that it would be accepted - not because it doesn't work - it does - but because it is written as a separate thing that is just attached to ffmpeg due to having started life as a vaapi-v4l2 shim that I'd previously written so it obeys nearly none of ffmpegs normal conventions. But yes - I should get round to reworking it into a better shape for upstreaming.
How would you test it? I mean, are you going to implement it?
DEV.L. h263 H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2 (decoders: h263_rkmpp h263 h263_v4l2m2m ) (encoders: h263 h263_v4l2m2m )
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264_rkmpp h264 ) (encoders: libx264 libx264rgb h264_nvenc h264_omx h264_v4l2m2m nvenc nvenc_h264 )
DEV.L. hevc H.265 / HEVC (High Efficiency Video Coding) (decoders: hevc_rkmpp hevc hevc_v4l2m2m ) (encoders: libx265 nvenc_hevc hevc_nvenc hevc_v4l2m2m )
and
DEV.L. h263 H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2 (decoders: h263_rkmpp_decoder h263 h263_v4l2m2m ) (encoders: h263 h263_v4l2m2m )
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264_rkmpp_decoder h264 ) (encoders: h264_rkmpp_encoder h264_v4l2m2m )
DEV.L. hevc H.265 / HEVC (High Efficiency Video Coding) (decoders: hevc_rkmpp_decoder hevc hevc_v4l2m2m ) (encoders: hevc_rkmpp_encoder hevc_v4l2m2m )
DEV.L. mpeg1video MPEG-1 video (decoders: mpeg1_rkmpp_decoder mpeg1video mpeg1_v4l2m2m )
DEV.L. mpeg4 MPEG-4 part 2 (decoders: mpeg4_rkmpp_decoder mpeg4 ) (encoders: mpeg4 mpeg4_v4l2m2m )
D.V.L. vc1 SMPTE VC-1 (decoders: vc1 vc1_v4l2m2m )
DEV.L. vp8 On2 VP8 (decoders: vp8_rkmpp_decoder vp8 ) (encoders: vp8_rkmpp_encoder vp8_v4l2m2m )
D.V.L. vp9 Google VP9 (decoders: vp9_rkmpp_decoder vp9 vp9_v4l2m2m
Currently only an idea: just pass -c:v h264_v4l2m2m to both the decoder and encoder part of the command. Make sure you have patched v4lutils with mpp, and it should work. It may also not work, due to whatever bug or missing feature.
I don't have a setup atm; when i have one in the next weeks i will check.
Make sure you have patched v4lutils with mpp
I have it. Will try and report later, thanks.
Thanks, curious how it will work out..
Sidenote: when hantro and the request api are stable, i think mainline support could be achieved with a similar approach. See: https://patchwork.ffmpeg.org/project/ffmpeg/cover/20201209202513.27449-1-jonas@kwiboo.se/
First attempt:
sudo ./FFmpeg-encoder-new/ffmpeg -encoders|grep v4l2
ffmpeg version e243e8d Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04)
configuration: --prefix=/usr --disable-libxvid --disable-libx264 --disable-libx265 --enable-rkmpp --enable-nonfree --enable-gpl --enable-version3 --enable-libmp3lame --enable-libpulse --enable-libv4l2 --enable-libdrm --enable-libxml2 --enable-librtmp --enable-libfreetype --enable-openssl --enable-opengl --enable-libopus --enable-libvorbis --disable-shared --enable-decoder='aac,ac3,flac' --extra-cflags=-I/usr/src/5.10.110-rk3588-v4l2/include --disable-cuvid
libavutil 58. 2.100 / 58. 2.100
libavcodec 60. 3.100 / 60. 3.100
libavformat 60. 3.100 / 60. 3.100
libavdevice 60. 1.100 / 60. 1.100
libavfilter 9. 3.100 / 9. 3.100
libswscale 7. 1.100 / 7. 1.100
libswresample 4. 10.100 / 4. 10.100
libpostproc 57. 1.100 / 57. 1.100
V..... h263_v4l2m2m V4L2 mem2mem H.263 encoder wrapper (codec h263)
V..... h264_v4l2m2m V4L2 mem2mem H.264 encoder wrapper (codec h264)
V..... hevc_v4l2m2m V4L2 mem2mem HEVC encoder wrapper (codec hevc)
V..... mpeg4_v4l2m2m V4L2 mem2mem MPEG4 encoder wrapper (codec mpeg4)
V..... vp8_v4l2m2m V4L2 mem2mem VP8 encoder wrapper (codec vp8)
sudo ./FFmpeg-encoder-new/ffmpeg -device /dev/dri/card0 -framerate 30 -f kmsgrab -i - -vf 'hwdownload,format=bgr0' -c:v h264_v4l2m2m -b:v 2000 -y screencast_30fps_2.mkv
ffmpeg version e243e8d Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04)
configuration: --prefix=/usr --disable-libxvid --disable-libx264 --disable-libx265 --enable-rkmpp --enable-nonfree --enable-gpl --enable-version3 --enable-libmp3lame --enable-libpulse --enable-libv4l2 --enable-libdrm --enable-libxml2 --enable-librtmp --enable-libfreetype --enable-openssl --enable-opengl --enable-libopus --enable-libvorbis --disable-shared --enable-decoder='aac,ac3,flac' --extra-cflags=-I/usr/src/5.10.110-rk3588-v4l2/include --disable-cuvid
libavutil 58. 2.100 / 58. 2.100
libavcodec 60. 3.100 / 60. 3.100
libavformat 60. 3.100 / 60. 3.100
libavdevice 60. 1.100 / 60. 1.100
libavfilter 9. 3.100 / 9. 3.100
libswscale 7. 1.100 / 7. 1.100
libswresample 4. 10.100 / 4. 10.100
libpostproc 57. 1.100 / 57. 1.100
[kmsgrab @ 0x55c2c29bd0] Using plane 54 to locate framebuffers.
[kmsgrab @ 0x55c2c29bd0] Template framebuffer is 245: 3840x1080 format 34325258 modifier 0 flags 2.
Input #0, kmsgrab, from 'fd:':
Duration: N/A, start: 1692209160.156394, bitrate: N/A
Stream #0:0: Video: wrapped_avframe, drm_prime, 3840x1080, 29.67 tbr, 1000k tbn
Stream mapping:
Stream #0:0 -> #0:0 (wrapped_avframe (native) -> h264 (h264_v4l2m2m))
[h264_v4l2m2m @ 0x55c2c2d2d0] Could not find a valid device
[h264_v4l2m2m @ 0x55c2c2d2d0] can't configure encoder
[vost#0:0/h264_v4l2m2m @ 0x55c2c2cfc0] Error initializing output stream: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!
[h264_v4l2m2m @ 0x55c2c2d2d0] Could not find a valid device
[h264_v4l2m2m @ 0x55c2c2d2d0] can't configure encoder
This is self explanatory, ffmpeg can not find /dev/video-enc0 device, or does not recognize it. Most likely something is broken.
[ 848.333335] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 848.333373] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 848.333395] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 973.053371] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 973.053405] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 973.053424] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 1428.290486] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 1428.290529] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 1428.290551] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 1444.206694] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 1444.206730] rkisp0-vir0: nonsupport pixelformat:BGR4
[ 1444.206748] rkisp0-vir0: nonsupport pixelformat:BGR4
crw-rw----+ 1 root video 81, 0 Aug 15 10:19 /dev/video0
crw-rw----+ 1 root video 81, 1 Aug 15 10:19 /dev/video1
crw-rw----+ 1 root video 81, 10 Aug 15 10:19 /dev/video10
crw-rw----+ 1 root video 81, 11 Aug 15 10:19 /dev/video11
crw-rw----+ 1 root video 81, 12 Aug 15 10:19 /dev/video12
crw-rw----+ 1 root video 81, 13 Aug 15 10:19 /dev/video13
crw-rw----+ 1 root video 81, 14 Aug 15 10:19 /dev/video14
crw-rw----+ 1 root video 81, 15 Aug 15 10:19 /dev/video15
crw-rw----+ 1 root video 81, 16 Aug 15 10:19 /dev/video16
crw-rw----+ 1 root video 81, 17 Aug 15 10:19 /dev/video17
crw-rw----+ 1 root video 81, 18 Aug 15 10:19 /dev/video18
crw-rw----+ 1 root video 81, 19 Aug 15 10:19 /dev/video19
crw-rw----+ 1 root video 81, 2 Aug 15 10:19 /dev/video2
crw-rw----+ 1 root video 81, 20 Aug 15 10:19 /dev/video20
crw-rw----+ 1 root video 81, 26 Aug 15 10:19 /dev/video21
crw-rw----+ 1 root video 81, 3 Aug 15 10:19 /dev/video3
crw-rw----+ 1 root video 81, 4 Aug 15 10:19 /dev/video4
crw-rw----+ 1 root video 81, 5 Aug 15 10:19 /dev/video5
crw-rw----+ 1 root video 81, 6 Aug 15 10:19 /dev/video6
crw-rw----+ 1 root video 81, 7 Aug 15 10:19 /dev/video7
crw-rw----+ 1 root video 81, 8 Aug 15 10:19 /dev/video8
crw-rw----+ 1 root video 81, 9 Aug 15 10:19 /dev/video9
-rw-rw---- 1 root video 71 Aug 15 10:19 /dev/video-dec0
-rw-rw---- 1 root video 4 Aug 15 10:19 /dev/video-enc0
Hah, it is because bgr0 is not supported by mpp, and maybe RGA is not in use
How to force RGA use?
Dunno, but here is something simpler to verify an NV12 DRM plane:
ffmpeg -t 5 -init_hw_device drm=dr:/dev/dri/renderD128 -filter_hw_device dr -f lavfi -i testsrc=s=1280x720,format=nv12 -vf hwupload,format=drm_prime -c:v h264_v4l2m2m out.mp4
This only works with my fork; it creates drm test sources directly. Normal ffmpeg can not.
I have nv12 file, can you give me some cmd examples? my reply was too late
ffmpeg -t 5 -init_hw_device drm=dr:/dev/dri/renderD128 -filter_hw_device dr -f lavfi -i testsrc=s=1280x720,format=nv12 -vf hwupload,format=drm_prime -c:v h264_v4l2m2m out.mp4
h264_v4l2m2m @ 0x558ab07310] Could not find a valid device
[h264_v4l2m2m @ 0x558ab07310] can't configure encoder
[vost#0:0/h264_v4l2m2m @ 0x558ab07040] Error initializing output stream: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
I will try to grab DRM NV12 from my camera and see what i get, it can take some time.. i was planning to do so and now seems to be the right time, ;)
Another simple idea, generate drm frames with v4l2m2m decoder and encode it again with v4l2m2m encoder
ffmpeg -c:v h264_v4l2m2m -i input.mp4 -c:v h264_v4l2m2m out.mp4
@nyanmisaka it seems that making the encoder multithreaded is not as straightforward as i thought.
The reason is that FFmpeg gives each encoder thread a completely separate context, meaning that i can not access the MPP encoder that i have initiated from another thread. I understand the design choice here, but it is the complete opposite of what mpp does. Mpp needs one instance to be available and the frames/packets to be fed in from multiple sources. It seems that it is not possible to achieve this without actually modifying the ffmpeg threading interfaces.
Do you have any suggestions or experience?
thread count vs fps
The reason is that FFmpeg gives each encoder thread a completely separate context, meaning that i can not access the MPP encoder that i have initiated from another thread. I understand the design choice here, but it is the complete opposite of what mpp does. Mpp needs one instance to be available and the frames/packets to be fed in from multiple sources. It seems that it is not possible to achieve this without actually modifying the ffmpeg threading interfaces.
In fact, commonly used video encoders such as x264, x265 and svt-av1 do not use ffmpeg's frame_thread, but implement it externally and set AV_CODEC_CAP_OTHER_THREADS in ffmpeg.
Since MPP doesn't provide native methods for this, maybe we can use pthread and FIFO in rkmppenc to create child threads for each put_frame and get_packet.
https://github.com/rockchip-linux/mpp/blob/develop/test/mpi_enc_mt_test.cpp https://ffmpeg.org/doxygen/3.1/group__lavc__encdec.html
pseudo code:
rkmppenc.init() {
    codec_ctx->fifo*
    codec_ctx->lock*
    codec_ctx->thd*
    fifo_in = fifo_alloc(threads)
    fifo_out = fifo_alloc(threads)
    // ffmpeg avfifo thread safe? no
    pthread_mutex_init(lock_in)
    pthread_mutex_init(lock_out)
    foreach (threads) {
        pthread_create(thd_out, out_thread())
        pthread_create(thd_in, in_thread())
    }
    // set mpp output non-block mode for get_packet?
    control(MPP_SET_OUTPUT_TIMEOUT, MPP_POLL_NON_BLOCK)
}

rkmppenc.uninit() { ... }

in_thread() {
    while (1) {
        pthread_mutex_lock(lock_in)
        if (fifo_can_read(fifo_in)) {
            frame = fifo_read(fifo_in)
            mppframe = frame2mppframe(frame)
            mpi_encode_put_frame(mppframe)
        }
        pthread_mutex_unlock(lock_in)
    }
}

out_thread() {
    while (1) {
        pthread_mutex_lock(lock_out)
        if (fifo_can_write(fifo_out)) {
            if (ok == mpi_encode_get_packet(&packet))
                fifo_write(fifo_out, packet)
        }
        pthread_mutex_unlock(lock_out)
    }
}

rkmppenc.send_frame() {
    pthread_mutex_lock(lock_in)
    if (fifo_can_write(fifo_in))
        fifo_write(fifo_in, frame)
    pthread_mutex_unlock(lock_in)
}

rkmppenc.receive_packet() {
    pthread_mutex_lock(lock_out)
    if (fifo_can_read(fifo_out))
        packet = fifo_read(fifo_out)
    pthread_mutex_unlock(lock_out)
}
ok i understood what you suggest, but i'm now even more confused about mpp.
https://github.com/rockchip-linux/mpp/blob/develop/test/mpi_enc_mt_test.cpp https://github.com/rockchip-linux/mpp/blob/develop/test/mpi_enc_test.cpp
if i am not mistaken, both of those are multi-threaded, defaulting to a single thread. The thread count can be set by the -s argument.
When those implementations are multi-threaded, they are not sharing the input buffer for parallel threads to consume, but encoding the same file thread-count times in parallel. This is quite weird and has no practical use case, except maybe measuring the performance of mpp.
This can also be verified by the stats pushed to dmesg at the end of encoding: each thread always has the same number of frames.
Also, the graph shows that you don't get a particular benefit from parallelism, because the bottleneck is the hardware chip.
FFmpeg is a different story; there parallelism does benefit, but i am not sure why, and i am also quite confused how mpp handles multiple threads at all.
Indeed. The -s option is not used to improve encoding performance, but to simulate the case of multiple instances.
Then the difference between the two is that mpi_enc_mt_test has independent input and output threads, while the input and output of mpi_enc_test are in the same thread.
still scratching my head over why this would matter at all. one thing i see that is fundamentally different is that the input timeout is NON_BLOCK for mpi_enc_mt_test and BLOCK for mpi_enc_test. But when i set the input timeout to NON_BLOCK in my own code, where put & pull are in the same thread (like mpi_enc_test), it does not give an extra speed advantage.
Maybe if the locking primitives of the individual put & pull interfaces are locked to different threads (as in the mt variant), they won't be waiting for each other and will unleash some kind of monster, but why such synchronization gizmos would bring such a huge performance change is beyond me. maybe it is just better to ask the mpp devs first how to use mpp to get the best out of it.
I was about to do so. It'd be better if they can update docs for multithreaded MPP to maximize the performance.
yeah good call, im just speculating at this point :)
Same issue here https://github.com/JeffyCN/rockchip_mirrors/issues/21
Got this error in the middle of one file playing in kodi:
kernel: RKNPU fdab0000.npu: RKNPU: set rknpu freq: 300000000, volt: 675000
mpp[546]: mpp_buffer: mpp_group 0x7e8070f668 tag unknown caller rkmpp_init_codec mode internal type ion deinit with 8110080 bytes not released
mpp[546]: mpp_buffer:
dumping buffer group 0x7e8070f668 id 3 from mpp_buffer_group_deinit
mpp[546]: mpp_buffer: mode internal
mpp[546]: mpp_buffer: type ion
mpp[546]: mpp_buffer: limit size 0 count 0
mpp[546]: mpp_buffer: used buffer count 2
mpp[546]: mpp_buffer: buffer 0x7ea8029208 fd 46 size 4055040 ref_count 1 discard 0 caller try_proc_dec_task
mpp[546]: mpp_buffer: buffer 0x7ea84e1d48 fd 45 size 4055040 ref_count 1 discard 0 caller try_proc_dec_task
mpp[546]: mpp_buffer: unused buffer count 0
kodi-standalone[546]: *** pw_stream_destroy called from wrong context, check thread and locking: Operation not permitted
The video froze and a couple of seconds later kodi was back to the home screen.
thats fine, the error code is ignorable and fixed in another branch, but i can not merge it now; i am re-writing this thing from scratch again.
Are you using DRM Prime on kodi. If so that is a known issue on some files. https://github.com/hbiyik/FFmpeg/issues/19
Are you using DRM Prime on kodi
Do you in general suggest to use it, or better not to use it?
Can you suggest/hint other settings of kodi that you use (related to video)?
@kyak it took me several months to answer this because i was developing the next version of this codec, but yes, drm prime mode is definitely superior, have a look here: https://forum.radxa.com/t/updated-ffmpeg-with-mpp/17771/18?u=boogiepop
@hbiyik any idea why some videos (it seems that those of SD quality) play fine with "Direct to Plane" and display green square with "EGL" in Kodi?
ffplay displays a black screen with audio.
At the same time, "Direct to Plane" hides all on-screen controls behind the video itself. I.e. I can see that the controls are there, but they are overlapped by the video image. So "Direct to Plane" is not really usable, while "EGL" displays green with SD videos.
@hbiyik any idea why some videos (it seems that those of SD quality) play fine with "Direct to Plane" and display green square with "EGL" in Kodi?
This is a bug in panfork. i only encountered it once, and i think the dimensions of the file are relevant
At the same time, "Direct to Plane" hides all on-screen controls behind the video itself. I.e. I can see that controls are there, but they are overlapped by the video image. So "Direct to Plane" is not really usable, while "EGL" displays green with SD videos.
This is also a bug in kodi. i saw it was fixed in the latest git version, but direct to plane still sometimes produces a black-screen video. Could also be kernel related.
I think the right way to handle this is to always use EGL; EGL is more compatible with the rest of the players. can you provide other files where you get a green screen? maybe i can do something in panfork.
btw you are using the exp_refactor_all branch right?
btw you are using the exp_refactor_all branch right?
Nope, I'm using your ffmpeg-mpp AUR package. I'm not sure it points to this branch.
Here is the file with green output in kodi and blank/black window in ffplay: https://file.io/eYLePwBKjabT
please use this
https://aur.archlinux.org/packages/kodi-ext-git
add ffmpeg's library path via LD_LIBRARY_PATH=/path/to/ffmpeg:$LD_LIBRARY_PATH, and better to use the new branch; there are a bunch of improvements related to drm_prime.
I will check the file with the new branch when i have the environment.
So far I've built ffmpeg from https://github.com/hbiyik/FFmpeg/tree/exp_refactor_all branch and the video displays blank in ffplay.
I will try to build kodi-ext-git nevertheless and force it to use the ffmpeg you've built.
What's strange is that I follow https://github.com/hbiyik/FFmpeg/wiki#compilation, and there are no "*.so" files after make. There are static libs there, though. I wonder what's the point of LD_LIBRARY_PATH as you suggested above?
P.S. Figured I need to use --enable-shared with configure in ffmpeg
@kyak i reproduced all of the issues you have, they are also in latest version of kodi and with the new decoder as well.
I think the way to go is to use EGL; direct to plane simply is not mature. It also does not scale the video if the video resolution is bigger than the screen size, and the video is on top from time to time.
And the green screen issue i think is due to panfork. i will look at whether i can do something, but i am no expert in mesa/panfork.
ok i have no idea, we need a mesa expert here, or to wait for panthor
adding file for reference
here is a path manipulation script for your reference:
adapt the paths accordingly and save it to setpaths.sh
then source setpaths.sh
export PATH="/home/alarm/extusb/ffmpeg:$PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libavcodec:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libavdevice:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libavfilter:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libavformat:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libavutil:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libpostproc:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libswresample:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/home/alarm/extusb/ffmpeg/libswscale:$LD_LIBRARY_PATH"
@kyak i understood the issue, should be fixed in b0256779e63aafc8601a4961fbb5f1d5d6491446
@nyanmisaka here we have another alignment requirement, drmprime frames horizontal strides must be 64 aligned to be properly imported through EGL. Otherwise you get a green screen. Could be a restriction in panfork.
@hbiyik This issue has existed in panfork for a long time. I think we can start giving pancsf/panthor a try. If possible, our feedback about MPP & RGA may help upstream provide AFBC YUV support earlier.
It seems the initial upstream display support is in good shape. So I merged the changes from bbrezillon/linux/panthor-v3+rk3588 into Collabora's rk3588 branch and polished it with the latest patches from the upstream mailing lists.
https://github.com/nyanmisaka/linux-rockchip/tree/mainline-hdmi-gpu-test
In theory, HDMI is available. After lighting up the screen, building the bbrezillon/mesa/v10+panthor branch should enable basic EGL support. I haven't had time to build an image to test. I noticed that you use Arch Linux for daily use, and it's very convenient to create AUR packages on it to test these. You might be interested in it.
thats great. if i can make gbm+egl+mesa work, then maybe i can even try to port the mpp related drivers.
https://github.com/Joshua-Riek/rockchip-kernel/commits/develop-6.1/drivers/video/rockchip/mpp https://github.com/Joshua-Riek/rockchip-kernel/commits/develop-6.1/drivers/video/rockchip/rga3
Btw RK has adapted their MPP & RGA drivers to the 6.1 kernel. I think it shouldn't be too hard to port them into 6.7-rc1 kernel.
@kyak i understood the issue, should be fixed in b025677
@nyanmisaka here we have another alignment requirement, drmprime frames horizontal strides must be 64 aligned to be properly imported through EGL. Otherwise you get a green screen. Could be a restriction in panfork.
@hbiyik i tried the latest exp_refactor_all branch and ffplay is still showing a black screen on the test video.
I'm going to try this with kodi-ext-git and LD_LIBRARY_PATH shortly.
@avafinger
A lot has been added, especially hevc and vp8 encoders with scaling support:
https://github.com/hbiyik/FFmpeg/wiki
should be stable to test if you are interested.