gen-angry opened 4 months ago
My ASRock Arc A310 is working. I installed the normal drivers on plain Ubuntu under Proxmox and passed the render device through. I had already done some troubleshooting with the Intel driver (and with my previous AMD card, for Jellyfin) beforehand, though.
Has there been any progress on this? I'm seeing the same thing with Podman on an N5105 chip.
HandBrakeCLI --help | grep -A15 "video encoder:"
shows
[21:01:54] hb_display_init: using VA driver 'iHD'
libva info: VA-API version 1.22.0
libva info: User environment variable requested driver 'iHD'
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
[21:01:54] qsv: is available on this system
so it should work, but there is no QSV entry in the UI.
I'm having the same issue with the container running in a Proxmox LXC. Jellyfin works fine, so I know how to pass through the hardware encoder, and I'm certain it's working. When I go to select a hardware encoder in the video tab, nothing is available.
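For anyone else following along, the usual way the DRM devices get exposed to a Proxmox LXC (a rough sketch, not my exact config; the CTID is a placeholder) is something like:

# /etc/pve/lxc/<CTID>.conf
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir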
HandBrakeCLI --help | grep -A15 "video encoder:"
[21:56:44] Compile-time hardening features are enabled
Cannot load libnvidia-encode.so.1
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
[21:56:44] hb_display_init: attempting VA driver 'iHD'
libva info: VA-API version 1.22.0
libva info: User environment variable requested driver 'iHD'
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
libva info: VA-API version 1.22.0
libva info: User environment variable requested driver 'iHD'
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
libva info: VA-API version 1.22.0
libva info: User environment variable requested driver 'iHD'
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
[21:56:44] hb_display_init: using VA driver 'iHD'
libva info: VA-API version 1.22.0
libva info: User environment variable requested driver 'iHD'
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
[21:56:44] qsv: is available on this system
[21:56:44] hb_init: starting libhb thread
[21:56:44] thread 79a7aeebdb30 started ("libhb")
-e, --encoder <string> Select video encoder:
svt_av1
svt_av1_10bit
qsv_av1
qsv_av1_10bit
ffv1
x264
x264_10bit
qsv_h264
x265
x265_10bit
x265_12bit
qsv_h265
qsv_h265_10bit
mpeg4
mpeg2
HandBrake has exited.
I get the following in the HandBrake activity log:
[21:42:46] Compile-time hardening features are enabled
Cannot load libnvidia-encode.so.1
[21:42:46] hb_qsv_make_adapters_list: MFXVideoCORE_QueryPlatform failed impl=0 err=-16
[21:42:46] qsv: is available on this system
[21:42:46] hb_init: starting libhb thread
[21:42:46] hb_init: starting libhb thread
[21:42:46] hb_init: starting libhb thread
Installing libva-utils and running vainfo --display drm --device /dev/dri/renderD129 gives me:
Trying display: drm
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.22 (libva 2.20.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 24.4.4 ()
vainfo: Supported profile and entrypoints
VAProfileNone : VAEntrypointVideoProc
VAProfileNone : VAEntrypointStats
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSliceLP
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointEncSliceLP
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileVP9Profile0 : VAEntrypointEncSliceLP
VAProfileVP9Profile1 : VAEntrypointVLD
VAProfileVP9Profile1 : VAEntrypointEncSliceLP
VAProfileVP9Profile2 : VAEntrypointVLD
VAProfileVP9Profile2 : VAEntrypointEncSliceLP
VAProfileVP9Profile3 : VAEntrypointVLD
VAProfileVP9Profile3 : VAEntrypointEncSliceLP
VAProfileHEVCMain12 : VAEntrypointVLD
VAProfileHEVCMain422_10 : VAEntrypointVLD
VAProfileHEVCMain422_10 : VAEntrypointEncSliceLP
VAProfileHEVCMain422_12 : VAEntrypointVLD
VAProfileHEVCMain444 : VAEntrypointVLD
VAProfileHEVCMain444 : VAEntrypointEncSliceLP
VAProfileHEVCMain444_10 : VAEntrypointVLD
VAProfileHEVCMain444_10 : VAEntrypointEncSliceLP
VAProfileHEVCMain444_12 : VAEntrypointVLD
VAProfileHEVCSccMain : VAEntrypointVLD
VAProfileHEVCSccMain : VAEntrypointEncSliceLP
VAProfileHEVCSccMain10 : VAEntrypointVLD
VAProfileHEVCSccMain10 : VAEntrypointEncSliceLP
VAProfileHEVCSccMain444 : VAEntrypointVLD
VAProfileHEVCSccMain444 : VAEntrypointEncSliceLP
VAProfileAV1Profile0 : VAEntrypointVLD
VAProfileAV1Profile0 : VAEntrypointEncSliceLP
VAProfileHEVCSccMain444_10 : VAEntrypointVLD
VAProfileHEVCSccMain444_10 : VAEntrypointEncSliceLP
Clearly it's being detected in the container, but I can't get it to actually initialise when HandBrake is loaded.
The only thing I can think might be causing the error is that the card is card1 and renderD128, rather than card0. I can't do - /dev/dri/card1:/dev/dri/card0 though, as that fails to run with:
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error creating device nodes: mount src=/dev/dri/card0, dst=/var/lib/docker/overlay2/c1499c1f9f02555f29ba8b8b989570433c62a3a02ea85ab03ea10bdfcbdb8daf/merged/dev/dri/card0, dstFd=/proc/thread-self/fd/8, flags=0x1000: no such file or directory: unknown
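For reference, keeping the devices under their real host names instead of remapping them would look something like this in compose (a sketch only; adjust the names to whatever ls /dev/dri shows on the host):

devices:
  - /dev/dri/card1:/dev/dri/card1
  - /dev/dri/renderD128:/dev/dri/renderD128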
Actually, now that I look at @gen-angry 's post in more detail, we do have that in common: our primary card is card1 with renderD128.

I wonder if the issue is that it's not card0 with renderD128, or card1 with renderD129, and that's causing some sort of initialisation issue?

It also seems like even if I don't include /dev/dri:/dev/dri etc. they're still all accessible in the container, which makes me wonder if that line is even necessary now? And could that be the issue with not being able to cross-bind a render device?
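A quick way to check what the container can actually see (a sketch; substitute your own container name):

docker exec <container-name> ls -l /dev/dri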
Good catch with the card1 vs card0. I have the onboard GPU disabled entirely, as it's not needed with the Arc card, so it is strange that mine still comes up as card1 anyway.
One thing I've noticed that fixes it: if I chmod 777 /dev/dri/card1 and /dev/dri/renderD128 on the host and then restart the container, it works. Obviously I'd rather not keep doing that; it's not persistent through reboots and I don't want to make it that way. It might also work if I run the container as root, but I would greatly prefer to keep everything rootless for security reasons. I don't need root for Jellyfin and Immich, which both work great, so I wonder if it's something weird with the way this software stack is set up and its permissions.
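One thing that might be worth trying for the rootless case (an assumption on my part, not something I've verified with this image): with the crun runtime, Podman can carry the host user's supplementary groups into the container via the special keep-groups value, which would avoid the chmod 777 on the device nodes:

# sketch only: keep-groups needs rootless mode with crun; device path as on my host
podman run --device /dev/dri/renderD128 --group-add keep-groups ...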
al@202server:/dev/dri$ ls -l
total 0
drwxr-xr-x 2 root root 80 Mar 12 00:52 by-path
crw-rw---- 1 root video 226, 1 Mar 12 00:52 card1
crw-rw---- 1 root render 226, 128 Mar 12 00:52 renderD128
al@202server:/dev/dri$ groups
al adm cdrom sudo dip video plugdev lxd libvirt render
al@202server:/dev/dri$
My host user has both the render and video groups, and the render/video groups own the device paths. I've also noticed that the readme says:
Changing, on the host, the group owning the /dev/dri device. For example, to change the group to video:
sudo chown root:video /dev/dri/*
So I need to change renderD128 to be owned by the video group as well? Maybe this is where the issue lies, if that's what the software stack is expecting. If I did that, though, it feels like it would break access for my other containers, so it wouldn't be a real solution.
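Rather than changing ownership on the host, another option might be to look up the GID that already owns the render node and hand that GID to the container instead. A quick host-side check (sketch):

stat -c '%G %g' /dev/dri/renderD128   # group name and numeric GID owning the render node
getent group render video             # numeric GIDs of the render and video groups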
I've got an IPMI, so I've got a permanent card0 that I can't get rid of.

Interesting that chmod 777 on the host works; I've just tested that and it worked perfectly for me too. However, user: 0:0 and adding the video and render groups to my docker compose file didn't. I also didn't need to pass the card through at all; just passing renderD128 (the VA-API interface, iirc?) worked absolutely fine.
I'm going to do some more testing. I have absolutely no idea why, but this seems to be a permissions issue. It's strange that granting the groups and running the container as root doesn't work, though.
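One possible explanation for the groups not helping (as far as I understand it): group names in group_add / --group-add are resolved against /etc/group inside the image, so if the image's render group has a different GID than the host, or doesn't exist, the membership never lines up with the host device node. Passing the numeric host GID sidesteps that. A sketch, with the GID as a placeholder value:

group_add:
  - "993"   # replace with the host's render GID, e.g. from: getent group render
devices:
  - /dev/dri/renderD128:/dev/dri/renderD128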
Current Behavior
I can't seem to get QSV working in HandBrake and I'm not sure where to look further.
I am running this container on an Ubuntu 24.10 machine using Podman, rootless, on my user account. The host machine is an i5-6500 with an Arc A310 card. It works well in Jellyfin.
User account on host has both the video (44) and render (993) groups added.
Attached container file below.
Inside the container, the group seems to be passed through OK?
When I cd to '/output' and do 'touch test.txt', the created file correctly shows as owned by my user:
so it is using my user account.
I found this command in an earlier issue, #265, which seems to work and correctly finds my card:
Yet no QSV options in the GUI:
I'm not sure where to go from here.
Expected Behavior
QSV options to appear in the GUI and be usable.
Steps To Reproduce
Install this container using the container file supplied above. Add the video and render groups to the user account.
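Adding the groups on the host, for completeness (sketch; username is a placeholder):

sudo usermod -aG video,render <username>
# log out and back in (or reboot) so the new group membership takes effect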
Environment
Container creation
podman quadlet container in user account.
My container file:
Container log
Container inspect
Anything else?
Thank you, I would appreciate any advice if I've missed something or am doing something wrong.
Edit: I have since upgraded to Ubuntu 24.10 and Podman 5.0.3, as I needed Podman 5 for another container. The issue persists, though.