jlesage / docker-handbrake

Docker container for HandBrake
MIT License
855 stars 97 forks

Support for Unraid-Nvidia #49

Open Shifter2600 opened 5 years ago

Shifter2600 commented 5 years ago

There is a new plugin available for unraid that passes the GPU to the docker container. See https://forums.unraid.net/topic/77813-plugin-linuxserverio-unraid-nvidia/

It would be great if your container, since it is the most popular on Unraid, supported this passthrough.

jlesage commented 5 years ago

I think that nvidia gpu encoding is not supported by the Linux version of HandBrake, only the Windows version :(

ferrellw commented 5 years ago

I too was looking for hardware acceleration, but for generic Linux, and I stumbled across the link below. As noted in the comments, it's faster but not as efficient when encoding. My NVENC H.264 files ended up being twice the size of the CPU-encoded H.264.

https://negativo17.org/handbrake-with-nvenc-support/

zocker-160 commented 5 years ago

@jlesage I'm using HandBrake 1.2.1 on my Linux machine, and there is an option to use NVEnc in the drop-down menu. It works as expected.

Would it be possible to get that working in Docker as well? I don't get those options in the container, even though it's running on the same machine.

Screenshot_20190321_132714

jlesage commented 5 years ago

Thanks for confirming that NVENC is supported on Linux. I will check if this can be supported by the container, but I'm not sure if the required Nvidia libraries have their source code available...

Aterfax commented 5 years ago

Can confirm I would certainly also like this feature - I know that the Emby and Plex dockers already include the required drivers/libraries, as they use NVENC to accelerate transcoding - so the libraries are available!

It should therefore be relatively straightforward to include them, but it depends on what your Docker base OS is? I've not tried with Alpine!

One example of a docker image being built with the CUDA libraries is here:

https://hub.docker.com/r/tleyden5iwx/ubuntu-cuda/dockerfile/

Here's how TensorFlow does it: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/gpu.Dockerfile

I am less familiar with how to pull it in from the Nvidia Docker images rather than installing it outright, but the documentation for this is here: https://hub.docker.com/r/nvidia/cuda/

Just be aware it might bump the image size considerably! BUT you can strip out some of the bulk by not installing the extras that aren't needed.

Edit: Just thought - when/if you do this, you might need to add some documentation noting that the Docker host will also require the same libraries/drivers to be installed.

For certain OSs this may already exist as downloadable drivers + the nvidia docker container runtime (Ubuntu etc...) or as a plugin (Unraid)
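
For illustration, a very rough sketch of what basing an image on the CUDA images looks like - the tag and environment variables are assumptions on my part, not this project's actual Dockerfile, and a HandBrake build with NVENC enabled would still have to go on top:

FROM nvidia/cuda:10.0-base-ubuntu18.04
# The nvidia container runtime injects the driver libraries at run time;
# these variables tell it which devices and capabilities to expose.
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,video,utility
# ...install/build HandBrake with NVENC support here...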

Aterfax commented 5 years ago

Hadn't looked at this in a while - but the nVidia wiki https://github.com/NVIDIA/nvidia-docker/wiki/CUDA says:

Running a CUDA container requires a machine with at least one CUDA-capable GPU and a driver compatible with the CUDA toolkit version you are using.
The machine running the CUDA container only requires the NVIDIA driver, the CUDA toolkit doesn't have to be installed.

So that should save some space, albeit on the host.

Here's an example of how nVidia are making their own images:

https://gitlab.com/nvidia/cuda/blob/ubuntu16.04/10.0/base/Dockerfile

And an example of someone doing it with Alpine?

https://github.com/cxhernandez/alpinecuda/blob/master/Dockerfile
https://ar.to/notes/cuda

Aterfax commented 5 years ago

Running this with the nvidia docker runtime shows the following with lsmod:

lsmod
Module                  Size  Used by    Tainted: P
nvidia_uvm            864256  0
xt_CHECKSUM            16384  1
iptable_mangle         16384  2
ipt_REJECT             16384  2
ebtable_filter         16384  0
ebtables               32768  1 ebtable_filter
ip6table_filter        16384  0
ip6_tables             24576  1 ip6table_filter
vhost_net              20480  0
tun                    36864 24 vhost_net
vhost                  32768  1 vhost_net
tap                    20480  1 vhost_net
veth                   16384  0
xt_nat                 16384 15
ipt_MASQUERADE         16384 19
iptable_nat            16384  1
nf_conntrack_ipv4      16384 37
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_nat_ipv4            16384  2 ipt_MASQUERADE,iptable_nat
iptable_filter         16384  2
ip_tables              24576  5 iptable_mangle,iptable_nat,iptable_filter
nf_nat                 24576  2 xt_nat,nf_nat_ipv4
xfs                   663552  3
nfsd                   90112 11
lockd                  73728  1 nfsd
grace                  16384  1 lockd
sunrpc                204800 14 nfsd,lockd
md_mod                 49152  2
nvidia_drm             40960  0
nvidia_modeset       1019904  1 nvidia_drm
nvidia              16510976  2 nvidia_uvm,nvidia_modeset
x86_pkg_temp_thermal    16384  0
intel_powerclamp       16384  0
coretemp               16384  0
drm_kms_helper        126976  1 nvidia_drm
kvm_intel             196608  6
kvm                   364544  1 kvm_intel
drm                   319488  3 nvidia_drm,drm_kms_helper
agpgart                32768  1 drm
crct10dif_pclmul       16384  0
crc32_pclmul           16384  0
crc32c_intel           24576  0
ghash_clmulni_intel    16384  0
pcbc                   16384  0
aesni_intel           200704  0
aes_x86_64             20480  1 aesni_intel
crypto_simd            16384  1 aesni_intel
cryptd                 20480  3 ghash_clmulni_intel,aesni_intel,crypto_simd
ipmi_ssif              24576  0
i2c_core               40960  4 nvidia,drm_kms_helper,drm,ipmi_ssif
glue_helper            16384  1 aesni_intel
intel_cstate           16384  0
syscopyarea            16384  1 drm_kms_helper
intel_uncore          102400  0
sysfillrect            16384  1 drm_kms_helper
ahci                   40960  5
tg3                   155648  0
libahci                28672  1 ahci
thermal                20480  0
button                 16384  0
intel_rapl_perf        16384  0
sysimgblt              16384  1 drm_kms_helper
fb_sys_fops            16384  1 drm_kms_helper
ipmi_si                53248  0
pcc_cpufreq            16384  0
ie31200_edac           16384  0

jworcester92 commented 5 years ago

@jlesage Has there been a status update for this feature? Just curious as I get a lot of use out of your docker, and would get even more by having this feature. It goes without saying, but thank you for all the work you have put into this!

Aterfax commented 5 years ago

From my investigations on another docker https://github.com/binhex/arch-jellyfin/issues/2

It appears that the only bit that needs doing on the maintainer side is providing an FFmpeg compiled with the CUDA SDK (either one version, or several versions to support whichever SDK version the user has on their host!)

Aterfax commented 5 years ago

This might help? Here are some existing FFmpeg NVENC automatic compile scripts:

https://gist.github.com/Brainiarc7/3f7695ac2a0905b05c5b

https://github.com/ilyaevseev/ffmpeg-build
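
Roughly, those scripts boil down to installing the ffnvcodec headers and turning on the Nvidia codec bits at configure time. A minimal sketch - the repos, flags and install prefix here are my assumptions, not taken from either script:

# NVENC/NVDEC headers that FFmpeg builds against
git clone https://github.com/FFmpeg/nv-codec-headers.git
make -C nv-codec-headers install
# FFmpeg itself, with the Nvidia hardware codecs enabled
git clone https://github.com/FFmpeg/FFmpeg.git
cd FFmpeg
./configure --enable-nvenc --enable-nvdec --enable-cuvid
make -j"$(nproc)"
make install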

jlesage commented 5 years ago

I haven't had time to put a lot of effort into this, but I'm still trying to see if source code is available for the CUDA SDK. So far, it seems to only be easily available as a pre-built binary package, which of course won't work on Alpine.

jworcester92 commented 5 years ago

I'm sure you probably already found this, but someone here created an Alpine docker that supports glibc:

https://github.com/frol/docker-alpine-glibc

https://stackoverflow.com/questions/44688200/how-to-install-a-minimal-cuda-driver-file-into-alpine-linux

Also, is this the source you would need to compile CUDA with musl?

https://developer.download.nvidia.com/compute/cuda/opensource/

(Found all of these links on that Stackoverflow link.)
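
For what it's worth, the frol/docker-alpine-glibc image boils down to installing sgerrand's glibc compatibility package. A rough sketch of that approach - the Alpine and glibc versions here are just examples:

FROM alpine:3.10
# Install the signing key, then the glibc compatibility package
RUN apk add --no-cache wget ca-certificates && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.30-r0/glibc-2.30-r0.apk && \
    apk add glibc-2.30-r0.apk && \
    rm glibc-2.30-r0.apk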

jlesage commented 5 years ago

It's definitely possible to run glibc applications in Alpine. However, it's not possible for applications compiled with musl to dynamically load libraries compiled with glibc.

The next step is to get the Nvidia source code to compile with musl.

Aterfax commented 5 years ago

It's definitely possible to run glibc applications in Alpine. However, it's not possible for applications compiled with musl to dynamically load libraries compiled with glibc.

The next step is to get the Nvidia source code to compile with musl.

I'm pretty sure that all you'd need to do is provide an FFmpeg compiled with NVENC support (the nvidia docker runtime stuff will handle the rest).

Chances are you could just pull in the libraries and FFMPEG from the Jellyfin FFMPEG compile docker here:

https://hub.docker.com/r/jellyfin/ffmpeg

https://github.com/jellyfin/ffmpeg-build
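
If that route were taken, it would presumably be a multi-stage copy along these lines - the install path inside the jellyfin/ffmpeg image is an assumption on my part, and the copied binary's shared-library dependencies would still need to be satisfied:

FROM jellyfin/ffmpeg AS ffmpeg
FROM debian:buster-slim
# Copy the prebuilt FFmpeg (with NVENC support) out of the Jellyfin image
COPY --from=ffmpeg /usr/lib/jellyfin-ffmpeg /usr/lib/jellyfin-ffmpeg
ENV PATH="/usr/lib/jellyfin-ffmpeg:${PATH}"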

djaydev commented 5 years ago

I couldn't get Alpine to work with nvidia at all, so I copy/pasted jlesage's Dockerfile onto his Debian image to get HandBrake NVENC working. It's not well done, but maybe @jlesage can take it over? It's double the size of the Alpine HandBrake image, but that's because I don't know what I'm doing.

jworcester92 commented 5 years ago

Thank you @djaydev for working on this; I noticed it recently and have been testing it out.

zocker-160 commented 5 years ago

@jworcester92 @Aterfax I've put some time into this and thanks to @djaydev 's template I've managed to get a Docker container working with NVENC

Screenshot_20191003_172524

you can check it out here if you are interested. Once the build process is done, it will be available in the docker hub repo as well.

djaydev commented 5 years ago

@jworcester92 @Aterfax I've put some time into this and thanks to @djaydev 's template I've managed to get a Docker container working with NVENC

you can check it out here if you are interested. Once the build process is done, it will be available in the docker hub repo as well.

The container I made already had NVENC enabled. I'm curious, what does using CUDA add? https://hub.docker.com/r/djaydev/handbrake

zocker-160 commented 5 years ago

@djaydev your image didn't work for me at all :-( it always used the CPU. EDIT: and for me there was no option to select NVENC in the drop-down.

djaydev commented 5 years ago

@zocker-160 Ok, no worries. The most typical reason users report it not working is the NVIDIA variables: NVIDIA_VISIBLE_DEVICES=all, NVIDIA_DRIVER_CAPABILITIES=all, and --runtime=nvidia. A decent number of people do use it successfully, though mostly on Unraid, so that might be it.

If you already made another one no use troubleshooting it.

zocker-160 commented 5 years ago

@djaydev yes I am aware of that, that's why I put

ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all

right into the image

but I wonder how your image can even work without the startapp.sh. So I have grabbed your Dockerfile and built upon it. (I hope that's OK with you.)

When I add the CUDA drivers, the drop-down menu suddenly shows NVENC as an option. Those packages are a requirement to make Nvidia GPUs work, at least that's what Nvidia says...

djaydev commented 5 years ago

It's all good!!! I did the same thing and used jlesage's Dockerfile, so feel free. I'm just discussing to learn about CUDA if need be, or to help others if they have issues using my container as well.

It has the startapp.sh in the root of the container.
You can verify by running docker exec -ti {containerName} bash and then ls /

Also, NVENC doesn't need CUDA packages. One example is the Plex docker, which uses NVDEC and NVENC without adding any CUDA packages to it. The --runtime=nvidia flag (I think it's this one) adds the nvidia libraries HandBrake needs.

Here's some sample info from my container showing the startapp.sh and nvidia stuff:

root@unraid:~# docker exec -ti HandBrake bash
root@4d72c164647f:/tmp# ls / | grep startapp.sh
startapp.sh
root@4d72c164647f:/tmp# ls /usr//lib/x86_64-linux-gnu/ | grep nvidia
...
libEGL_nvidia.so.0
libEGL_nvidia.so.430.14
libGLESv1_CM_nvidia.so.1
libGLESv1_CM_nvidia.so.430.14
libGLESv2_nvidia.so.2
libGLESv2_nvidia.so.430.14
libGLX_nvidia.so.0
libGLX_nvidia.so.430.14
libnvidia-cfg.so.1
libnvidia-cfg.so.430.14
libnvidia-compiler.so.430.14
libnvidia-eglcore.so.430.14
libnvidia-encode.so.1
libnvidia-encode.so.430.14
libnvidia-fatbinaryloader.so.430.14
libnvidia-fbc.so.1
libnvidia-fbc.so.430.14
libnvidia-glcore.so.430.14
libnvidia-glsi.so.430.14
libnvidia-ifr.so.1
libnvidia-ifr.so.430.14
libnvidia-ml.so.1
libnvidia-ml.so.430.14
libnvidia-opencl.so.1
libnvidia-opencl.so.430.14
libnvidia-ptxjitcompiler.so.1
libnvidia-ptxjitcompiler.so.430.14
libnvidia-tls.so.430.14
libvdpau_nvidia.so.1
libvdpau_nvidia.so.430.14

zocker-160 commented 5 years ago

Ah ok, understood. I will have a look into that; removing the CUDA drivers would make my image way smaller.

Will do some more testing, maybe I missed something. EDIT: yes, it does indeed work without it - thanks for the tip!

Does Unraid maybe do something different (in regards to GPU and containers) compared to a "normal" Linux machine?

djaydev commented 5 years ago

Does Unraid maybe do something different (in regards to GPU and containers) compared to a "normal" Linux machine?

Hmm, that's a good question. Someone made an easy-to-use Unraid plugin that installs a kernel thingy and Nvidia drivers, so I just use that. It would explain why it works on Unraid without CUDA packages inside the container.

zocker-160 commented 5 years ago

OK, so after some testing I figured out how it works with the Nvidia GPU:

Since Docker version >= 19.03 you don't need the nvidia-docker2 package, nor do you have to install the Nvidia CUDA drivers into the Docker image, unless you need CUDA.

In our case HandBrake doesn't use CUDA (AFAIK), so setting

ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,video,utility (or all)

with --runtime=nvidia or --gpus all is enough to enable NVENC
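
For anyone following along, a minimal docker run along those lines might look like this - the image name, port and volume paths are placeholders, not this project's:

docker run -d --name handbrake-nvenc \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -p 5800:5800 \
  -v /path/to/config:/config \
  -v /path/to/watch:/watch \
  -v /path/to/output:/output \
  your-handbrake-nvenc-image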

@djaydev thank you for the hints ;)

djaydev commented 5 years ago

In our case HandBrake doesn't use CUDA (AFAIK), so setting

ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,video,utility (or all)

with --runtime=nvidia or --gpus=all is enough to enable NVENC

@djaydev thank you for the hints ;)

Sure thing :) That's what I'm tracking as well. Still not sure why my container didn't work for you if you had all those settings set correctly, but since you built one already there's no need to troubleshoot further.

insaneaux commented 4 years ago

@djaydev Thank you for this container. I have everything working just great, though I do have a couple of questions. When encoding, I certainly notice the higher performance while using the NVENC preset, though my CPUs are still getting pegged. Is there a way to offload this completely to the GPU?

Also, during the process I noticed from the nvidia-smi output that it didn't seem to be using, for lack of a better way to put it, "enough" of the resources given to it. [nvidia-smi screenshot]

I see 10% GPU utilization (while my CPUs are pegged) and 82 MiB out of 2 GiB of memory? Am I seeing that correctly? Not sure what I am doing wrong or what I could do differently to shift the load to the GPU. I am running the Nvidia version of unRaid with the correct env values.

Any input is appreciated.

djaydev commented 4 years ago

I see 10% GPU utilization (while my CPUs are pegged) and 82 MiB out of 2 GiB of memory? Am I seeing that correctly? Not sure what I am doing wrong or what I could do differently to shift the load to the GPU. I am running the Nvidia version of unRaid with the correct env values.

Any input is appreciated.

The NVENC preset refers to Nvidia encoding, and HandBrake doesn't use NVDEC for Nvidia decoding. That just means that your CPU is doing the decoding while the GPU does the encoding. I believe that's why you're seeing high CPU usage.

To fully offload to your Nvidia GPU you'd need something like FFmpeg, which supports both NVDEC and NVENC. FFmpeg is not very easy to use directly, from what I can find.

Here's an example command I run on Unraid using a Docker container with FFmpeg:

ffmpeg -hwaccel nvdec -i video.avi -c:v hevc_nvenc -rc:v vbr -rc-lookahead:v 32 -brand mp42 -ac 2 -c:a libfdk_aac -b:a 128k newvideo.mp4
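
Roughly what those flags do, per the FFmpeg documentation (libfdk_aac requires an FFmpeg build that includes it):

# -hwaccel nvdec                   decode the input on the GPU (NVDEC)
# -c:v hevc_nvenc                  encode HEVC on the GPU (NVENC)
# -rc:v vbr -rc-lookahead:v 32     variable-bitrate rate control with 32 frames of lookahead
# -brand mp42                      major-brand tag written by the MP4 muxer
# -ac 2 -c:a libfdk_aac -b:a 128k  stereo AAC audio at 128 kbps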

insaneaux commented 4 years ago

Thanks, can you explain ENC vs DEC? I have a file and I want to encode it to a specific format. You say HandBrake doesn't use NVDEC, but I am not using that; NVENC is the choice here. If you can clear up the confusion here I would appreciate it - most of it is mine :) Should I just rip native and then use NVENC to convert to the format I want, so that only the GPU gets used, yes?

djaydev commented 4 years ago

Thanks, can you explain ENC vs DEC? I have a file and I want to encode it to a specific format. You say HandBrake doesn't use NVDEC, but I am not using that; NVENC is the choice here. If you can clear up the confusion here I would appreciate it - most of it is mine :) Should I just rip native and then use NVENC to convert to the format I want, so that only the GPU gets used, yes?

Hi, I think there's a language barrier here? If so, sorry about that.

You said your "CPUs are pegged" = the CPU is doing the decoding. NVENC = encoding only.

insaneaux commented 4 years ago

Haha, gotcha. No sir, I understand now. Just learning this stuff. I found a more complete answer below, but it doesn't really tell me why more resources aren't being used when encoding. Oh well. I could certainly jail off fewer CPUs for the container or move the decoding to late at night. [screenshot]

Thanks djaydev again for all your hard work!

mneumark commented 4 years ago

@djaydev That's great work. @jlesage Any interest in merging the changes @djaydev made into your repo?

joelang1699 commented 4 years ago

I too was looking for hardware acceleration, but for generic Linux, and I stumbled across the link below. As noted in the comments, it's faster but not as efficient when encoding. My NVENC H.264 files ended up being twice the size of the CPU-encoded H.264.

https://negativo17.org/handbrake-with-nvenc-support/

Yeah, I've got the same issue. I was re-encoding some 1080p H.265 MKVs to 720p H.264 MKVs (my GPU only supports this) and ended up with files three times the size of the originals.

Re-encoding the same 1080p H.265 MKVs to 720p H.265 MKVs on the CPU results in half the file size.

zocker-160 commented 4 years ago

@joelang1699 this is not really a software issue; that is just how GPU video encoding works. You simply cannot reach the same file size and quality as with CPU encoding.

joelang1699 commented 4 years ago

@joelang1699 this is not really a software issue; that is just how GPU video encoding works. You simply cannot reach the same file size and quality as with CPU encoding.

Fair enough, I was just surprised at the file size.

harryt04 commented 4 years ago

EDIT: I'm an idiot. I had H.265 (H.265) set as the video encoder, not NVENC. For anyone else that is a noob to HandBrake like I am, you have to go to the video settings and select NVENC as the video encoder. Disregard the following.

Can someone help me? Maybe I'm missing something, but as @zocker-160 suggested, I have set the following variables, and yet no matter what I try, HandBrake will only transcode on the CPU. I've successfully gotten Plex to hardware-transcode with my GPU. Can I not use the GPU for two docker containers simultaneously? Is that why it won't work?

NVIDIA_VISIBLE_DEVICES all
NVIDIA_DRIVER_CAPABILITIES  all
--runtime=nvidia

zocker-160 commented 4 years ago

Since @harryt04 found the issue I will not address that, but I want to comment on this:

Can I not use the GPU for two docker containers simultaneously? Is that why it won't work?

Just in case someone else is interested in that: generally speaking, you can use as many Docker containers as you want with one single GPU, but you have to keep in mind that Nvidia limits the number of NVENC sessions / transcodes you can run at the same time. On consumer-level GTX and RTX GPUs it is limited to 2 (it's a pure driver / software limit, the hardware is capable of way more).

So running 3 HandBrake Docker instances and trying to transcode at the same time will not work.

EDIT: just in case anyone wants the full details: https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

harryt04 commented 4 years ago

On consumer-level GTX and RTX GPUs it is limited to 2 (it's a pure driver / software limit, the hardware is capable of way more).

True, but if someone wants to go the route of potentially upsetting Nvidia and unlocking the hardware they purchased (which I have no moral qualms with), they might find more information on how to do that here.

ErroneousBosch commented 4 years ago

For NVENC and NVDEC to work, I am pretty sure that the image has to have the Nvidia drivers installed/integrated. The Plex container, for instance, gains this by virtue of being based on the Ubuntu container. You can verify this by trying to run nvidia-smi inside the container (which fails for this one).
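
For reference, that check is simply the following (the container name is whatever yours is called):

docker exec -it HandBrake nvidia-smi
# If the GPU and driver version are listed, the runtime passthrough works;
# "command not found" or libnvidia-ml errors mean the driver libraries
# aren't visible inside the container.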

Regarding unlocking encoding on Linux or Windows, you may find this useful

zocker-160 commented 4 years ago

For NVENC and NVDEC to work, I am pretty sure that the image has to have the Nvidia drivers installed/integrated. The Plex container, for instance, gains this by virtue of being based on the Ubuntu container. You can verify this by trying to run nvidia-smi inside the container (which fails for this one).

@ErroneousBosch no, this is not the case. For NVENC / NVDEC you don't need the Nvidia driver inside the container; you only need that if you want to use CUDA, which HandBrake doesn't use.

You can look at my Dockerfile here, which I use for the HandBrake NVENC container, and it works.

ErroneousBosch commented 4 years ago

Hmm, so what is missing from this container to let it use NVENC/NVDEC? I already have stability issues with this one in my environment, though.

I'm gonna try your image and see

JeffBaumgardt commented 4 years ago

Forgive me for jumping in, I too would like to use my GTX for transcoding. I won't be crazy and run multiple containers at once, so I'm not worried there. Is this (@jlesage) Docker container working with this yet? I saw a link to someone's (@zocker-160's) Dockerfile and I didn't know whether that one is an addendum to, or a replacement for, the owner's.

I really love the hands-free nature of this and I want to put my 1080 to use.

zocker-160 commented 4 years ago

hey @JeffBaumgardt I have created a new repository with a handbrake docker image which supports NVenc encoding, you can find it here: https://github.com/zocker-160/handbrake-nvenc-docker

this container is standalone and gets updated by me, for now it is still Handbrake 1.3.2, but I am working on the 1.3.3 update already ;)

EDIT: my docker image is a replacement for this image here which doesn't support Nvidia NVenc

JeffBaumgardt commented 4 years ago

@zocker-160 Thanks I'll get it going and let you know if I run into any issues.

zocker-160 commented 4 years ago

@JeffBaumgardt sure no problem, feel free to open an issue ;)

stoli412 commented 3 years ago

hey @JeffBaumgardt I have created a new repository with a handbrake docker image which supports NVenc encoding, you can find it here: https://github.com/zocker-160/handbrake-nvenc-docker

this container is standalone and gets updated by me, for now it is still Handbrake 1.3.2, but I am working on the 1.3.3 update already ;)

EDIT: my docker image is a replacement for this image here which doesn't support Nvidia NVenc

@jlesage Any chance of getting NVenc incorporated into your image directly?

stavros-k commented 3 years ago

I'd also vote for nvenc integration into this image :)

ibasaw commented 3 years ago

+1

Gratobuchr commented 3 years ago

+1

isaacolsen94 commented 3 years ago

+1