inquam opened this issue 4 years ago
To get Intel Quick Sync hardware transcoding working I needed to add the following to the Plex docker-compose service:
```yaml
devices:
  - /dev/dri:/dev/dri
```
Then I enabled hardware acceleration in Plex > Settings > Transcoder. You also need Plex Pass.
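If it helps, a quick way to check that the render node actually exists on the host before adding the mapping (just a sketch; the package names are for Ubuntu/Debian and may differ on your distro):

```sh
# The kernel should expose the Intel GPU as card0 plus a render node (renderD128)
ls -l /dev/dri

# Optional sanity check that VA-API works on the host
# (needs the vainfo package and an Intel VA-API driver such as
# intel-media-va-driver or i965-va-driver, depending on CPU generation)
vainfo
```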
Maybe this will also work for GPU hardware transcoding?
@inquam if the above answer worked, please report back so I can close the issue. Also, just adding that what @robflate suggested is what I use for Plex on my Synology NAS.
I'm running Nvidia though. The Nvidia driver is installed on the host, nvidia-docker is installed, and I can run the nvidia-smi docker container and see the card, so everything seems to work. I have also made the nvidia runtime the default runtime for Docker, since docker-compose does not support setting runtime in v3.
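For reference, this is roughly what making nvidia the default runtime looks like in /etc/docker/daemon.json (a sketch; the runtime path assumes nvidia-container-runtime is installed in its default location), followed by a Docker restart:

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```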
I have an Intel NUC and the added options seem to be working for me.
@beloso I have a NUC too with an integrated Intel Iris Plus Graphics 655. Does your NUC have an Intel GPU too that supports hardware transcoding?
It does have an Intel GPU and it works. The fans don't blast off, and overall it just feels better.
@beloso @proddy What model of NUCs do you have? I'm looking into getting one and I'm curious about the different models people are using.
I have a NUC10i7FNB; so far I have no regrets.
Mine is a NUC8i5BEH. It's running around 20 containers and CPU is at 15%. Way too much power for what I need.
@beloso that's what I've been looking at. Care to share the memory and SSD you have in it? Also, your complete docker-compose section for Plex? I'm also assuming you're using Linux Mint?
@proddy thanks for that. I was looking at that one too, but I'm probably going to go for the latest model to buy myself a few extra years. I also plan on running some Windows VMs on it, so hopefully the latest model will be better in that regard.
Thanks to both of you. I've gotta do something, as my Synology is suffering too much. :)
@powerdude
> Care to share the memory and SSD you have in it?
SSD: Samsung 970 Evo Plus 500GB
RAM: 2x HyperX Impact 8GB DDR4-2666MHz
> Also, your complete docker-compose section for Plex?
```yaml
plexms:
  image: plexinc/pms-docker:plexpass
  container_name: plexms
  restart: unless-stopped
  networks:
    - t2_proxy
  security_opt:
    - no-new-privileges:true
  ports:
    - "$PLEX_PORT:32400/tcp"
    - "3005:3005/tcp"
    - "8324:8324/tcp"
    - "32469:32469/tcp"
    - "1900:1900/udp" # conflicts with xTeVe
    - "32410:32410/udp"
    - "32412:32412/udp"
    - "32413:32413/udp"
    - "32414:32414/udp"
    - "$PLEX_WEB_TOOLS_PORT:33400"
  volumes:
    - $USERDIR/docker/plexms:/config
    - /media/external/Downloads/:/Downloads
    - /media/external/:/Media
    - /dev/shm:/transcode # Offload transcoding to RAM if you have enough RAM
  devices:
    - /dev/dri:/dev/dri
  environment:
    TZ: $TZ
    HOSTNAME: "nucPlex"
    PLEX_CLAIM: $PLEX_CLAIM
    PLEX_UID: $PUID
    PLEX_GID: $PGID
    ADVERTISE_IP: http://$ADVERTISE_IP:$PLEX_PORT/
  labels:
    - "traefik.enable=true"
    ## HTTP Routers
    - "traefik.http.routers.plexms-rtr.entrypoints=https"
    - "traefik.http.routers.plexms-rtr.rule=Host(`nucplex.$DOMAINNAME`)"
    ## Middlewares
    - "traefik.http.routers.plexms-rtr.middlewares=chain-no-auth@file"
    ## HTTP Services
    - "traefik.http.routers.plexms-rtr.service=plexms-svc"
    - "traefik.http.services.plexms-svc.loadbalancer.server.port=32400"
    ## Watchtower
    - "com.centurylinklabs.watchtower.enable=true"
```
> I'm also assuming you're using Linux Mint?
Ubuntu Server 20.04
Nice, my docker-compose is almost the same (now that I added /dev/dri).
My setup is less powerful, with Corsair Vengeance 2x8GB 2400 DDR4 and an Intel 760P 256GB M.2 SSD, running Ubuntu 20.04.
I kinda think I went a bit overkill with it, but I bought it this month and I'm hoping it will last me for years to come. Hoping to throw more stuff at it as well xD
But this is only for Intel Quick Sync, right? Nvidia hardware transcoding does not seem to work with this.
I don't know of any specifics for other cards. /dev/dri should map to your video card(s) regardless of vendor.
I had a quick look at the Plex forums; there are some topics there that provide more details on this.
How do you map your Nvidia card to other containers? Have you tried mapping it the same way?
I have the same NUC and I am running 64 containers without any issues. And this includes motioneye and zoneminder, which are recording video constantly.
> Mine is a NUC8i5BEH. It's running around 20 containers and CPU is at 15%. Way too much power for what I need.

> I'm running Nvidia though. The Nvidia driver is installed on the host, nvidia-docker is installed, and I can run the nvidia-smi docker container and see the card, so everything seems to work. I have also made the nvidia runtime the default runtime for Docker, since docker-compose does not support setting runtime in v3.
Hi @inquam,
have you tried adding the following environment variables?
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
And mapping the following devices?
/dev/dri/card0:/dev/dri/card0
/dev/dri/renderD128:/dev/dri/renderD128
This should be helpful with NVIDIA GPUs.
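In a compose file that would look roughly like this (just a sketch; the service name and image are placeholders, and the /dev/dri nodes may be numbered differently on your host):

```yaml
plex:
  image: linuxserver/plex
  devices:
    - /dev/dri/card0:/dev/dri/card0
    - /dev/dri/renderD128:/dev/dri/renderD128
  environment:
    NVIDIA_VISIBLE_DEVICES: all
    NVIDIA_DRIVER_CAPABILITIES: compute,video,utility
```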
I have checked the nvidia-smi container to see if it maps any other devices, and also added the ones you mention. So I have tried:
- "/dev/nvidia-uvm:/dev/nvidia-uvm"
- "/dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools"
- "/dev/nvidia0:/dev/nvidia0"
- "/dev/nvidiactl:/dev/nvidiactl"
- "/dev/dri/card0:/dev/dri/card0"
- "/dev/dri/renderD128:/dev/dri/renderD128"
along with
```yaml
NVIDIA_VISIBLE_DEVICES: all
NVIDIA_DRIVER_CAPABILITIES: compute,video,utility
```
No go...
The container from linuxserver.io worked without any issue using a docker run command (or in docker-compose with an older file version that supported the runtime option).
For anyone interested, I got this going with two pieces. One: running nvidia-driver as a docker container instead of installing the native drivers (to avoid all the X11/Xorg dependencies pulled in by the meta package). Two: nvidia-container-toolkit.
General reference here: https://github.com/NVIDIA/nvidia-docker, and OS requirements at the top of this page: https://docs.nvidia.com/datacenter/cloud-native/driver-containers/overview.html
First, install the container toolkit. Ignore that its prerequisites say you need the driver installed first: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker Also ignore the last step about testing after restarting Docker; it will fail without drivers.
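On Ubuntu the toolkit install boils down to roughly this (a sketch of what the guide described at the time; follow the linked install guide for the current repository setup):

```sh
# Add NVIDIA's container toolkit repository and install the toolkit
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```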
Next, follow the driver container instructions (per the GitHub wiki page link) here: https://docs.nvidia.com/datacenter/cloud-native/driver-containers/overview.html. Be sure to change the driver container tag at the 'Run the driver container' step to the appropriate OS from the available options. Test with the docker call to nvidia-smi.
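The docker run equivalent of the compose service below looks roughly like this (a sketch; the driver image tag must match your host OS, and any CUDA base image will do for the test):

```sh
# Run the driver container (mirrors the nvidia-driver compose service below)
sudo docker run -d --privileged --pid=host \
  -v /run/nvidia:/run/nvidia:shared \
  -v /var/log:/var/log \
  --name nvidia-driver \
  nvidia/driver:450.80.02-ubuntu20.04

# Once the driver container logs show the GPU is ready, test it
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```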
If that worked, manually kill the docker run command with a stop and remove, then add it to compose to keep things cleaner. Here's my compose:
```yaml
nvidia-driver:
  image: nvidia/driver:450.80.02-ubuntu20.04
  container_name: nvidia-driver
  restart: unless-stopped
  privileged: true
  pid: host
  volumes:
    - /run/nvidia:/run/nvidia:shared
    - /var/log:/var/log
  labels:
    ## Disable Watchtower Updates
    - "com.centurylinklabs.watchtower.enable=false"

plex:
  #image: plexinc/pms-docker:beta
  image: linuxserver/plex
  container_name: plex
  restart: unless-stopped
  networks:
    - $TRAEFIK_NETWORK
  security_opt:
    - no-new-privileges
  runtime: nvidia
  ports:
    - "32400:32400/tcp"
    - "3005:3005/tcp"
    - "8324:8324/tcp"
    - "32469:32469/tcp"
    - "1900:1900/udp"
    - "32410:32410/udp"
    - "32412:32412/udp"
    - "32413:32413/udp"
    - "32414:32414/udp"
  volumes:
    - ${DOCKERDIR}/plex:/config
    #- /dev/shm:/transcode
    - ${MEDIADIR}:/data
  environment:
    - TZ
    - PUID
    - PGID
    - VERSION=docker
    - NVIDIA_VISIBLE_DEVICES=all
    #- PLEX_UID=${PUID}
    #- PLEX_GID=${PGID}
    #- PLEX_CLAIM=
    #- ADVERTISE_IP=https://plex.${DOMAINNAME}:443,http://192.168.1.15:32400
    #- ALLOWED_NETWORKS=172.16.0.0/12,192.168.0.0/16
  labels:
    - "traefik.enable=true"
    ## HTTP Routers
    - "traefik.http.routers.plex-rtr.entrypoints=https"
    - "traefik.http.routers.plex-rtr.rule=Host(`plex.$DOMAINNAME`)"
    ## Middlewares
    - "traefik.http.routers.plex-rtr.middlewares=chain-no-auth@file"
    ## HTTP Services
    - "traefik.http.routers.plex-rtr.service=plex-svc"
    - "traefik.http.services.plex-svc.loadbalancer.server.port=32400"
```
Note that I have some stuff commented out on Plex that's only relevant to the official Plex container. For Nvidia, I found it easier to set up passthrough with the Linuxserver container.
How to solve the Nvidia issue:
1) Edit /etc/docker/daemon.json:
```json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```
2) Restart Docker. If it fails, run which nvidia-container-runtime.
If the binary is not found, you do not have the dependencies needed for hardware transcoding and should fix that first. Debian needs backported drivers.
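Roughly, assuming a systemd host (just a sketch):

```sh
# Apply the daemon.json change
sudo systemctl restart docker

# Check that the runtime binary exists; if not, install
# nvidia-container-toolkit / nvidia-container-runtime first
which nvidia-container-runtime

# Confirm Docker picked up the new runtime
docker info | grep -i runtime
```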
3) Add the following stanza to your plex or jellyfin container so it runs under the nvidia runtime instead of runc:
```yaml
plex1:
  image: linuxserver/plex:latest
  container_name: plex1
  restart: unless-stopped
  ports:
    - 32400:32400
  networks:
    - yeetmaster
  runtime: nvidia
  privileged: true
  devices:
    - /dev/dri/:/dev/dri/
  # security_opt:
  #   - no-new-privileges:true
  volumes:
    - $USERDIR/docker/plex:/config
    - /mnt/data:/data
    - $USERDIR/docker/plex:/transcode
  environment:
    NVIDIA_VISIBLE_DEVICES: all
    PUID: $PUID
    PGID: $PGID
    TZ: $TZ
    UMASK_SET: "002"
    VERSION: docker
  labels:
    - "runtime=nvidia"
    - "gpus=all"
    - "traefik.enable=true"
```
and add your Traefik labels if you serve Plex over https://plex.domain.tld
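For example, something along the lines of the labels shown earlier in this thread (a sketch; the router/service names and domain are placeholders):

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.plex-rtr.entrypoints=https"
  - "traefik.http.routers.plex-rtr.rule=Host(`plex.$DOMAINNAME`)"
  - "traefik.http.routers.plex-rtr.service=plex-svc"
  - "traefik.http.services.plex-svc.loadbalancer.server.port=32400"
```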
I was able to get Nvidia hardware transcoding working with the linuxserver docker image before. But what settings are needed to get it working with the setup in this stack?