From the looks of it, the dockermod on the worker never actually began initializing. The logs should look quite different (it should install dependencies and also download all the codecs, for example), plus the actual plex binary should never start, since the dockermod replaces it with itself.
So, should the health checks be disabled for the worker containers? That stood out to me as odd.
Regarding healthchecks: yes, healthchecks should be enabled.
But your pod is failing because the dockermod is not running during startup, so plex is starting unchanged, which is not correct. It's odd because in the first few lines linuxserver says it downloaded and installed the dockermod, but nothing ran after that. Can you try uninstalling the helm chart and trying again? Maybe linuxserver's dockermod download was corrupted or something like that.
Also (unrelated to this current bug): for hardware transcoding to work on workers you're also going to need additional network shares for "cache" and "drivers". The working values you linked show how to map them.
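Very roughly, it ends up looking something like this on the worker side (key names here are illustrative only, not the chart's actual schema; the linked working values have the real mappings):

```yaml
# Hypothetical sketch of the extra shared mounts workers need for hardware transcoding.
worker:
  persistence:
    cache:
      enabled: true
      type: nfs                  # any shared storage all workers can reach
      server: 192.0.2.10         # example address
      path: /exports/clusterplex/cache
    drivers:
      enabled: true
      type: nfs
      server: 192.0.2.10
      path: /exports/clusterplex/drivers
```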
plus the actual plex binary should never start
Regarding healthchecks: yes, healthchecks should be enabled.
This doesn't add up for me. Does the dockermod run on the same ports as plex & respond to the same health checks?
I believe something on the Plex side has changed considerably. @craigcabrey try pinning your version to 1.40.2. I was able to get my workers to run at this version but it seems things break afterwards.
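For reference, this is roughly how I pinned mine (the exact value keys depend on the chart version, so treat this as a sketch rather than the chart's actual schema):

```yaml
# Hypothetical values.yaml fragment: pin both PMS and workers to a 1.40.2-based linuxserver tag.
pms:
  image:
    tag: "1.40.2"   # use whichever linuxserver/plex tag corresponds to 1.40.2
worker:
  image:
    tag: "1.40.2"
```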
Yea, that seemed to have moved the needle, thanks! So, definitely broken after that version.
Workers become healthy on their own without any modifications using that tag. @pabloromeo I'll leave it up to you if you want to keep this open to track post-1.40.2 issues.
Was finally able to take a look at this. Indeed something had changed, but this time it wasn't Plex's fault; it was the LinuxServer image. They have removed support for s6-overlay v2 and only allow v3 now. I'm rewriting the setup logic now, and hopefully later today or tomorrow I should release a version that works with the latest images from linuxserver.
Would this affect both the worker and the PMS dockermods? I am seeing breaks in the main server as well; it boots up just like a standard PMS instance.
Yes, the initialization code breaks for both workers and PMS. I'll get a new release out ASAP; it should cover both install options too: dockermod or custom docker images.
I just released v1.4.13 of Clusterplex; the new Helm chart version 1.1.8 is also available here: https://pabloromeo.github.io/clusterplex/. It should fix this s6-overlay issue, and both PMS and Workers should start up as expected.
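If you consume the chart as a dependency of your own umbrella chart, bumping to the fixed release looks something like this (the chart name "clusterplex" is an assumption here, check the repo index):

```yaml
# Chart.yaml of an umbrella chart pulling in the fixed release
dependencies:
  - name: clusterplex          # assumed chart name
    version: "1.1.8"
    repository: "https://pabloromeo.github.io/clusterplex/"
```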
can confirm the latest chart + image comes up healthy!
Describe the bug
I'm trying to use the Helm chart to start up the cluster. Everything comes up except the worker pods. The worker pods use the PMS docker image with an init container. However, I don't see anything else that makes them distinct from a standard Plex docker container. Is this intended? It seems like Plex is trying to start but can't, so the pod health checks never become ready.
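For context, I'm assuming the readiness gate is essentially an HTTP or TCP check against the Plex port, something like the sketch below; I haven't confirmed exactly what the chart defines, so this is only to illustrate what "never becomes ready" means for these pods:

```yaml
# Hypothetical readiness probe against Plex's lightweight health endpoint.
readinessProbe:
  httpGet:
    path: /identity      # responds quickly once Plex is actually up
    port: 32400
  initialDelaySeconds: 15
  periodSeconds: 10
  failureThreshold: 5
```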
For what it's worth, this is an IPv6 dual-stack-first cluster, so I usually have problems with software hardcoding 0.0.0.0. But the Plex container didn't require any modifications. I cross-referenced against a known working config, but no dice: https://github.com/pabloromeo/clusterplex/issues/305
See my values.yaml below.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Worker pods come up.
Desktop (please complete the following information):
k8s + rook ceph
Additional context
values.yaml:
Worker logs: