Closed. geek-baba closed this issue 5 years ago.
Could we have the docker run or docker-compose command you used, please? We can then look into this further.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: plex
  namespace: production
  labels:
    app: plex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: plex
  template:
    metadata:
      labels:
        app: plex
    spec:
      containers:
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: production
  name: plex-config
  annotations:
    volume.beta.kubernetes.io/storage-class: "k8s-apps"
spec:
  accessModes:
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: production
  name: plex-transcode
  annotations:
    volume.beta.kubernetes.io/storage-class: "fastnas"
spec:
  accessModes:
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: plex
  namespace: production
spec:
  type: LoadBalancer
  ports:
```
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: plex
  namespace: production
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
```
Yeah, we do not include their manual XML build logic for parsing a claim code environment variable: https://github.com/plexinc/pms-docker/blob/master/root/etc/cont-init.d/40-plex-first-run. In fact, we do not generate anything manually; we have their binary do so and modify from there for things like Plex Pass.
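For context, the official plexinc/pms-docker first-run script linked above reads a `PLEX_CLAIM` environment variable during initial setup. In a Kubernetes manifest, feeding it to that image would look roughly like the sketch below (the claim value is a placeholder; this applies to the plexinc image only, not the linuxserver.io one):

```yaml
# Sketch: passing a claim code to the official plexinc image's container spec.
# Tokens from https://plex.tv/claim expire after roughly four minutes.
env:
  - name: PLEX_CLAIM
    value: "claim-XXXXXXXXXXXXXXXXXXXX"  # placeholder, not a real token
```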
We are not the maintainers of the Plex project; they can coordinate with their own internal software development team to decide how to structure a config file and ensure things will work before releasing changes.
Anything we did would just copy them and probably get out of sync.
If you need this functionality you cannot use our image, as it lacks it and there is no current push to add it.
I was not proposing to add claim code logic to the linuxserver builds; I actually hate the concept when you have four minutes to use the claim code and pass it to the k8s cluster or Docker Swarm. I loved the way the lsio image used to work: you deploy it, log in to Plex, and it discovers an unclaimed server on the network. However, that is not working in either a Docker Swarm or a k8s deployment, so if you want to look into it, that would be great; I will be happy to provide any details you may need. I am traveling overseas for two weeks, so responses may be delayed.
As far as I know, we have not modified anything that would affect this claim process. From an application standpoint, Plex just makes sure you are coming in from a local network to claim the server, so you need to simulate that in the Kubernetes environment. As a workaround, you could deploy a VDI on the same network and hop into it to log in and claim your server.
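One way to simulate a local-network client in Kubernetes, sketched below under the assumption that the node itself sits on your LAN, is to run the pod with host networking so claim requests from LAN machines appear local to Plex. This is a sketch against the Deployment posted earlier, not a recommendation from the maintainers:

```yaml
# Sketch: share the node's network namespace so Plex sees LAN traffic
# as local. Assumes the node is on the same LAN as the claiming browser.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # keeps cluster DNS usable with hostNetwork
```

The trade-off is that the pod then binds ports directly on the node, so only one such pod can run per node and the Service/Ingress plumbing above becomes largely redundant.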
The workaround already exists (https://forums.plex.tv/t/plex-cant-find-my-server-help/274370/9), and it works perfectly fine after manually claiming the server. Also, YES, nothing changed in the LSIO image: if you install it in a simple Docker setup, it just WORKS! However, once you move to a k8s or Docker Swarm environment, the local networking concepts are not as simple as in a plain Docker setup. I don't believe there are enough folks using k8s yet, which is why we are not seeing this issue more widely. But k8s is the future; look at this interesting project, which explains why I am so fascinated with it: https://github.com/munnerz/kube-plex - self-healing and elastic deployment of Plex!
If you are spitballing, then what are you proposing? With the claim concept, even when run via that script, you still have a race condition on the validity of the claim code, so it does not really solve your core issue of using the claim code versus just logging in.
As the input is an ephemeral code, how are you proposing we make it any easier for anyone?
You might be giving us too much credit here for coming up with an elegant solution to this. Do you have any ideas?
Manual steps are required to claim the server; specifically: https://forums.plex.tv/t/plex-cant-find-my-server-help/274370/9
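The linked workaround tunnels Plex's web port so the claim request appears to come from localhost. A rough Kubernetes equivalent, assuming the `plex` Deployment in the `production` namespace posted earlier, would be:

```shell
# Forward Plex's web port from the cluster to your workstation; the claim
# request then originates from 127.0.0.1, which Plex treats as local.
kubectl -n production port-forward deployment/plex 32400:32400
# Then browse to http://127.0.0.1:32400/web and sign in to claim the server.
```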
I have tried it on Debian 9 and Ubuntu 18.04, running Kubernetes and Docker Swarm respectively. The plexinc container is identified upon deployment.
Thanks, team linuxserver.io