
[Plex] Invalid value: 32400: provided port is already allocated #1001

Closed: seanthewebber closed this issue 1 year ago

seanthewebber commented 1 year ago

Version: TrueNAS-SCALE-22.12.1

Steps to reproduce:

1. Install two instances of the plexmediaserver chart (plexmediaserver-1 and plexmediaserver-2).
2. Edit the existing instances so their Plex node ports are 32401 and 32402, respectively.
3. Saving the configuration for plexmediaserver-2 fails with:

[EFAULT] Failed to update chart release: Error: UPGRADE FAILED: failed to create resource: Service "plexmediaserver-2-tcp" is invalid: spec.ports[0].nodePort: Invalid value: 32400: provided port is already allocated

[Screenshots attached: Screenshot 2023-02-23 183152, Screenshot 2023-02-23 183209]

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 461, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1186, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1318, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 553, in do_update
    await self.middleware.call('chart.release.helm_action', chart_release, chart_path, config, 'update')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1386, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1346, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1249, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/helm.py", line 44, in helm_action
    raise CallError(f'Failed to {tn_action} chart release: {stderr.decode()}')
middlewared.service_exception.CallError: [EFAULT] Failed to update chart release: Error: UPGRADE FAILED: failed to create resource: Service "plexmediaserver-2-tcp" is invalid: spec.ports[0].nodePort: Invalid value: 32400: provided port is already allocated

Expected result:

Both instances should launch, listening on their respective ports: 32401/TCP and 32402/TCP.

Actual result:

The plexmediaserver-2 release fails to deploy due to a nodePort collision.

Additional information:

The port collision becomes apparent when listing the active services:

root@truenas[~]# k3s kubectl get service -A
NAMESPACE                    NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                     AGE
~~~
ix-plexmediaserver-1   plexmediaserver-1-tcp   NodePort       172.17.146.233   <none>          32401:32401/TCP,80:9076/TCP,443:41599/TCP,1900:38335/TCP    80s
ix-plexmediaserver-1   plexmediaserver-1-udp   ClusterIP      172.17.179.180   <none>          1900/UDP,32410/UDP,32412/UDP,32413/UDP,32414/UDP            80s
ix-plexmediaserver-2   plexmediaserver-2-tcp   NodePort       172.17.231.49    <none>          32402:32402/TCP,80:45109/TCP,443:31382/TCP,1900:36891/TCP   53s
ix-plexmediaserver-2   plexmediaserver-2-udp   ClusterIP      172.17.25.120    <none>          1900/UDP,32410/UDP,32412/UDP,32413/UDP,32414/UDP            53s
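
To narrow it down further, every allocated NodePort can be printed directly. A minimal sketch using kubectl's jsonpath output (the grep value is just the port named in the error):

# Print namespace, service name, and allocated NodePorts for every service,
# then filter for the port the error complains about (32400 here):
k3s kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.ports[*].nodePort}{"\n"}{end}' | grep 32400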

The plexmediaserver-2-udp service remains of type ClusterIP, which conflicts with the ports of the plexmediaserver-1-udp service. If the UDP service type were changed from ClusterIP to NodePort, this issue should be resolved.
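
That change could be tested by hand. A sketch only (this assumes the TrueNAS middleware does not immediately reconcile the service back, which it may well do on the next chart upgrade):

# Hypothetically switch the second instance's UDP service to NodePort:
k3s kubectl patch svc plexmediaserver-2-udp -n ix-plexmediaserver-2 -p '{"spec":{"type":"NodePort"}}'
# Then re-check the allocated ports:
k3s kubectl get svc -n ix-plexmediaserver-2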

Why?

I am attempting to run two Plex servers on the same TrueNAS SCALE installation. This is because a friend is colocating their resources with mine, and they will retain administrator rights to their own instance.

stavros-k commented 1 year ago

Hello @seanthewebber, I'm trying to replicate this, but I can't.

But I don't think ClusterIP is the issue here. In the k3s kubectl get service -A output you shared, I don't see any conflicts (and, specifically, no port 32400 at all).

NodePort binds ports on the host, so there would be conflicts if two services tried to bind the same NodePort.
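
You can reproduce that validation error outside the charts entirely. A throwaway sketch (demo-a and demo-b are made-up names, and this assumes 32400 is free on a test cluster):

# The first allocation of NodePort 32400 succeeds:
k3s kubectl create service nodeport demo-a --tcp=80:80 --node-port=32400
# The second is rejected with the same message as in your traceback:
k3s kubectl create service nodeport demo-b --tcp=80:80 --node-port=32400
# Clean up afterwards:
k3s kubectl delete svc demo-a demo-b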

ClusterIP binds the port only on the pod, and each pod has a different IP address, so there shouldn't be any port conflicts on that front. But I saw that you said it stays at Deploying 0/2, which is not something that should happen. Is the 2 a typo? Normally it should be 1. If it's not a typo, then that is the cause of the port conflict, but I still don't see how it could happen.
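
To illustrate the ClusterIP side of that: two ClusterIP services can expose the same port without conflict, since each gets its own virtual IP (again a throwaway sketch with made-up names):

# Both succeed; port 1900 is bound per service IP, not on the host:
k3s kubectl create service clusterip demo-c --tcp=1900:1900
k3s kubectl create service clusterip demo-d --tcp=1900:1900
k3s kubectl delete svc demo-c demo-d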

Also, I understand that these are not fresh installs but existing installs trying to change ports, is that correct? Did any of those applications, by any chance, previously have host network enabled? And lastly, can you confirm whether the Upgrade Policy is set to "Kill existing pods before creating new ones"?

Thanks

stavros-k commented 1 year ago

@seanthewebber I'm closing this, as I can't reproduce it and a potential fix for your issue has already been merged. However, if you still experience this issue, please let me know. Thanks!