Use non-privileged ports for ElectrumX
TCP/IP port numbers below 1024 are privileged ports; for security reasons, normal users and processes are not allowed to bind them.
We run our containers as a non-root user, and after recent upgrades to the GCP cluster (Jul 7th) the ElectrumX instances stopped working because they failed to bind their TCP and SSL ports:
ERROR:SessionManager:TCP server failed to listen on all_interfaces:80: [Errno 13] error while attempting to bind on address ('0.0.0.0', 80): permission denied
ERROR:SessionManager:SSL server failed to listen on all_interfaces:443: [Errno 13] error while attempting to bind on address ('0.0.0.0', 443): permission denied
INFO:SessionManager:WS server listening on all_interfaces:8080
INFO:SessionManager:WSS server listening on all_interfaces:8443
We updated the ports to the defaults described in the ElectrumX documentation: 50001-50004. The service is still exposed externally on ports 80, 443, 8080, and 8443.
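The remapping can be sketched as a Kubernetes Service that keeps the familiar external ports while targeting the non-privileged ports inside the pod. Object names and labels below are placeholders, and the exact pairing of 50001-50004 with tcp/ssl/ws/wss is an assumption based on common ElectrumX setups, not our actual manifests:

```yaml
# Sketch only: names, labels, and the port pairing are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: electrumx
spec:
  selector:
    app: electrumx
  ports:
    - name: tcp
      port: 80          # port exposed by the Service
      targetPort: 50001 # non-privileged port ElectrumX binds
    - name: ssl
      port: 443
      targetPort: 50002
    - name: ws
      port: 8080
      targetPort: 50003
    - name: wss
      port: 8443
      targetPort: 50004
```

Because clients connect through the Service ports, the container-side change is invisible to them.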
Define podManagementPolicy: Parallel
The default podManagementPolicy is OrderedReady, which waits for each replica to become ready before spinning up the next one. That does not fit our use case: we want three independent replicas, so that if the first one is broken we still have a chance of another one working.
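In a StatefulSet this is a single field change; a minimal sketch, where the names, image, and replica count are placeholders rather than our real configuration:

```yaml
# Sketch only: metadata, image, and replica count are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: electrumx
spec:
  replicas: 3
  podManagementPolicy: Parallel  # start all replicas at once instead of one by one
  serviceName: electrumx
  selector:
    matchLabels:
      app: electrumx
  template:
    metadata:
      labels:
        app: electrumx
    spec:
      containers:
        - name: electrumx
          image: electrumx:latest  # placeholder image
```

Note that Parallel affects only scale-up and scale-down ordering; rolling updates still proceed one pod at a time.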
Read more: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
The changes have already been rolled out to the clusters.