owenchenxy opened this issue 2 years ago
I just encountered the same thing, and by chance saw that you already opened an issue for it :) In addition to the solution that you proposed, there is a different one: have the port setting only affect the service, but not the container (so that the container will expose 5432 to match postgres, and the service will know to use port 5432 on the container, but expose 8000 to everything else). Even better would be if the service didn't refer to a port number, but rather a port name; that way, the container can do whatever it wants, as long as it adds a name to the port spec, and then the service only needs to know about that port name.
Thanks a lot for the alternative! But in this way, we can't set the port we want for the service, because in a real production environment we must use a particular port specified by another team, which in this case is 5433 for the pg service.
Thank you for your messages. I will investigate this issue this Wednesday and keep you updated.
@owenchenxy After investigation, changing the port will not be as easy as just setting the port parameter in postgresql.conf. If we want to change the port, we also need to pass the new port value to:
- pg_isready in the StatefulSet specs, under the fields livenessProbe and readinessProbe
- pg_basebackup in the BaseConfigMap
- pg_dumpall in the BaseConfigMap
Unfortunately, the pg_* binaries do not check the values in postgresql.conf, and therefore we will have to specify the new port number among their command-line options.
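For illustration, the probes would have to carry the new port explicitly, along these lines (a minimal sketch assuming port 5433; the field layout below is the standard Kubernetes probe spec, not necessarily the exact template Kubegres generates):

livenessProbe:
  exec:
    command:
      - sh
      - -c
      - pg_isready -U postgres -h localhost -p 5433 # pg_isready ignores postgresql.conf, so the port must be passed with -p
  initialDelaySeconds: 60
  periodSeconds: 20
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - pg_isready -U postgres -h localhost -p 5433
  initialDelaySeconds: 5
  periodSeconds: 10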
Moreover, the backup logic in the BaseConfigMap can be overridden by users in a custom and separate ConfigMap, and Kubegres will be unable to update the port in such a custom ConfigMap.
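For instance, a user-supplied ConfigMap overriding the backup logic would have to hard-code the port itself, roughly as follows (a sketch only; the ConfigMap name, script key, host and path are hypothetical, not necessarily the names Kubegres expects):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-postgres-conf
data:
  backup_database.sh: |
    #!/bin/sh
    # pg_dumpall does not read postgresql.conf, so the non-default port
    # must be passed explicitly with -p
    pg_dumpall -h my-postgres-service -p 5433 -U postgres > /var/lib/backup/backup.sql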
I think that there's been a misunderstanding in my alternative suggestion ;) The suggestion is:
other pods --> <configured service> --> service created by kubegres --> named port 5432 --> kubegres pods
This specifically means that the setting in the kubegres resource would only affect the service, not the pod. Then, the service itself could use the default port to talk to the kubegres pods, and that way everything related to the postgres binaries could simply continue to use its defaults. I added the naming of that port just for good measure / engineering / whatever.
That way, the only thing that would need to change is that the configuration in the kubegres resource should not affect the port exposed on the kubegres pods (as it currently does), but continue to affect the port on the service (as it also currently does).
This would make the implementation a lot easier, and allow users to choose the ports they prefer on the services.
@LanDinh thank you for clarifying your previous message. Could you please provide YAML examples showing how your approach would be applied if I have to change the way Kubegres works?
Here's a minimal example, stripping away about everything not related to ports :)
Service:
apiVersion: v1
kind: Service
metadata:
  name: some-service-name
spec:
  selector:
    app: some-label
  ports:
    - protocol: TCP
      port: <port configured in kubegres resource here>
      targetPort: postgres # this is how to use named ports
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-deployment-name
  labels:
    app: some-label
spec:
  replicas: 3
  selector:
    matchLabels:
      app: some-label
  template:
    metadata:
      labels:
        app: some-label
    spec:
      containers:
        - name: some-container-name
          image: some-image:latest
          ports:
            - name: postgres
              containerPort: 5432 # this can be different from the service port!
I hope this makes clearer what I'm trying to describe :) It specifically means that the pods created by the deployment will expose port 5432, which is the postgres default, so the images can continue to use their defaults. And then the service that exposes the pods to the rest of the cluster can expose whatever port it likes, as long as it knows that it should use port 5432 (or the name of the port, if port naming is used!) to talk to the pods.
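To make the consequence concrete: with this scheme, a client in another pod would connect through the service on whatever port was configured there, while the container keeps listening on 5432. A sketch with hypothetical names (service name, namespace, and a configured service port of 5433):

psql -h some-service-name.some-namespace.svc.cluster.local -p 5433 -U postgres -d postgres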
@LanDinh thank you for the YAML examples.
Hi Alex,
I just found that the kubegres config 'port' doesn't take effect. When I change the port to 5433, it indeed changes the svc/container port, but postgres is still listening on port 5432. This is because in 'postgresql.conf' the port remains at the default value, which is 5432.
The primary/replica services are both headless, so they will not listen on port 5433. Other pods in the cluster can only connect to the primary via:
psql -h pgserver-1-0.namespace.svc.cluster.local -p 5432 -U postgres -d postgres
In conclusion, postgresql.conf needs to be modified when enforcing the port spec.
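For reference, making Postgres itself listen on the new port would mean something like the following inside the container (plain Postgres configuration, shown only as a sketch of what enforcing the port spec implies; Kubegres does not automate this today):

# Option 1: set the parameter in postgresql.conf and restart the server
port = 5433

# Option 2: set it at runtime and then restart (requires superuser)
#   psql -U postgres -c "ALTER SYSTEM SET port = 5433;"
#   pg_ctl restart -D "$PGDATA"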