Closed IJOL closed 2 years ago
Thank you for your message.
Could you please list the issues that you see as showstoppers?
I have provided an update about this project here: https://github.com/reactive-tech/kubegres/issues/126 The development of Kubegres will resume by the end of 2022.
Ok, glad to hear from you:
Well, our concerns stem from the development schedule that's posted on GitHub. We need PVC management (we routinely make real-time copies of our DB to run a test cluster for big software changes), the ability to do major version upgrades via pg_upgrade, and the ability to open our DB data files with Kubegres. We routinely attach our in-cluster data volumes to a big bare-metal PostgreSQL server for really big operations, so we use a per-project or per-customer subdirectory for data, which lets us integrate those volumes into the bare-metal server with minimal configuration...
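For context, the major-version upgrade mentioned above is the standard offline pg_upgrade run; a minimal sketch, assuming both versions' binaries and data directories are reachable from the same host (all paths are illustrative):

```shell
# Offline major-version upgrade with pg_upgrade (standard PostgreSQL
# tooling; paths are illustrative, both versions' binaries must be
# installed side by side, and the server must be stopped first).
pg_upgrade \
  --old-bindir=/usr/lib/postgresql/13/bin \
  --new-bindir=/usr/lib/postgresql/14/bin \
  --old-datadir=/var/lib/postgresql/13/data \
  --new-datadir=/var/lib/postgresql/14/data \
  --check   # dry run first; drop --check for the real upgrade
```

Doing this inside a Kubegres-managed pod is exactly what is not supported yet, which is why it is on the list of showstoppers above.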
But we will try our way with Kubegres; I will keep you informed.
Thanks, and good work!!!
Ok, we got management to allow me to give Kubegres a spin. In my first tests I was really surprised to see that you create 3 StatefulSets. Can't it be done with only one? That is supposed to be the whole point of a StatefulSet in Kubernetes. Of course there will be a rationale for this design that we need to understand... and given this design, why choose StatefulSets rather than plain Deployments?
@IJOL you should also note that you will get an ever-increasing number of StatefulSets being created (unavoidable, it seems), and the old data needs to be discarded continuously.
See #88
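For anyone hitting the accumulation issue above, a rough way to inspect and clean up the leftover StatefulSets and their PVCs by hand; the cluster name `mypostgres`, the label key, and the resource names are all assumptions, so check what your Kubegres version actually creates:

```shell
# Hypothetical sketch: list StatefulSets and PVCs belonging to a
# Kubegres cluster named "mypostgres" (the label selector is an
# assumption; inspect your resources to find the real labels).
kubectl get statefulsets,pvc -l app=mypostgres

# After confirming a StatefulSet is no longer referenced by the
# operator, delete it together with its data volume (names are
# illustrative). This discards that instance's data permanently.
kubectl delete statefulset mypostgres-3
kubectl delete pvc postgres-db-mypostgres-3-0
```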
@IJOL: I asked the same question as you about there being so many stateful sets here: #57
Ok, it seems I've overlooked much of the discussion in previous issues; sorry for the noise.
Given our use case, the most urgent issue is the hardcoded pgdata directory name. From the code, I see that putting a PGDATA env var in the spec should work, at least for the primary, if I'm reading it correctly. But in the replica-creation path it seems to be forced again, so if the init container for replicas could take its PGDATA env var from the spec when present, that should do the trick.
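To make the suggestion concrete, here is a sketch of what the spec could look like. This assumes the Kubegres CRD passes `spec.env` entries through to the Postgres containers (and, per the request above, to the replica init containers, which is exactly the part that does not work today); the cluster name, secret name, and data subdirectory are illustrative:

```yaml
# Hypothetical sketch, not a working example: a Kubegres resource
# overriding PGDATA via spec.env with a per-customer subdirectory.
apiVersion: kubegres.reactive-tech.io/v1
kind: Kubegres
metadata:
  name: mypostgres
spec:
  replicas: 3
  image: postgres:14.5
  database:
    size: 8Gi
  env:
    # The point of this issue: have both the primary and the
    # replica init containers honor this value.
    - name: PGDATA
      value: /var/lib/postgresql/data/customer-acme
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mypostgres-secret
          key: superUserPassword
```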
Another one: from time to time our pgdata volumes in k8s get full. We routinely expand volumes by updating the corresponding PVC, but we need to stop Postgres for the operation. With Kubegres it would be much better to be able to update the volume size in the spec, then kill the replicas and later the master, giving volume expansion with no downtime. But alas, updating the PVC size is not permitted, which is understandable for a simple solution...
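For reference, the manual workaround we use today can be sketched roughly as below; it assumes the StorageClass has `allowVolumeExpansion: true`, and the PVC and pod names are illustrative:

```shell
# Hypothetical sketch of expanding one Kubegres data volume by hand
# (resource names are illustrative; check yours with kubectl get pvc).
# 1. Grow the PVC backing one instance:
kubectl patch pvc postgres-db-mypostgres-1-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
# 2. Restart the pod so the filesystem resize is applied, for storage
#    drivers that do not support online expansion:
kubectl delete pod mypostgres-1-0
# 3. Repeat for each replica first, then for the primary.
```

The feature request above is essentially for Kubegres to drive this sequence itself when `database.size` changes in the spec.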
The development of Kubegres will resume in June 2022. Until then I am extremely busy with the development of a new platform. Note that Kubegres is also used by that new platform with 18 separate clusters.
I see no activity since June 2022. Will you resume work on this project, @alex-arica, or should I begin planning a move to another Postgres solution eventually?
Kubegres is working for my use-case and I am not missing any particular feature at the moment. But sooner or later there will be issues that would need to be addressed as Kubernetes evolves.
I'm using the StackGres Postgres operator right now. It's wonderful software and gives a much more streamlined PG server, with support for backups via WAL-G and a nice, useful UI. Check it out!! https://stackgres.io/
Thanks for the tip @IJOL - will check it out.
I was hoping not to have to install and maintain MinIO - I do like the simple, low-level backup options of Kubegres.
Hi All, Apologies for the delay to answer. I have provided an update about this project here: #126
A new version of Kubegres, 1.16, is available. It is compatible with Kubernetes 1.25 and was upgraded to use the latest version of the Kubebuilder framework, 3.6.0: https://github.com/reactive-tech/kubegres/releases/tag/v1.16
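For anyone upgrading, installing a tagged release is typically a single apply; the URL below follows the pattern used by the project's install instructions, so verify it against the release notes before running:

```shell
# Install or upgrade the Kubegres operator to v1.16 (URL pattern
# assumed from the project's documented install command; verify first).
kubectl apply -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.16/kubegres.yaml
```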
Just found it, and as I see it, it's a little outdated: the last commit was 4 months ago, and some of the issues we've seen are showstoppers for us...
Is this project alive and worth pursuing and trying to help with, or would we be better off redirecting our efforts elsewhere?
Sorry for being that rude, but as promising as this project is, it does not seem very ready for production use...