Closed Sjnahak closed 4 years ago
Hi @Sjnahak,
The gerrit-gerrit-stateful-set-0 volume uses a RWO (ReadWriteOnce) volume. It is used to persist most of the Gerrit site (except for the repositories). Thus it uses the default storageClass, or the one you configured in its stead. The persistent volume claim looks for a storageClass/volume with the AccessMode ReadWriteOnce; this is hardcoded in the chart. Since you want to use an NFS volume, it does not find a volume with the spec it is looking for.
To make it work, open the ./helm-charts/gerrit/templates/gerrit.stateful-set.yaml file and change the AccessMode in line 179 to ReadWriteMany. Then it should work. However, other pods might then mount the same volume, which can cause unwanted side effects like concurrent write access.
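As a sketch, that change lands in the volumeClaimTemplates section of the StatefulSet template. The fields below follow standard Kubernetes PVC syntax, but the claim name and storage size are placeholders, and the actual chart template wraps these values in Helm templating:

```yaml
# Sketch of the relevant volumeClaimTemplates entry in
# gerrit.stateful-set.yaml (claim name and size are placeholders;
# the real file uses Helm templating around these values).
volumeClaimTemplates:
  - metadata:
      name: gerrit-site  # placeholder name
    spec:
      accessModes:
        - ReadWriteMany  # changed from ReadWriteOnce
      resources:
        requests:
          storage: 10Gi  # placeholder size
```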
Another solution might be to create a second StorageClass for the NFS provisioner that provides volumes satisfying the AccessMode ReadWriteOnce. That should also work, I think.
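For illustration, such a second StorageClass pointing at the NFS provisioner could look roughly like the sketch below. The class name is arbitrary and the provisioner string is an assumption — it must match the name your nfs-client-provisioner registered when it was deployed:

```yaml
# Hypothetical StorageClass backed by the NFS client provisioner.
# The 'provisioner' value depends on your provisioner deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-rwo  # placeholder name
provisioner: cluster.local/nfs-client-provisioner  # assumption; check your installation
reclaimPolicy: Delete
```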
Since you stated that you also tried an EBS volume, the best solution would be to use the equivalent in your infrastructure. Which hyperscaler/infrastructure are you using? Cinder on OpenStack or GCEPersistentDisk on GCP would work as well, as would any equivalent block storage type supported by Kubernetes [1].
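As a hedged example of the block-storage route, a statically provisioned EBS-backed PersistentVolume that can satisfy a ReadWriteOnce claim might look like this (the volume ID, name, and size are placeholders; dynamic provisioning through a block-storage StorageClass works just as well):

```yaml
# Illustrative static PersistentVolume backed by an AWS EBS volume.
# Replace the placeholders with values from your infrastructure.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gerrit-site-pv  # placeholder name
spec:
  capacity:
    storage: 10Gi  # placeholder size
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: "<ebs-volume-id>"  # placeholder
    fsType: ext4
```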
The chart is designed to run Gerrit 3.1 at the moment. This version of Gerrit does not use a relational database anymore; it stores its data in git (NoteDB).
I hope that helps, Thomas
PS: Please consider asking your questions on the Gerrit mailing list: repo-discuss@googlegroups.com. That is the main support platform being used and will reach more people who may have similar questions. Since GitHub is just a mirror of the repository (development happens on Gerrit), a lot of people won't check here.
[1] https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
@thomasdraebing thank you so much for the feedback, will give it a try.
Hi Team, thanks for updating the repo to adapt to Helm version 3.
I was testing the Gerrit master setup yesterday; please find the setup details below.
As per the prerequisites, Ingress, NFS, and Helm are required. Since an NFS server had already been created/provided by the NFS team, I created an NFS client provisioner using:
I passed these NFS details into values.yaml, but the deployment fails with the error below. There were two storage classes defined in values.yaml.
values.yaml used by me:
Deployment step:
Describe pod output:
What could be the missing piece? And does this chart use the default H2 database?