zalando / postgres-operator

Postgres operator creates and manages PostgreSQL clusters running in Kubernetes
https://postgres-operator.readthedocs.io/
MIT License
4.37k stars 980 forks

no persistent volumes available for this claim and no storage class is set #1487

Open wilkerd1 opened 3 years ago

wilkerd1 commented 3 years ago

Which image of the operator are you using? I'm using registry.opensource.zalan.do/acid/postgres-operator:v1.6.2

Where do you run it - cloud or metal? Kubernetes or OpenShift? Running it on my local RHEL 8.x VM using RKE2's flavor of K8s

Are you running Postgres Operator in production? Not in production this is a development machine

Type of issue? [Bug report, question, feature request, etc.] Not a bug; it's a request for some assistance from the community. I've tried reading the docs thoroughly and am still confused.

Deployed the 1.6.2 Zalando operator to an RKE2 cluster and got it running fairly easily using the quick start guide. The operator-ui also came up without trouble. I then tried deploying minimal-postgres-manifest.yaml and hit an issue with volumes. I'm not new to Postgres by any means, but RKE2 is new to me, as is installing and configuring Postgres with the Zalando operator, which makes it tricky to root out the source of my volume issues when deploying the minimal cluster.

I see that pgdata-acid-minimal-cluster-0 was created by checking with kubectl get pvc. When I run describe on that PVC, I get the error "no persistent volumes available for this claim and no storage class is set". Shouldn't the operator be creating and managing the PV in the minimal case? I see the manifest labeled "complete...." has a fairly elaborate volume definition; I tried to weave that into the minimal version with no luck.

On my debugging quest, I created a pv directory in /mnt for the operator to use for mounting volumes, since pgdata-acid-minimal-cluster-0 was not binding and the cluster was not deploying cleanly. Likewise, I created a corresponding PVC that binds to it. Can I tell the operator to use this instead? How and where would I configure that in Zalando? If it were a standard pod manifest I'd definitely know how to set it up, but working with an operator and CRDs has me a bit confused.
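For reference, here is roughly the static PV I created by hand. The name, hostPath, and size are my own guesses for my setup; the claimRef pre-binds the PV to the PVC the operator created, which is standard Kubernetes rather than anything operator-specific:

```yaml
# Hypothetical static PV meant to satisfy the operator-created claim.
# Name, path, and capacity are assumptions from my local VM.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgdata-minimal-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/pv
  # Pre-bind this PV to the claim the operator created:
  claimRef:
    namespace: default
    name: pgdata-acid-minimal-cluster-0
```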

Thank you for any advice here. I apologize for my noob-ness with Kubernetes-based Postgres and Zalando. It's cool so far and I'm looking forward to learning more... so I can ask better questions in the future.

FxKu commented 3 years ago

The operator relies on dynamic volume provisioning in your K8s environment. It only creates PVCs, and by default your cluster's default storage class is used. You can specify a dedicated storage class in the volume section of the manifest, though.
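For example, assuming a class named "my-storage-class" exists in your cluster (check with kubectl get storageclass), the volume section of the postgresql manifest would look roughly like this:

```yaml
# Sketch of a cluster manifest volume section with an explicit
# storage class; "my-storage-class" is a placeholder.
spec:
  volume:
    size: 1Gi
    storageClass: my-storage-class
```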

I have seen users create PVs manually for the operator, and that worked for them. There's also another open issue where the mapping of PVs to PVCs is discussed. The corresponding PR looks good, too, and we are planning to integrate it into the 1.7 release.

wilkerd1 commented 3 years ago

Thank you for the response and guidance, FxKu. I appreciate it very much. Good to know we have the option to create them manually. I started going down that route after going in circles trying to get the operator to do it dynamically.

I'm glad to hear there is a PR for creating mappings in version 1.7. It seems like a logical, intuitive feature to have in the config. That kind of seems like what's missing here, really: an intuitive way to control how the operator provisions volumes for a cluster.

Which yaml files would I look at to better understand how the dynamic provisioning occurs? I've read through the cluster manifest docs and pored over the example files. I'm still at the challenging point of trying to get kubectl create -f minimal-postgres-manifest.yaml to run cleanly and then testing connectivity with psql. If the operator's dynamic provisioning isn't working correctly, what are some methods I can use to debug the cluster config or operator config? Have others experienced the same issue running the minimal config, that you know of?

Very specific question here: looking at the kube-controller-manager log file, I see this message:

"Event occurred" object="default/pgdata-acid-minimal-cluster-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"

For dynamic provisioning purposes, letting the operator manage the volume claim and volume attachment, do I set the storage class in minimal-postgres-manifest.yaml, or is the storage class configured in the operator so it gets applied in a template-like way? When and where does the operator look at the K8s node to get a list of available volumes? I would assume that if the operator is managing this dynamically, it would already know which storage class to use and which volumes are available (perhaps even creating a default volume if none is statically set or none exist and are available)?

Sicaine commented 3 years ago

I just configured it myself, and setting a storage class other than the default one works as expected.

I'm using OpenEBS to provision storage, which means when I run kubectl get storageclass, none is set as default.

Normally on something like GKE (Google Kubernetes Engine), a default storage class is set and every service will just use it automatically. If your cluster doesn't have a default storage class set, you should talk to your cluster operator about it. Be aware, though, that it can make sense to optimize which storage class you use for a database instance; on GKE, for example, there are storage classes backed by SSDs.

It looks like this:

spec:
  teamId: "acid"
  volume:
    size: 10Gi
    storageClass: "openebs-hostpath"
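As an aside, a cluster admin can also mark one class as the cluster default using the standard Kubernetes annotation, so that manifests without an explicit storageClass still get volumes. The class name and provisioner below are just examples; substitute whatever provisioner your cluster actually runs:

```yaml
# Marking a StorageClass as the cluster default.
# Name and provisioner are illustrative, not prescriptive.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```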

Make sure to delete your statefulset and everything else, as I don't think the operator can handle a storageClass change. You also need to make sure to delete your PVC and PV.

Always check your PVC to see whether your storage class change has synced up properly.