Closed: domenicbove closed this issue 10 months ago

Hi! I want to make sure my PVCs use a StorageClass with allowVolumeExpansion, so I've added Helm values that set the storage class for both the replica set and the config servers. I can see the pvcTemplate ending up in the corresponding StatefulSets, but unfortunately the cfg PVCs are still created with the EKS default storage class. I've deleted and redeployed a few times. Any idea why that would be?
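(For context, a StorageClass with expansion enabled looks roughly like the sketch below. It assumes the EBS CSI driver, since the cluster is on EKS; the name kafka matches the class referenced later in the thread, and the gp3 volume type is illustrative.)

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kafka
provisioner: ebs.csi.aws.com   # assumes the EBS CSI driver on EKS
parameters:
  type: gp3                    # illustrative volume type
allowVolumeExpansion: true     # the property the PVCs need in order to be resizable
```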
Hey @domenicbove,
thanks for bringing this up. I tried and was not able to reproduce the issue.
I did the following:
First, I tried it through regular YAML manifests: I changed the storageClass for both replicasets (L206) and configservers (L410) in the default cr.yaml, applied it, and the storage class was changed.
I tried with Helm as well, modifying the default values.yaml: L258 for the replicaset and L371 for configrs.
As a result I got the desired storage class for both.
Can you please share your values.yaml with me?
You can also check whether the change was applied in the psmdb object by running kubectl get psmdb YOURDB -o yaml and looking for the corresponding section under configsvrReplSet.
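For example, a quick way to pull out just that field (a sketch; the field path follows the CR structure shown in the reply below, with YOURDB replaced by this thread's cluster name):

```bash
kubectl get psmdb mongodb -o jsonpath='{.spec.sharding.configsvrReplSet.volumeSpec.persistentVolumeClaim.storageClassName}'
```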
Hi @spron-in, thanks for getting back to me. Here's my CR:
% kubectl get psmdb mongodb -o yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
annotations:
creationTimestamp: "2023-11-08T05:32:26Z"
finalizers:
- delete-psmdb-pods-in-order
generation: 3
labels:
app.kubernetes.io/instance: mongodb
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: mongodb
app.kubernetes.io/version: 1.15.0
argocd.argoproj.io/instance: percona-mongodb
helm.sh/chart: psmdb-db-1.15.0
name: mongodb
namespace: percona
resourceVersion: "265248927"
uid: 3a1cae19-61de-4f2f-8fbd-355bb665bb79
spec:
backup:
enabled: true
image: percona/percona-backup-mongodb:2.3.0
pitr:
enabled: false
serviceAccountName: percona-server-mongodb-operator
crVersion: 1.15.0
image: percona/percona-server-mongodb:6.0.9-7
imagePullPolicy: Always
multiCluster:
enabled: false
pause: false
pmm:
enabled: false
image: percona/pmm-client:2.39.0
serverHost: monitoring-service
replsets:
- name: rs0
resources:
limits:
cpu: 300m
memory: 0.5G
requests:
cpu: 300m
memory: 0.5G
sidecars:
- args:
- --discovering-mode
- --compatible-mode
- --collect-all
- --log.level=debug
- --mongodb.uri=$(MONGODB_URI)
env:
- name: EXPORTER_USER
valueFrom:
secretKeyRef:
key: MONGODB_CLUSTER_MONITOR_USER
name: internal-mongodb-users
- name: EXPORTER_PASS
valueFrom:
secretKeyRef:
key: MONGODB_CLUSTER_MONITOR_PASSWORD
name: internal-mongodb-users
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: MONGODB_URI
value: mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_NAME)
image: percona/mongodb_exporter:0.39
name: metrics
size: 3
volumeSpec:
persistentVolumeClaim:
resources:
requests:
storage: 3Gi
storageClassName: kafka
secrets:
users: mongodb-secrets
sharding:
balancer:
enabled: true
configsvrReplSet:
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
expose:
enabled: false
exposeType: ClusterIP
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: 300m
memory: 0.5G
requests:
cpu: 300m
memory: 0.5G
sidecars:
- args:
- --discovering-mode
- --compatible-mode
- --collect-all
- --log.level=debug
- --mongodb.uri=$(MONGODB_URI)
env:
- name: EXPORTER_USER
valueFrom:
secretKeyRef:
key: MONGODB_CLUSTER_MONITOR_USER
name: internal-mongodb-users
- name: EXPORTER_PASS
valueFrom:
secretKeyRef:
key: MONGODB_CLUSTER_MONITOR_PASSWORD
name: internal-mongodb-users
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: MONGODB_URI
value: mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_NAME)
image: percona/mongodb_exporter:0.39
name: metrics
size: 3
volumeSpec:
persistentVolumeClaim:
resources:
requests:
storage: 3Gi
storageClassName: kafka
enabled: true
mongos:
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
expose:
exposeType: ClusterIP
podDisruptionBudget:
maxUnavailable: 1
resources:
limits:
cpu: 300m
memory: 0.5G
requests:
cpu: 300m
memory: 0.5G
size: 2
unmanaged: false
updateStrategy: SmartUpdate
upgradeOptions:
apply: disabled
schedule: 0 2 * * *
setFCV: false
versionServiceEndpoint: https://check.percona.com
status:
conditions:
- lastTransitionTime: "2023-11-08T20:48:10Z"
message: 'rs0: ready'
reason: RSReady
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T20:48:10Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T20:48:16Z"
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T23:26:31Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T23:27:11Z"
message: 'cfg: ready'
reason: RSReady
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T23:27:11Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T23:27:50Z"
message: 'cfg: ready'
reason: RSReady
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T23:27:50Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T23:28:54Z"
message: 'cfg: ready'
reason: RSReady
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T23:28:54Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T23:29:40Z"
message: 'rs0: ready'
reason: RSReady
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T23:29:40Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T23:30:23Z"
message: 'rs0: ready'
reason: RSReady
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T23:30:23Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T23:30:57Z"
message: 'rs0: ready'
reason: RSReady
status: "True"
type: ready
- lastTransitionTime: "2023-11-08T23:30:57Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-08T23:31:02Z"
status: "True"
type: ready
- lastTransitionTime: "2023-11-10T03:46:00Z"
message: 'create pbm object: create PBM connection to mongodb-rs0-0.mongodb-rs0.percona.svc.cluster.local:27017,mongodb-rs0-1.mongodb-rs0.percona.svc.cluster.local:27017,mongodb-rs0-2.mongodb-rs0.percona.svc.cluster.local:27017:
setup a new backups db: ensure lock index on pbmLock: write exception: write
concern error: (PrimarySteppedDown) Primary stepped down while waiting for replication'
reason: ErrorReconcile
status: "True"
type: error
- lastTransitionTime: "2023-11-10T03:46:18Z"
status: "True"
type: initializing
- lastTransitionTime: "2023-11-10T03:46:36Z"
status: "True"
type: ready
host: mongodb-mongos.percona.svc.cluster.local
mongoImage: percona/percona-server-mongodb:6.0.9-7
mongoVersion: 6.0.9-7
mongos:
ready: 2
size: 2
status: ready
observedGeneration: 3
ready: 8
replsets:
cfg:
initialized: true
ready: 3
size: 3
status: ready
rs0:
added_as_shard: true
initialized: true
ready: 3
size: 3
status: ready
size: 8
state: ready
And my PVCs:
% k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongod-data-mongodb-cfg-0 Bound pvc-5c40c40a-d1ce-4636-b3e2-1e9c36755047 3Gi RWO gp2 6d21h
mongod-data-mongodb-cfg-1 Bound pvc-d3b39739-780a-4358-8487-1db81dd9eb5e 3Gi RWO gp2 6d21h
mongod-data-mongodb-cfg-2 Bound pvc-115a1fd6-bd83-4235-beea-f366aa18bb3d 3Gi RWO gp2 6d21h
mongod-data-mongodb-rs0-0 Bound pvc-e966d021-f864-4122-9c3a-deb85118a252 3Gi RWO kafka 6d21h
mongod-data-mongodb-rs0-1 Bound pvc-dc2846c4-4bf1-437d-b061-2e2feb372820 3Gi RWO kafka 6d21h
mongod-data-mongodb-rs0-2 Bound pvc-6b3868e3-d880-47b6-923f-b635278f0cea 3Gi RWO kafka 6d21h
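So the CR asks for kafka under configsvrReplSet, but the cfg PVCs came up with gp2. One way to narrow down where the class gets lost is to compare what the operator rendered into the cfg StatefulSet against what the PVCs actually recorded (a sketch; the StatefulSet name mongodb-cfg is inferred from the PVC names above):

```bash
# What the operator put into the StatefulSet's volume claim template:
kubectl get sts mongodb-cfg -o jsonpath='{.spec.volumeClaimTemplates[0].spec.storageClassName}'
# What the PVC was actually created with:
kubectl get pvc mongod-data-mongodb-cfg-0 -o jsonpath='{.spec.storageClassName}'
```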
Here's the values.yaml that created that CR:
nameOverride: mongodb
replsets:
- name: rs0
size: 3
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
pvc:
# annotations:
# volume.beta.kubernetes.io/storage-class: example-hostpath
storageClassName: kafka # kafka storage class has allow volume expansion
resources:
requests:
storage: 3Gi
sidecars:
- image: percona/mongodb_exporter:0.39
env:
- name: EXPORTER_USER
valueFrom:
secretKeyRef:
name: internal-mongodb-users
key: MONGODB_CLUSTER_MONITOR_USER
- name: EXPORTER_PASS
valueFrom:
secretKeyRef:
name: internal-mongodb-users
key: MONGODB_CLUSTER_MONITOR_PASSWORD
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: MONGODB_URI
value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_NAME)"
args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
name: metrics
sharding:
configrs:
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
pvc:
# annotations:
# volume.beta.kubernetes.io/storage-class: example-hostpath
storageClassName: kafka # kafka storage class has allow volume expansion
resources:
requests:
storage: 3Gi
sidecars:
- image: percona/mongodb_exporter:0.39
env:
- name: EXPORTER_USER
valueFrom:
secretKeyRef:
name: internal-mongodb-users
key: MONGODB_CLUSTER_MONITOR_USER
- name: EXPORTER_PASS
valueFrom:
secretKeyRef:
name: internal-mongodb-users
key: MONGODB_CLUSTER_MONITOR_PASSWORD
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: MONGODB_URI
value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_NAME)"
args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
name: metrics
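One way to see what the chart renders from these values before anything is applied is helm template (a sketch; the chart name and version are taken from the helm.sh/chart: psmdb-db-1.15.0 label in the CR above, and the repo URL is the standard Percona charts repository):

```bash
helm repo add percona https://percona.github.io/percona-helm-charts/
helm template mongodb percona/psmdb-db --version 1.15.0 -f values.yaml \
  | grep -B1 -A1 storageClassName
```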
I took your values.yaml, truncated it to the following, and applied it:
nameOverride: mongodb
replsets:
- name: rs0
size: 3
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
pvc:
# annotations:
# volume.beta.kubernetes.io/storage-class: example-hostpath
storageClassName: premium-rwo # kafka storage class has allow volume expansion
resources:
requests:
storage: 3Gi
sharding:
configrs:
resources:
limits:
cpu: "300m"
memory: "0.5G"
requests:
cpu: "300m"
memory: "0.5G"
volumeSpec:
pvc:
# annotations:
# volume.beta.kubernetes.io/storage-class: example-hostpath
storageClassName: premium-rwo # kafka storage class has allow volume expansion
resources:
requests:
storage: 3Gi
I got my PVCs with the correct storage class.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongod-data-my-db-mongodb-cfg-0 Bound pvc-928d1070-35f7-408b-a7ba-4ae389a2f17d 3Gi RWO premium-rwo 2m17s
mongod-data-my-db-mongodb-cfg-1 Bound pvc-983be77a-b6ba-47ff-9036-213d23b30e0a 3Gi RWO premium-rwo 96s
mongod-data-my-db-mongodb-cfg-2 Bound pvc-7f0b788f-eb97-4a58-af67-0bf92db1d64a 3Gi RWO premium-rwo 54s
mongod-data-my-db-mongodb-rs0-0 Bound pvc-aba09c8e-f50e-4406-acf5-0fdc4b333f41 3Gi RWO premium-rwo 2m16s
mongod-data-my-db-mongodb-rs0-1 Bound pvc-32eede90-7410-42e5-9e11-d2729661ac2d 3Gi RWO premium-rwo 93s
mongod-data-my-db-mongodb-rs0-2 Bound pvc-af0c0df6-22e8-4341-a754-9088bf0807b1 3Gi RWO premium-rwo 63s
Just to be sure: are you doing this on an existing cluster or a new one? Changing the storage class for an already running cluster will not work.
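The underlying reason is that volumeClaimTemplates on a StatefulSet are immutable, and the operator never rewrites existing PVCs, so a class change in the CR can only take effect for volumes that have not been created yet. A direct patch attempt shows the API server refusing the change (a sketch; the error text is abbreviated):

```bash
kubectl patch sts mongodb-cfg --type=json \
  -p='[{"op": "replace", "path": "/spec/volumeClaimTemplates/0/spec/storageClassName", "value": "kafka"}]'
# Rejected with something like:
#   updates to statefulset spec for fields other than 'replicas', 'template',
#   'updateStrategy' ... are forbidden
```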
@domenicbove please let me know if you still face the issue.
@domenicbove I will close this one. Please let me know if you still need help.
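For anyone landing here: one possible recreate path on an existing cluster is sketched below, assuming the release is managed directly with Helm (this cluster is synced by Argo CD per the labels above, so the equivalent would happen through a sync). Deleting the PVCs destroys the data, so this only makes sense with a backup in hand or for a fresh environment:

```bash
# Tear the cluster down (the delete-psmdb-pods-in-order finalizer removes pods in order)
helm uninstall mongodb -n percona
# Remove the cfg PVCs created with the old class (DATA LOSS: back up first)
kubectl delete pvc -n percona mongod-data-mongodb-cfg-0 mongod-data-mongodb-cfg-1 mongod-data-mongodb-cfg-2
# Reinstall with the updated values so the new StatefulSet template uses the desired class
helm install mongodb percona/psmdb-db -n percona -f values.yaml
```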