Closed KarooolisZi closed 1 month ago
hey @KarooolisZi, thanks for opening an issue!
Looking at your applied sts and applied CR, one can see that you've increased the storage of the PVC claim.
mdbc:
```yaml
volumeClaimTemplates:
  - metadata:
      name: data-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 70G # <---- look at me!
      storageClassName: ebs-sc
  - metadata:
      name: logs-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10G
```
vs
sts:
```yaml
volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: data-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50G # <---- look at me!
      storageClassName: ebs-sc
      volumeMode: Filesystem
    status:
      phase: Pending
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: logs-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10G
      storageClassName: ebs-sc
      volumeMode: Filesystem
```
I suggest you update your claim to match the one in the sts, and you should be fine.
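Concretely, that means putting the data-volume request in the CR back to the size recorded in the sts. A sketch of the relevant fragment (only the data-volume entry shown, since logs-volume already matches):

```yaml
# CR volumeClaimTemplates aligned with the existing sts
volumeClaimTemplates:
  - metadata:
      name: data-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50G # back to the sts value, instead of 70G
      storageClassName: ebs-sc
```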
@nammn Thank you for such a fast response. I might have missed this one. I can see my PVs are now 70GB, yet the sts config has 50GB. I remember changing the storage because of an urgent need. Is there any chance to make the sts use these PVs already created by mdbc, or is the only resolution to either go back to the old size in mdbc or recreate the sts?
@KarooolisZi you can try the following to have the operator use the new PVC sizes. Please note that this is a limitation of sts: you cannot resize the storage used by an sts. Read more here: https://github.com/kubernetes/enhancements/pull/4651/commits/763b35fd9272edff361162227377c4670f79d8ce
steps:
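The concrete steps were not preserved in this thread. A commonly used sequence for this sts limitation, sketched here under the assumption that the StatefulSet is named `mongodb`, is to delete only the sts object while orphaning its pods and PVCs, then let the operator recreate it from the CR:

```shell
# Delete the StatefulSet object only; its pods and PVCs are left running
kubectl delete sts mongodb --cascade=orphan

# On its next reconcile, the operator recreates the StatefulSet from the CR.
# Because the PVC names are unchanged, the existing (resized) PVCs re-attach.
```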
@nammn Yes, I am aware of the limitation, just thinking about a workaround. Would these steps guarantee the sts works as previously and attaches the required PVCs?
yes, since the PVCs have the same names, the sts will re-attach them, all assuming you didn't change the name.
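For reference, a StatefulSet names its PVCs `<claim-template-name>-<sts-name>-<ordinal>`, which is why keeping the template names (`data-volume`, `logs-volume`) unchanged lets the recreated sts find the old volumes. A small sketch, assuming an sts named `mongodb` with 2 replicas:

```shell
# Derive the PVC names a StatefulSet named "mongodb" binds for
# claim templates "data-volume" and "logs-volume" with 2 replicas.
sts=mongodb
pvcs=""
for i in 0 1; do
  for tpl in data-volume logs-volume; do
    pvcs="${pvcs}${tpl}-${sts}-${i} "
  done
done
echo "$pvcs"
```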
Closing this one, since the issue itself was a misconfiguration.
Yes, a great misconfiguration. I have definitely spent too much time on it. Thank you for your patience and swift response.
@nammn After doing so, my sts shows as 0/0 with replicas set to 2 and does not recreate pods. I have one pod, but somehow it is not part of the sts.
It looks like adding an arbiter crashes the operator. I'm trying to run 2 members and 1 arbiter; 2 members alone work perfectly.
After getting it to work, I get 0/2 because the mongo agent is not ready after adding the arbiter. Strange behaviour.
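For context on the arbiter attempt above: the community operator exposes an `arbiters` field on the CR. A hedged sketch of the topology being attempted, with the resource name assumed:

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb # assumed name, for illustration
spec:
  type: ReplicaSet
  members: 2
  arbiters: 1 # adds one arbiter, for 3 voting members total
```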
@KarooolisZi please open a new issue and provide the required information as given in the issue template
Hello,
I was trying to adjust the replica count in the MongoDB CR YAML manifest. The only change to the current CR was changing 'replicas' from 2 to 3.
That is strange because, according to the operator, I should be able to do this. My statefulset is not scaling. I checked the last-applied configuration and the last successful apply, and there were no differences. I also compared these configurations to my VCS configuration; no changes were detected.
Using MongoDB community version 6.0.4 and operator version 0.7.8.
I tried removing the annotations on the existing CRD (which show the apply as 'failed') and reapplying, with no result. Nothing changed, so the previous setup is still online and working. However, the operator keeps reporting errors without any apparent reason, even after applying the same configuration with 2 'replicas' again.
I have another environment with the same operator and MongoDB versions. There I was able to add a replica and even an arbiter to the spec; that was also the only change made to the MongoDB CR.
The error I get:
```
ERROR controllers/mongodb_status_options.go:104 Error deploying MongoDB ReplicaSet: error creating/updating StatefulSet: error creating/updating StatefulSet: StatefulSet.apps "mongodb" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
```
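This Forbidden error means the StatefulSet the operator generated differs from the live one in an immutable spec field (for example the `volumeClaimTemplates` storage discussed earlier in this thread). One way to compare the two, sketched under the assumption that the resource is named `mongodb` and that any storage override in the CR lives under `spec.statefulSet.spec`:

```shell
# Storage requested by the live StatefulSet's claim templates
kubectl get sts mongodb -o jsonpath='{.spec.volumeClaimTemplates[*].spec.resources.requests.storage}'

# Storage requested by the CR's StatefulSet override (path assumed)
kubectl get mdbc mongodb -o jsonpath='{.spec.statefulSet.spec.volumeClaimTemplates[*].spec.resources.requests.storage}'
```

If the two values differ, the operator's update will keep hitting the immutable-field validation regardless of the `replicas` change.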
What did you do to encounter the bug? Steps to reproduce the behavior:
1. Change `spec.members: 2` to `spec.members: 3`
2. `kubectl apply -f database.yaml`
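For illustration, the change amounts to a one-field edit on the CR. A minimal sketch, with the resource name assumed:

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb # assumed name, for illustration
spec:
  type: ReplicaSet
  version: "6.0.4"
  members: 3 # was 2
```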
What did you expect? I expected the operator to add an additional member to the existing MongoDB cluster in the statefulset, making the member count 3 instead of the existing 2.
What happened instead? The MongoDB statefulset still had 2 members, and the operator threw the error pasted in the description.
Operator Information
Kubernetes Cluster Information
If possible, please include:
```
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:227
2024-09-04T07:11:25.133Z INFO controllers/replica_set_controller.go:137 Reconciling MongoDB {"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.134Z DEBUG controllers/replica_set_controller.go:139 Validating MongoDB.Spec {"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.134Z DEBUG controllers/replica_set_controller.go:148 Ensuring the service exists {"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.134Z DEBUG agent/replica_set_port_manager.go:122 No port change required {"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.142Z INFO controllers/replica_set_controller.go:463 Create/Update operation succeeded {"ReplicaSet": "mongodb-<NAME>/mongodb", "operation": "updated"}
2024-09-04T07:11:25.142Z DEBUG controllers/replica_set_controller.go:409 Scaling up the ReplicaSet, the StatefulSet must be updated first {"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.142Z INFO controllers/replica_set_controller.go:330 Creating/Updating StatefulSet {"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.151Z ERROR controllers/mongodb_status_options.go:104 Error deploying MongoDB ReplicaSet: error creating/updating StatefulSet: error creating/updating StatefulSet: StatefulSet.apps "mongodb" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
github.com/mongodb/mongodb-kubernetes-operator/controllers.messageOption.ApplyOption
    /workspace/controllers/mongodb_status_options.go:104
github.com/mongodb/mongodb-kubernetes-operator/pkg/util/status.Update
    /workspace/pkg/util/status/status.go:25
github.com/mongodb/mongodb-kubernetes-operator/controllers.ReplicaSetReconciler.Reconcile
    /workspace/controllers/replica_set_controller.go:200
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:227
```
mongo-<>
. For instance:
```
❯ k get mdbc
NAME    PHASE     VERSION
mongo   Running   4.4.0
```
kubectl get sts -oyaml:
```yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    creationTimestamp: "2024-01-03T07:47:03Z"
    generation: 31
    labels:
      app: mongodb-
    name: mongodb
    namespace: mongodb-
    ownerReferences:
    - apiVersion: mongodbcommunity.mongodb.com/v1
      blockOwnerDeletion: true
      controller: true
      kind: MongoDBCommunity
      name: mongodb
      uid:
    resourceVersion: ""
    uid:
  spec:
    persistentVolumeClaimRetentionPolicy:
      whenDeleted: Retain
      whenScaled: Retain
    podManagementPolicy: OrderedReady
    replicas: 2
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: mongodb-
    serviceName: mongodb-
    template:
      metadata:
        annotations:
          kubectl.kubernetes.io/restartedAt: "2024-09-02T08:55:35Z"
        creationTimestamp: null
        labels:
          app: mongodb-
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
```
The container's postStart hook script from the same StatefulSet, with its comment markers restored:
```shell
# run post-start hook to handle version changes
/hooks/version-upgrade

# wait for config and keyfile to be created by the agent
while ! [ -f /data/automation-mongod.conf -a -f /var/lib/mongodb-mms-automation/authentication/keyfile ]; do sleep 3 ; done ; sleep 2 ;

# start mongod with this configuration
exec mongod -f /data/automation-mongod.conf;
```
env:
kubectl get pods -oyaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    agent.mongodb.com/version: "6"
    kubectl.kubernetes.io/restartedAt: "2024-09-02T08:55:35Z"
  creationTimestamp: "2024-09-02T08:56:31Z"
  generateName: mongodb-
  labels:
    app: mongodb-
    apps.kubernetes.io/pod-index: "0"
    controller-revision-hash: mongodb-
    statefulset.kubernetes.io/pod-name:
  name: mongodb-0
  namespace: mongodb-
  ownerReferences:
```
The container's `command:` from the same pod, with its comment markers restored:
```shell
# run post-start hook to handle version changes
/hooks/version-upgrade

# wait for config and keyfile to be created by the agent
while ! [ -f /data/automation-mongod.conf -a -f /var/lib/mongodb-mms-automation/authentication/keyfile ]; do sleep 3 ; done ; sleep 2 ;

# start mongod with this configuration
exec mongod -f /data/automation-mongod.conf;
```
env:
```shell
kubectl exec -it mongo-0 -c mongodb-agent -- cat /var/lib/automation/config/cluster-config.json
kubectl exec -it mongo-0 -c mongodb-agent -- cat /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
kubectl exec -it mongo-0 -c mongodb-agent -- cat /var/log/mongodb-mms-automation/automation-agent-verbose.log
```