The strange thing is that the one claim, data-volume-system-mongodb-0, is now associated with two volumes, although the access mode is ReadWriteOnce. But the preexisting volume, the one that I would like the mongodb to bind to, is "Available", while only the newly created volume is "Bound".
Ok, next try: I created the PVCs manually. That prevented the two new volumes from being created, but the mongodb pod did not start because it thought the PVs were in use by someone else.
Ok, I have a solution. It has nothing to do with the operator, so sorry again for this issue. It works simply by selecting the volume with a matchLabels selector:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-data-volume
  labels:
    app: moderetic
    type: mongodb
    role: data
spec:
  storageClassName: hcloud-volumes
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  csi:
    volumeHandle: "11099996"
    driver: csi.hetzner.cloud
    fsType: ext4
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: system-mongodb
  labels:
    app: moderetic
    type: mongodb
spec:
  members: 1
  type: ReplicaSet
  version: "4.2.6"
  logLevel: INFO
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: moderetic
      db: moderetic
      passwordSecretRef:
        name: mongodb-secret
      roles:
        - name: clusterAdmin
          db: moderetic
        - name: userAdminAnyDatabase
          db: moderetic
      scramCredentialsSecretName: moderetic-scram-secret
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
  persistent: true
  statefulSet:
    spec:
      template:
        spec:
          containers:
            - name: mongod
              resources:
                requests:
                  cpu: 1
                  memory: 1Gi
                limits:
                  memory: 8Gi
            - name: mongodb-agent
              resources:
                requests:
                  memory: 50Mi
                limits:
                  cpu: 500m
                  memory: 256Mi
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: hcloud-volumes
            resources:
              requests:
                storage: 10Gi
            selector:
              matchLabels:
                app: moderetic
                type: mongodb
                role: data
        - metadata:
            name: logs-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: hcloud-volumes
            resources:
              requests:
                storage: 10Gi
            selector:
              matchLabels:
                app: moderetic
                type: mongodb
                role: logs
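For completeness: the mongodb-secret referenced by passwordSecretRef above is not created by the operator, you have to provide it yourself. A minimal sketch of such a Secret, assuming the default password key and using a placeholder value, could look like this:

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
stringData:
  # assumed key name; the operator reads the user password from the "password" key
  password: change-me   # placeholder value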
I'm glad you got everything working @tobias-neubert !
@tobias-neubert I also tried to create the resource with preexisting persistent volumes, but I see the following error for the mongodb-agent container: Error creating /data/automation-mongod.conf : open /data/automation-mongod.conf: permission denied. Have you done any additional steps to set permissions?
In regard to the persistent volumes and claims? No. But of course I followed all the other steps described in the documentation, for example creating the necessary roles and role bindings. This is especially important if you, like I did, create the MongoDB in a different namespace than the mongodb-kubernetes-operator.
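To illustrate the namespace point: the operator's ServiceAccount must be allowed to act in the namespace where the MongoDBCommunity resource is created. A rough sketch of the kind of RoleBinding involved, with placeholder names and namespaces (the actual Role rules come from the role.yaml that ships with the operator), might look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongodb-kubernetes-operator   # placeholder name
  namespace: mongodb                  # placeholder: namespace of the MongoDBCommunity resource
subjects:
  - kind: ServiceAccount
    name: mongodb-kubernetes-operator # assumed name of the operator's ServiceAccount
    namespace: operators              # placeholder: namespace where the operator runs
roleRef:
  kind: Role
  name: mongodb-kubernetes-operator   # Role created from the operator's role.yaml
  apiGroup: rbac.authorization.k8s.io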
Moreover, it seems to me that persistent volumes are one of the borders where we are forced to leave Kubernetes' more or less clean abstraction. You always have to know exactly how your underlying storage works.
For example: if I see a permission denied error in a Docker context, I always have to ask whether a Windows folder is involved somewhere. Maybe in a development environment you try to mount a Windows folder into a container. If that container tries to restrict file permissions, or expects restricted file permissions, it will not work because of the Windows filesystem.
@tobias-neubert Thanks, I found it was an issue with my PV, it was not configured properly. It's working now :-)
Sorry for bumping this thread @tobias-neubert, but I'm struggling with something similar, trying to mount pre-existing volumes on a MongoDBCommunity deployment.
But did you consider defining the explicit volume using the volumeName field?
statefulSet:
  spec:
    volumeClaimTemplates:
      - metadata:
          name: data-volume # data-volume-mongodb-0
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: "gp3"
          resources:
            requests:
              storage: 10Gi
          volumeName: pv-data-mongodb
      - metadata:
          name: logs-volume # logs-volume-mongodb-0
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: "gp3"
          resources:
            requests:
              storage: 2Gi
          volumeName: pv-logs-mongodb
In general - from my understanding - the statefulSet content is a "regular" StatefulSet definition, so I assume that one can follow the general principles when customizing it.

Originally posted by @LMolr in https://github.com/mongodb/mongodb-kubernetes-operator/issues/460#issuecomment-836243821
Ok, I am sorry, I did not want to open a new issue. This is a response to a comment on my last issue. But now it's here and I don't know how to delete it. So, anyway...
I want to reuse preexisting volumes for my MongoDB after recreating the Kubernetes cluster in my dev environment. As far as I understand, I can therefore simply bind the volume and the volume claim statically by referencing the claim in the volume definition, like so:
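Roughly like this, as a sketch with a placeholder volumeHandle and namespace (the claim name is the one the StatefulSet generates for the logs template):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-logs-volume
spec:
  storageClassName: hcloud-volumes
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    # the claim that the StatefulSet's logs-volume template will create
    name: logs-volume-system-mongodb-0
    namespace: default            # placeholder namespace
  csi:
    driver: csi.hetzner.cloud
    volumeHandle: "00000000"      # placeholder Hetzner volume ID
    fsType: ext4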
This is a volume at hetzner.com, and the claimRef references the logs claim of the following StatefulSet:
Now, I can see how the volume claim is created and bound to the volume. But it stays in the state Pending until the mongodb pod that uses the claim is created. And then two new volumes are created instead of the preexisting ones being used. So, how can I achieve this?
Any help is welcome, Tobias