ceph / ceph-helm

Curated applications for Kubernetes
Apache License 2.0

ceph-etc configmap changes after adding, removing, or changing OSDs #49

Closed zlangi closed 6 years ago

zlangi commented 6 years ago

Is this a request for help?: No

Is this a BUG REPORT or FEATURE REQUEST? (choose one): It's a bug report

Version of Helm and Kubernetes: helm version 2.8.0

kubernetes version: 1.9.2

Which chart: ceph-helm

What happened: After the initial deployment, I added a new disk to ceph-overrides.yaml and ran: helm upgrade ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml. This added the new disk, but it also regenerated the ceph-etc configmap, which changed the cluster fsid. If the machine running the OSD containers is rebooted, the new OSD containers come up with the new fsid while the mons are still running with the old one, so the OSD containers go into CrashLoopBackOff.
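The likely mechanism (an assumption, not confirmed from the chart source): the chart generates the fsid at template-render time, e.g. via Helm's uuidv4 function, so every helm upgrade that re-renders ceph-etc produces a fresh fsid unless one is pinned in the overrides. A minimal Python sketch of that failure mode, with render_config standing in for the template logic:

```python
import uuid

def render_config(overrides):
    """Toy model of rendering the ceph-etc config from override values."""
    # Hypothetical behavior: if no fsid is pinned in the overrides, a fresh
    # UUID is generated on every render (analogous to Helm's uuidv4).
    fsid = overrides.get("fsid") or str(uuid.uuid4())
    return {"global": {"fsid": fsid}}

# Without a pinned fsid, two consecutive renders disagree:
print(render_config({}) == render_config({}))          # False

# Pinning the fsid in the overrides makes renders stable:
pinned = {"fsid": "12ead961-8c1d-4211-b943-ac8682edca39"}
print(render_config(pinned) == render_config(pinned))  # True
```

This is why pinning the fsid in ceph-overrides.yaml (as described below in this thread) stabilizes the configmap across upgrades.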

What you expected to happen: Configmap should have stayed the same.

How to reproduce it (as minimally and precisely as possible): Finish the deployment, add a new disk or remap an existing OSD to a new disk in ceph-overrides.yaml, then run: helm upgrade ceph local/ceph --namespace=ceph -f ~/ceph-overrides.yaml

Anything else we need to know:

zlangi commented 6 years ago

Ultimately we solved this by pinning the fsid in the ceph-overrides.yaml file.

It looks like this for example:

conf:
  ceph:
    config:
      global:
        fsid: 12ead961-8c1d-4211-b943-ac8682edca39

Doing this stopped the fsid from changing.
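For a cluster that is already running, the value to pin should be the fsid the mons are actually using. A sketch of how one might read it, assuming kubectl access to the ceph namespace (the pod name is a placeholder):

```shell
# Ask a running mon for the cluster fsid:
kubectl -n ceph exec -it <mon-pod> -- ceph fsid

# Or pull it out of the existing ceph-etc configmap:
kubectl -n ceph get configmap ceph-etc -o yaml | grep fsid

# Put that value under conf.ceph.config.global.fsid in
# ceph-overrides.yaml before the next helm upgrade.
```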