pkdone / gke-mongodb-shards-demo

MongoDB Sharded Cluster Deployment Demo for Kubernetes on GKE
MIT License

GCE Persistent disks never used #2

Closed marekaf closed 6 years ago

marekaf commented 6 years ago

Hi, it seems the created PD is never used. `gcloud compute disks create --size 8GB --type pd-ssd pd-ssd-disk-8g-$i` creates a persistent disk, but later, when the PersistentVolumeClaim is created and the pod requests a persistent volume, Kubernetes provisions its own PersistentVolume with an autogenerated name, and the earlier PD just sits there unused. I believe the point was to have a PersistentVolume (so the data isn't lost) named like pd-ssd-disk-8g-$i, and that doesn't work. I can submit a PR if you want. Just tell me whether you prefer naming the PDs as intended, or whether it's not important and we can simply delete the loops that create the PDs. Thanks for the repo and blog, by the way! It was really helpful.
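
For context, a static PersistentVolume that wraps one of the pre-created disks by name looks roughly like the sketch below (the exact manifest in the repo may differ; the `fast` class name and XFS filesystem are taken from the reply that follows):

# Sketch of a static PV backed by one of the pre-created GCE disks.
# pdName must match a disk created by `gcloud compute disks create`.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume-8g-1
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast
  gcePersistentDisk:
    pdName: pd-ssd-disk-8g-1   # the disk from the gcloud loop (i = 1)
    fsType: xfs                # pre-formatted as XFS (assumption)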

pkdone commented 6 years ago

Hi, thanks for the notes. The scripts actually do use the explicitly created disks for the PVs/PVCs. What is actually redundant is the line `kubectl apply -f ../resources/gce-ssd-storageclass.yaml` in generate.sh, which creates a storage class. That is only needed for dynamic PVCs, but in my example I create the disks explicitly (partly to ensure XFS is used; that may not be required anymore?). I've commented out that line with an explanation (I've left it in place with a comment, though, as it's a useful reference point for people to see).

Both methods result in persistent volume claims that are tolerant of StatefulSet pods being recycled; I've just made it more explicit which method I'm using (static PVs vs. dynamic PVs). See below, which shows how the explicitly created disks are being used, via PVs, for the PVCs.

Thanks, Paul

kubectl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                STORAGECLASS   REASON   AGE
data-volume-4g-1   4Gi        RWO            Retain           Bound    default/mongo-configdb-persistent-storage-claim-mongod-configdb-2   fast                    2m
data-volume-4g-2   4Gi        RWO            Retain           Bound    default/mongo-configdb-persistent-storage-claim-mongod-configdb-0   fast                    2m
data-volume-4g-3   4Gi        RWO            Retain           Bound    default/mongo-configdb-persistent-storage-claim-mongod-configdb-1   fast                    2m
data-volume-8g-1   8Gi        RWO            Retain           Bound    default/mongo-shard1-persistent-storage-claim-mongod-shard1-0       fast                    2m
data-volume-8g-2   8Gi        RWO            Retain           Bound    default/mongo-shard2-persistent-storage-claim-mongod-shard2-2       fast                    2m
data-volume-8g-3   8Gi        RWO            Retain           Bound    default/mongo-shard2-persistent-storage-claim-mongod-shard2-0       fast                    2m
data-volume-8g-4   8Gi        RWO            Retain           Bound    default/mongo-shard3-persistent-storage-claim-mongod-shard3-1       fast                    2m
data-volume-8g-5   8Gi        RWO            Retain           Bound    default/mongo-shard1-persistent-storage-claim-mongod-shard1-1       fast                    2m
data-volume-8g-6   8Gi        RWO            Retain           Bound    default/mongo-shard2-persistent-storage-claim-mongod-shard2-1       fast                    2m
data-volume-8g-7   8Gi        RWO            Retain           Bound    default/mongo-shard3-persistent-storage-claim-mongod-shard3-2       fast                    2m
data-volume-8g-8   8Gi        RWO            Retain           Bound    default/mongo-shard1-persistent-storage-claim-mongod-shard1-2       fast                    2m
data-volume-8g-9   8Gi        RWO            Retain           Bound    default/mongo-shard3-persistent-storage-claim-mongod-shard3-0       fast                    2m
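
For reference, the storage class that the commented-out line would create looks roughly like this (a sketch, assuming the standard GCE PD provisioner and the pd-ssd disk type the scripts use elsewhere):

# Sketch of gce-ssd-storageclass.yaml; only needed for dynamic
# provisioning, which this demo does not rely on.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd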

marekaf commented 6 years ago

What Kubernetes version are you using? It works completely differently for me:

root@vps:~/gke-mongodb-shards-demo/scripts# k get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                                                               REASON    AGE
data-volume-4g-1                           4Gi        RWO           Retain          Available                                                                                 3d
data-volume-4g-2                           4Gi        RWO           Retain          Available                                                                                 3d
data-volume-4g-3                           4Gi        RWO           Retain          Available                                                                                 3d
data-volume-50g-1                          50Gi       RWO           Retain          Available                                                                                 3d
data-volume-50g-2                          50Gi       RWO           Retain          Available                                                                                 3d
data-volume-50g-3                          50Gi       RWO           Retain          Available                                                                                 3d 
data-volume-50g-4                          50Gi       RWO           Retain          Available                                                                                 3d
data-volume-50g-5                          50Gi       RWO           Retain          Available                                                                                 3d
data-volume-50g-6                          50Gi       RWO           Retain          Available                                                                                 3d
pvc-4934d8ee-d5be-11e7-a4b9-42010a8401b3   4Gi        RWO           Delete          Bound       default/mongo-configdb-persistent-storage-claim-mongod-configdb-0             3d
pvc-49a57c43-d5be-11e7-a4b9-42010a8401b3   50Gi       RWO           Delete          Bound       default/mongo-shard1-persistent-storage-claim-mongod-shard1-0                 3d
pvc-501bc8b4-d5be-11e7-a4b9-42010a8401b3   50Gi       RWO           Delete          Bound       default/mongo-shard2-persistent-storage-claim-mongod-shard2-0                 3d
pvc-5600bd65-d5be-11e7-a4b9-42010a8401b3   4Gi        RWO           Delete          Bound       default/mongo-configdb-persistent-storage-claim-mongod-configdb-1             3d
pvc-56851c5b-d5be-11e7-a4b9-42010a8401b3   50Gi       RWO           Delete          Bound       default/mongo-shard3-persistent-storage-claim-mongod-shard3-0                 3d
pvc-5bc633d4-d5be-11e7-a4b9-42010a8401b3   50Gi       RWO           Delete          Bound       default/mongo-shard2-persistent-storage-claim-mongod-shard2-1                 3d
pvc-61dd56df-d5be-11e7-a4b9-42010a8401b3   50Gi       RWO           Delete          Bound       default/mongo-shard3-persistent-storage-claim-mongod-shard3-1                 3d
pvc-63215897-d5be-11e7-a4b9-42010a8401b3   4Gi        RWO           Delete          Bound       default/mongo-configdb-persistent-storage-claim-mongod-configdb-2             3d
pvc-87c17743-d60a-11e7-a4b9-42010a8401b3   50Gi       RWO           Delete          Bound       default/mongo-shard1-persistent-storage-claim-mongod-shard1-1                 3d
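
The pvc-* volumes above were dynamically provisioned by the cluster's default storage class, while the static data-volume-* PVs sit unused ("Available"). Which path a claim takes depends on the storage class it requests; each StatefulSet's volumeClaimTemplates entry presumably looks roughly like the sketch below (the repo may instead use the older volume.beta.kubernetes.io/storage-class annotation rather than the storageClassName field):

# Hypothetical sketch of one volumeClaimTemplates entry.
# If an Available PV with storageClassName "fast" matches, the claim
# binds to it; otherwise the default class provisions a new pvc-* volume.
volumeClaimTemplates:
  - metadata:
      name: mongo-shard1-persistent-storage-claim
    spec:
      storageClassName: fast
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi
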
pkdone commented 6 years ago

I'm just using current GKE defaults.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$ gcloud container get-server-config
Fetching server config for europe-west1-b
defaultClusterVersion: 1.7.8-gke.0

Are you running the project on a Kubernetes platform that is not GKE?

marekaf commented 6 years ago

# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

# gcloud container get-server-config
Fetching server config for europe-west1-d
defaultClusterVersion: 1.7.8-gke.0

Could it be the client version?

marekaf commented 6 years ago

I upgraded my kubectl client version:

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.8-gke.0", GitCommit:"a7061d4b09b53ab4099e3b5ca3e80fb172e1b018", GitTreeState:"clean", BuildDate:"2017-10-10T18:48:45Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

and the PersistentVolumes are now created as intended. We can close this issue now :) Thanks.

pkdone commented 6 years ago

Great, thanks for testing!