~~Looking at the chart, I'm not sure if it's even supposed to be able to do that. The usage of a PV kind of makes it impossible, as a PV can only be mounted on a single machine afaik. If you want to work around that, you'll have to create a custom image with all your required plugins and themes included and use an object store as a backend. I think there are efforts to make this easy on Kubernetes, but a quick Google search did not turn up anything concrete.~~
Thanks for correcting me, @krancour! We mostly work on AWS and it's not possible there, so I kind of generalised from my experience there.
> The usage of a PV kind of makes it impossible, as a PV can only be mounted on a single machine afaik.
This isn't so. Firstly, PVs never get mounted directly. The thing that gets mounted is a PVC. PVCs can operate in different access modes and the different access modes that are supported vary by storage class.
Here's what happens in this chart by default. There is a PVC (PersistentVolumeClaim) that references the default StorageClass. Since it references a kind of storage instead of a specific, existing volume, this is a dynamic provisioning scenario: a new PV (volume) is created exclusively for this PVC, with the details spelled out by the default StorageClass. The default StorageClass in GKE (and Azure, and probably elsewhere) happens to only support being mounted once.
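To make that concrete, the claim the chart creates looks roughly like this. This is a simplified sketch, not the chart's exact template; the name depends on the release and the size is illustrative:

```yaml
# Simplified sketch of the chart's default claim.
# No storageClassName is set, so the cluster's default StorageClass applies
# and a brand-new PV is dynamically provisioned just for this claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-release-wordpress      # illustrative; the real name comes from the release
spec:
  accessModes:
    - ReadWriteOnce               # the chart's default persistence.accessMode
  resources:
    requests:
      storage: 10Gi               # illustrative size
```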
Let's suppose for a moment that a different StorageClass were available in your cluster that is backed by (for instance) a file share (e.g. via samba). This sort of StorageClass would support multiple mounts. Or more accurately, PVCs that claim a PV (volume) of that StorageClass can be mounted to multiple pods simultaneously.
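On Azure, for example, such a class can be defined with the in-tree azure-file provisioner; the class name and parameters below are just an illustration:

```yaml
# Example StorageClass backed by an SMB file share (Azure Files).
# Volumes provisioned from this class can be mounted by multiple pods at once.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
```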
Now... on top of all that, there is also the PVC's accessMode to consider. The default access mode in this chart is ReadWriteOnce. Even if the underlying storage supports multiple mounts, that mode forbids it.
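Putting the two together, a claim that multiple WordPress pods could share would look something like this (the azurefile class is the hypothetical one sketched above):

```yaml
# A claim that can be mounted read-write by many pods, provided the
# referenced StorageClass actually supports ReadWriteMany.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-shared-data     # hypothetical name
spec:
  storageClassName: azurefile     # the hypothetical class defined above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```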
So... if you want to scale WordPress to multiple pods, here's what you have to do...
$ helm install stable/wordpress ..... --set persistence.storageClass=<whatever> --set persistence.accessMode=ReadWriteMany
If your existing volume is already full of data you care about, you should find a WordPress plugin that does backup/restore to help transplant the data from the current volume to the new one.
And fwiw... I just went through all of this on Friday and can confirm it worked in Azure using an azureFile StorageClass. Off the top of my head, I'm not sure what the equivalent StorageClass is in GCP/GKE, or if they even have an equivalent, but the k8s docs on StorageClass will say.
Thanks @krancour. According to https://kubernetes.io/docs/concepts/storage/persistent-volumes/, GCEPersistentDisk can't do ReadWriteMany.
Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany |
---|---|---|---|
AWSElasticBlockStore | ✓ | - | - |
AzureFile | ✓ | ✓ | ✓ |
AzureDisk | ✓ | - | - |
CephFS | ✓ | ✓ | ✓ |
Cinder | ✓ | - | - |
FC | ✓ | ✓ | - |
FlexVolume | ✓ | ✓ | - |
Flocker | ✓ | - | - |
GCEPersistentDisk | ✓ | ✓ | - |
Glusterfs | ✓ | ✓ | ✓ |
HostPath | ✓ | - | - |
iSCSI | ✓ | ✓ | - |
PhotonPersistentDisk | ✓ | - | - |
Quobyte | ✓ | ✓ | ✓ |
NFS | ✓ | ✓ | ✓ |
RBD | ✓ | ✓ | - |
VsphereVolume | ✓ | - | - (works when pods are collocated) |
PortworxVolume | ✓ | - | ✓ |
ScaleIO | ✓ | ✓ | - |
StorageOS | ✓ | - | - |
4 solutions spring to mind:

1. Spin up and manage an NFS server (a VM) to provide shared storage.
2. Spin up a GlusterFS (or similar) cluster and use that as the storage backend.
3. Move to Azure and use an AzureFile StorageClass, which supports ReadWriteMany.
4. Split the traffic: send wp-admin requests to a single read-write pod and serve everything else from read-only replicas.
The last solution could work. I could set up some rules in the load balancer: if the request is for wp-admin, route it to this one pod; otherwise, route it to any pod (I hope this is possible with k8s LBs...). Theoretically this would be fine for 90% of the websites we develop (corporate stuff).
But I suppose this wouldn't work with something like a marketplace? Those have a front end that writes data, and if that traffic ends up on the read-only pod, then...
Has anyone of you tried the NFS / GlusterFS solution? Is it stable for production use? Can the cheapest VM on GCE sustain it, or do I need something bigger to get good performance?
This kind of defeats the purpose, though. I'm trying to move to the cloud and K8s precisely to avoid having to set up file servers, backup servers, and so on. From what I've seen so far, K8s + GKE + Helm seemed mature enough by now to do that without having to worry about DR...
And now I'm considering having to spin up an NFS server.
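For reference, consuming a hand-rolled NFS server from Kubernetes would look roughly like the sketch below; the server address and export path are placeholders, and whether the chart can bind to a pre-existing claim depends on its values:

```yaml
# Hand-created PV pointing at an existing NFS export (address and path are
# placeholders), plus a matching claim. NFS supports ReadWriteMany.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5              # placeholder NFS server address
    path: /exports/wordpress      # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-nfs
spec:
  storageClassName: ""            # keep the default StorageClass from provisioning a new PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```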
Is there any other solution / workaround? I don't want to have to stick to my dedicated servers with their low availability.
1 and 2: I wouldn't want to have to start firing up VMs either.
3: I'm biased. I work for Azure. But Azure Container Service is a fine place to run Kubernetes.
4: What you're talking about doing would require some advanced knowledge of Ingress resources, and you would also need an ingress controller. But before trying to figure that out... I don't quite understand how you imagine scenario 4 working. Splitting admin traffic and normal visitor traffic to two different installations of WordPress won't in any way resolve the need for one common volume to be mounted to multiple pods.
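For what it's worth, the routing part alone would look something like the sketch below (extensions/v1beta1 was the Ingress API at the time; the host and service names are hypothetical), but note that it does nothing to solve the shared-volume problem:

```yaml
# Path-based routing only: /wp-admin to a single read-write deployment's
# service, everything else to the scaled-out read-only service.
# This does NOT address shared storage between the two.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress-split
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /wp-admin
            backend:
              serviceName: wordpress-admin      # hypothetical service
              servicePort: 80
          - path: /
            backend:
              serviceName: wordpress-frontend   # hypothetical service
              servicePort: 80
```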
@krancour
Re: 4, GCEPersistentDisk supports ReadOnlyMany :-)
I don't know how Azure would perform, though... And if it doesn't perform the same, putting it in Europe / Asia / Sydney / RSA will end up costing us too much... Unless you have a better biased solution? :-)
Still re: 4...
> GCEPersistentDisk support ReadOnlyMany
And if that's the access mode on the PVC, how do you also mount that same PVC to the one node that's handling site admin traffic?
I'm not trying to steer you in any specific direction, except away from no. 4, because I can foresee it not working out.
Yep, saw that too. I went ahead and spun up a standard VM for now, couldn't hold off on this deployment any longer.
Will revisit this when persistent storage support gets better across platforms and the other thread on connecting to an existing external DB gets merged.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle stale
Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:

$ helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

K8s: 1.8.4-gke.0
Which chart: stable/wordpress
What happened: Tried to set the replica count of the -wordpress workload to 2 using the following command:

$ kubectl scale --replicas 2 deployments tinseled-vulture-wordpress
What you expected to happen: The wordpress service gets replicated to 2 pods instead of the original one
How to reproduce it (as minimally and precisely as possible):

./get_helm.sh
helm init
helm install stable/wordpress
kubectl scale --replicas 2 deployments tinseled-vulture-wordpress

Of course, you need to replace tinseled-vulture-wordpress with the randomly generated release name. When you then go into Workloads, click on the wordpress workload, and scroll to the bottom to see the pods, the newly created pod gets stuck in "ContainerCreating" status.
And when you open it and go to "events" it says "Multi-Attach error for volume "pvc-7da84c87-ddc2-11e7-8e49-42010a80014d" Volume is already exclusively attached to one node and can't be attached to another" with reason "FailedAttachVolume"
And then "Unable to mount volumes for pod "tinseled-vulture-wordpress-5d9cb55b7-dzgpq_default(03ad6046-ddc3-11e7-8e49-42010a80014d)": timeout expired waiting for volumes to attach/mount for pod "default"/"tinseled-vulture-wordpress-5d9cb55b7-dzgpq". list of unattached/unmounted volumes=[wordpress-data]" with reason "FailedMount"
Anything else we need to know: I haven't tested this on other platforms
I did, however, try different scenarios (renaming things, creating disk claims in advance, etc.), and nothing works...
I also tried with k8s version 1.7.8-gke.0 with the same results...