Closed christian-posta closed 8 years ago
The docker-gerrit image & kube app already use persistent storage (https://github.com/fabric8io/docker-gerrit#volumes, https://github.com/fabric8io/quickstarts/blob/master/apps/gerrit/pom.xml#L55-L57), which of course is not external storage such as the Google Cloud platform proposes.
We absolutely should support persistent volumes, the problem being this is dependent on what the user wants to use. Not sure how we can set this up in a generic way? Perhaps NFS is the simplest to get going - an example at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/nfs might be useful to try out.
@cmoulliard That's using a host dir which will not move with the pod if it moves hosts for whatever reason (e.g. host dies).
I've a working prototype; will push soon using PersistentVolume / PersistentVolumeClaims - just need to update the gofabric8 installer so it's easier to create a PV
FWIW the PV / PVC didn't work for me last time I tried :(
It looks like persistent volumes now work. I've tested it on OpenShift v1.3.0-alpha.2 / Kubernetes v1.3.0-alpha.1 to have Gogs repositories and DB persistent across restarts of my local dev environment.
I've configured the PV with the HostPath plugin. It's for single-node clusters only, though it provides a much better experience, as a developer at least.
For production environment, we could have a NFS server service, configured as in the Kubernetes NFS volume example.
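As a rough illustration of the production option, an NFS-backed PV along the lines of the Kubernetes NFS example might look like the following. The server address and export path here are placeholders, not values from this thread:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fabric8
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Gi
  nfs:
    # Hypothetical in-cluster NFS server service and export path
    server: nfs-server.default.svc.cluster.local
    path: /exports/fabric8
```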
We could reuse the fabric8 PV that's already created (not yet claimed by any PVC) and share the same PVC across all the F8 apps. To be able to do that (as opposed to having a separate PV/PVC pair for each app), we may need to rely on the new subPath API introduced with kubernetes/kubernetes#22575, and wait a little as it's only been merged in k8s v1.3.0-alpha.4. Each app could mount one or more sub-paths from the fabric8 PV, which would ease operations.
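To make the subPath idea concrete, here's a minimal sketch of what a container spec sharing one claim could look like. This is illustrative only: the claim name and sub-paths are assumptions, and it requires a Kubernetes version with subPath support (v1.3.0-alpha.4 or later, per the PR above):

```yaml
# Hypothetical Gogs container spec sharing the single "fabric8" PVC,
# with each mount pointing at its own sub-path inside the volume.
volumeMounts:
  - name: fabric8-data
    mountPath: /home/git
    subPath: gogs/repositories   # sub-directory within the shared PV
  - name: fabric8-data
    mountPath: /opt/gogs/data
    subPath: gogs/data
volumes:
  - name: fabric8-data
    persistentVolumeClaim:
      claimName: fabric8         # assumed name of the shared claim
```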
The Docker images for the corresponding apps should be amended to remove any VOLUME directives. Besides, it looks like the Helm fork comes from an old version of Helm, which causes validation issues with the PersistentVolume/PersistentVolumeClaim API. So we may need to rebase it (maybe once the integration into the Kubernetes organisation has stabilized) so that we can update the charts.
Let me know what you think. I'd be glad to work on it if that's the right direction 😃.
That all sounds awesome! Let's do it!
We can have all our apps (openshift templates) use the PVCs then leave it up to the installer to decide what PV implementations to use (NFS / HostPath / Gluster or whatever). We should be able to add the PV generation to gofabric8
I guess rather than having one big PV; having separate ones for gogs / jenkins / nexus etc sounds good; then folks could use different PV implementations for different things
Great! Let me work on this.
Let us know if you need any help on anything!
BTW we're in the process of migrating to the new shiny fabric8 maven plugin, which makes extending/customizing the kubernetes/openshift yaml much simpler: https://github.com/fabric8io/fabric8-maven-plugin/
you can then configure things in the pom.xml like this: https://github.com/fabric8io/fabric8-devops/blob/master/grafana/pom.xml#L72
or by adding a partial yaml file (e.g. to add an SA and volume mount) https://github.com/jstrachan/springboot-config-demo/blob/master/src/main/fabric8/deployment.yml
while adding a PVC is supported in the 2.x fabric8-maven-plugin with custom properties; you might find it a little easier to use a yaml fragment and the new maven plugin. We're hoping to migrate all of fabric8-devops over to the new plugin soon. We've got most quickstarts migrated already: https://github.com/fabric8-quickstarts
Awesome! Thanks a lot!
@astefanutti any progress on this issue? No biggie if not; only persistence is increasingly a must-have feature ;) it'd be nice to start adding PVCs to most apps
@astefanutti BTW all the apps have been migrated to the new 3.x fabric8-maven-plugin now, so it should be easy to add any PVC metadata to any of the apps by just editing/adding to the src/main/fabric8/foo-deployment.yml file.
e.g. here's the gogs one: https://github.com/fabric8io/fabric8-devops/blob/master/gogs/src/main/fabric8/gogs-deployment.yml#L28
I figure if we add PVCs to all apps that need persistence; we can then add the capability to gofabric8 to automatically create equivalent PVs for each named PVC using user defaults. e.g. if a user wants to use hostPath or NFS or whatever for all the PVs, then gofabric8 could just do that on install time. Or users could opt out and manually make the PVs.
Figure if we have named PVCs for each app (gogs, nexus, jenkins etc) we can then let folks pick the best PV for each PVC etc?
@jstrachan ah I've been working on the quickstarts lately. Let me finish quickly on these, then I should be able to work back on the PV stuff. I have persistence working for Gogs and Jenkins already on my local environment (with it enabled, it's just CI/CD for developers made in heaven; it's like having your own little CI/CD cluster for dev that you can restart at will 😄). Still need to polish things up a little and integrate with the new f8-m-p. I should be able to work on it this week if all goes well!
@astefanutti sounds awesome! Hopefully adding the necessary YAML to the src/main/fabric8/*-deployment.yml files should be fairly straightforward!
Is there a workaround for this? The changes could be a while being merged. In the meantime, if there's a simple workaround, that would be great!
@n-k a workaround is to apply the configuration manually into your cluster directly. The complexity varies depending on your setup, though for instance, if you want Gogs persistence on a single-node cluster for local development, you can create the PV with hostPath persistence, e.g.:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gogs-repositories
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Mi
  hostPath:
    path: /gogs/repositories
  persistentVolumeReclaimPolicy: Recycle
```
Then, create the PVC, e.g.:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gogs-repositories
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  volumeName: pv-gogs-repositories
status:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Mi
```
Finally, retrieve the Gogs RC configuration (`kubectl get rc gogs -oyaml > gogs.yaml`), update it and apply it (`kubectl apply -f gogs.yaml`) with the following changes:
```yaml
volumeMounts:
  - mountPath: /opt/gogs/data
    name: gogs-data
  - mountPath: /home/git
    name: gogs-repositories
volumes:
  - name: gogs-data
    persistentVolumeClaim:
      claimName: gogs-data
  - name: gogs-repositories
    persistentVolumeClaim:
      claimName: gogs-repositories
```
When I run `kubectl apply -f gogs.yaml` I get:
```
proto: no encoder for TypeMeta unversioned.TypeMeta [GetProperties]
proto: tag has too few fields: "-"
proto: no coders for struct *reflect.rtype
proto: no encoder for sec int64 [GetProperties]
proto: no encoder for nsec int32 [GetProperties]
proto: no encoder for loc *time.Location [GetProperties]
proto: no encoder for Time time.Time [GetProperties]
proto: no encoder for i resource.int64Amount [GetProperties]
proto: no encoder for d resource.infDecAmount [GetProperties]
proto: no encoder for s string [GetProperties]
proto: no encoder for Format resource.Format [GetProperties]
proto: no encoder for InitContainers []v1.Container [GetProperties]
proto: no coders for intstr.Type
proto: no encoder for Type intstr.Type [GetProperties]
proto: no encoder for InitContainerStatuses []v1.ContainerStatus [GetProperties]
The Pod "gogs-217092172-injio" is invalid.
spec: Forbidden: pod updates may not change fields other than containers[*].image or spec.activeDeadlineSeconds
```
Any thoughts on what I am doing wrong? I am running on a single-node cluster.
@chbe8475 be careful to update the RC definition, not the pod one. From the error message, it looks like you're trying to update the latter.
@astefanutti you are right, it was the pod I had. So that explains it I guess.
However, there is no RC for gogs available:

```
$ kubectl get rc gogs
Error from server: replicationcontrollers "gogs" not found
```
When running `kubectl get rc` I get:

```
NAME      DESIRED   CURRENT   AGE
fabric8   1         1         3h
```

So there is only one RC, and that one is for fabric8.
I used `gofabric8 deploy` to deploy.
I had a look in the other namespaces as well, but no rc for gogs.
On plain Kubernetes we don't use replicationcontrollers anymore; they've been replaced with replicasets:

```
kubectl get rs
```

If you're updating this though, you'll want to update the deployment:

```
kubectl edit deployment gogs
```
> Finally, retrieve the Gogs RC configuration (`kubectl get rc gogs -oyaml > gogs.yaml`), update it and apply it (`kubectl apply -f gogs.yaml`) with the following changes:
Not sure the volumes are correct in the YAML above; I had to change the claim name for gogs-data...
```yaml
volumes:
  - name: gogs-data
    persistentVolumeClaim:
      claimName: gogs-repositories
  - name: gogs-repositories
    persistentVolumeClaim:
      claimName: gogs-repositories
```
OK, I've a PR ready for gogs to push once the new fabric8-maven-plugin is released; it will require some changes in gofabric8 though...
@rawlingsj you just need a second PVC gogs-data for the data, similar to the gogs-repositories PVC. I didn't include it for conciseness. Sorry about that.
here's the first PR for gogs: https://github.com/fabric8io/fabric8-devops/pull/541 needs work in gofabric8 though to be able to create the required PVs etc
here's the work needed in gofabric8 to automatically create PVs for any pending PVCs https://github.com/fabric8io/gofabric8/pull/103
The gofabric8 change above defaults to using hostPath. I wonder if we should use a ConfigMap instead to decide which type of PV implementation to use. We can populate this ConfigMap with defaults when deploying, which can easily be overridden by a CLI flag?
Maybe we just use hostPath for mini* and single-node clusters, then default to emptyDir for clusters but allow it to be overridden at gofabric8 deploy / gofabric8 volumes with ebs etc?
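For the ebs case, a hand-written PV sketch could look something like the following. This is an assumption about what gofabric8 might generate, not output from it; the volume ID is a placeholder and the EBS volume would need to exist first:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gogs-data
spec:
  accessModes:
    - ReadWriteOnce   # EBS volumes attach to a single node at a time
  capacity:
    storage: 100Mi
  awsElasticBlockStore:
    # Placeholder EBS volume ID in its availability zone
    volumeID: aws://us-east-1a/vol-0123456789abcdef0
    fsType: ext4
```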
BTW I've updated jenkins now which works great. There's still some issues getting permissions on gogs working nicely with HostPaths mind you; see the discussions here: https://github.com/fabric8io/fabric8-devops/pull/544
OK this issue is finally fixed - yay!!!
Details of how to use persistence here: http://fabric8.io/guide/getStarted/persistence.html
This is huge! 😄
the easiest way to get persistent volumes working with openshift is to use minishift instead of vagrant: http://fabric8.io/guide/getStarted/minishift.html
they just work there.
BTW the accessModes should be accessModes: ReadWriteOnce if you want to create the PVs yourself
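Applying that correction to the hostPath PV example earlier in this thread, a hand-created PV would then look roughly like this (the path is the one used in the report below and may differ on your setup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gogs-repositories
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 100Mi
  hostPath:
    path: /vagrant/fabric8-data/gogs-repositories
```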
On 9 September 2016 at 10:49, totto notifications@github.com wrote:
Hi, thanks for this great feature :-)
I can't get it working on my first try though. Probably you could give me a hint on how to create the persistent volumes on OpenShift?
After spinning up OpenShift / fabric8 via vagrant on a Windows 7 laptop, there is an error when trying to deploy pods like "gogs" or "jenkins", for example:

```
FailedMount Unable to mount volumes for pod "gogs-1-3yb0x_default(c31abee6-7670-11e6-8151-080027b5c2f4)": unsupported volume type
```
I am using the Vagrantfile of the fabric8-installer (vagrant/openshift/Vagrantfile from https://github.com/fabric8io/fabric8-installer.git) at this revision:

```
Revision: 8720a8fbc3734e1d3d47a07439cf2c0bd92f60a5
Author: fusesource-ci fuse-infra@redhat.com
Date: 23.08.2016 16:02:26
Message: Update to gofabric8 0.4.45
```
A fresh `vagrant destroy` / `vagrant up` results in OpenShift and the fabric8 console running. But here is the state of my pods:

```
$ oc get pods
NAME                              READY     STATUS    RESTARTS   AGE
docker-registry-1-wggoq           1/1       Running   1          19h
exposecontroller-1-deploy         0/1       Error     0          19h
fabric8-74ljw                     1/1       Running   2          19h
fabric8-docker-registry-1-oyhi7   1/1       Running   2          19h
fabric8-forge-1-deploy            0/1       Error     0          19h
gogs-1-deploy                     0/1       Error     0          19h
jenkins-1-deploy                  0/1       Error     0          19h
nexus-1-deploy                    0/1       Error     0          19h
router-1-deploy                   0/1       Error     0          19h
```
These are my persistent volume claims:

```
$ oc get pvc
NAME                STATUS    VOLUME              CAPACITY   ACCESSMODES   AGE
gogs-data           Pending   gogs-data           0                        19h
gogs-repositories   Pending   gogs-repositories   0                        19h
jenkins-jobs        Pending   jenkins-jobs        0                        19h
jenkins-workspace   Pending   jenkins-workspace   0                        19h
nexus-storage       Pending   nexus-storage       0                        19h
```
And these are my persistent volumes:

```
$ oc get pv
NAME                CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
fabric8             100G       RWX           Available                       19h
gogs-data           100G       RWX           Available                       19h
gogs-repositories   100G       RWX           Available                       19h
jenkins-jobs        100G       RWX           Available                       19h
jenkins-workspace   100G       RWX           Available                       19h
nexus-storage       100G       RWX           Available                       19h
```
I created the persistent volumes as hostPath volumes, and I am running on a Windows 7 laptop.
The PVs were created like this (with gofabric8 volume):

```
mkdir /vagrant
mkdir /vagrant/fabric8-data
for pvname in gogs-data gogs-repositories jenkins-jobs jenkins-workspace nexus-storage; do
  mkdir /vagrant/fabric8-data/$pvname
  gofabric8 volume -y --name $pvname --host-path /vagrant/fabric8-data/$pvname
done
```
`oc get events` shows an `unsupported volume type` error:

```
4m  4m  2  gogs-1-3yb0x     Pod  Warning  FailedMount  {kubelet 172.28.128.4}  Unable to mount volumes for pod "gogs-1-3yb0x_default(c31abee6-7670-11e6-8151-080027b5c2f4)": unsupported volume type
4m  4m  2  gogs-1-3yb0x     Pod  Warning  FailedSync   {kubelet 172.28.128.4}  Error syncing pod, skipping: unsupported volume type
4m  4m  2  jenkins-1-gjaaw  Pod  Warning  FailedMount  {kubelet 172.28.128.4}  Unable to mount volumes for pod "jenkins-1-gjaaw_default(c54f3f97-7670-11e6-8151-080027b5c2f4)": unsupported volume type
4m  4m  2  jenkins-1-gjaaw  Pod  Warning  FailedSync   {kubelet 172.28.128.4}  Error syncing pod, skipping: unsupported volume type
```
I also tried to create the PVs with oc create (same error):

```
mkdir /vagrant/fabric8-data
for pvname in gogs-data gogs-repositories jenkins-jobs jenkins-workspace nexus-storage; do
  mkdir /vagrant/fabric8-data/$pvname
  cat <<EOF > /tmp/$pvname.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: $pvname
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Mi
  hostPath:
    path: /vagrant/fabric8-data/$pvname
  persistentVolumeReclaimPolicy: Recycle
EOF
  oc create -f /tmp/$pvname.yaml
done
```
Right now when gerrit/jenkins/taiga pods get restarted, the projects stored there disappear. We can try to store some of that data in a persistent volume: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md