jocelynthode opened this issue 5 years ago
@humblec Hey, could you give us an answer on this topic, please?
@jocelynthode this was working smoothly and at some stage it broke. Let me take a deeper look at this. I will get back to you.
@humblec Any news?
@humblec Any update? I'm running into the lvm2 issue (while installing OKD v3.11) that was fixed in February.
I'm interested in this too. GlusterFS is becoming unusable in k8s. :S
While this repo may take some more time (depending on maintainers' cycles) to sort things out, if you are looking to run Gluster in a container-native way on k8s, try the kadalu.io project too. It ships the latest glusterfs-7.3 in its storage servers.
Hey,
I recently saw that a workaround was added for the lvm2 bug that caused the issues reported in https://github.com/gluster/gluster-containers/issues/128.
I see that neither the 4u1 image nor the latest image has been rebuilt to pick up these fixes.
First, @humblec, could you rebuild these so that I can test and confirm that https://github.com/gluster/gluster-containers/issues/128 is indeed fixed by the downgrade, please?
Second, I think we should set up a GitHub integration on Docker Hub that triggers an image build every time something is pushed to a branch, as explained here: https://docs.docker.com/docker-hub/builds/
We could have automated builds for every branch you support. For example, every time there is a commit on the gluster-4.1 branch, it would trigger a build for the 4u1_centos7 tag; every time there is a commit on master, it would trigger a build for the latest tag; and so on.
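To illustrate, the branch-to-tag mapping above could be expressed as Docker Hub automated-build rules roughly like this (field names taken from the Docker Hub autobuild UI; the Dockerfile location is an assumption based on this repo's layout):

```
Source Type | Source      | Docker Tag   | Dockerfile location
Branch      | gluster-4.1 | 4u1_centos7  | /CentOS/Dockerfile
Branch      | master      | latest       | /CentOS/Dockerfile
```

Docker Hub would then rebuild and push the matching tag automatically on every commit to the corresponding branch.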
This would remove the need to open an issue each time we need you to rebuild the image.
What do you think?