Open Descartes1981 opened 4 years ago
This works for me. It includes the "--show-all" fix I suggested in a pull request that hasn't been merged yet.
diff --git a/deploy/gk-deploy b/deploy/gk-deploy
index e3735e1..4309b92 100755
--- a/deploy/gk-deploy
+++ b/deploy/gk-deploy
@@ -921,7 +921,7 @@ while [[ "x${heketi_service}" == "x" ]] || [[ "${heketi_service}" == "<none>" ]]
heketi_service=$(${CLI} describe svc/heketi | grep "Endpoints:" | awk '{print $2}')
done
-heketi_pod=$(${CLI} get pod --no-headers --show-all --selector="heketi" | awk '{print $1}')
+heketi_pod=$(${CLI} get pod --no-headers --selector="heketi" | awk '{print $1}')
if [[ "${CLI}" == *oc\ * ]]; then
heketi_service=$(${CLI} describe routes/heketi | grep "Requested Host:" | awk '{print $3}')
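For context, `--show-all` was deprecated in kubectl 1.10 and removed in 1.11; newer kubectl lists completed pods by default, so dropping the flag is safe. The `awk '{print $1}'` in the hunk above just takes the pod-name column; a quick sketch on a sample line (the pod name is made up for illustration):

```shell
# One line of `kubectl get pod --no-headers` output (illustrative name):
sample='deploy-heketi-5f6c8d9-abcde   1/1   Running   0   2m'
# Same extraction as in gk-deploy: first whitespace-separated field.
heketi_pod=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "$heketi_pod"
# prints: deploy-heketi-5f6c8d9-abcde
```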
diff --git a/deploy/kube-templates/deploy-heketi-deployment.yaml b/deploy/kube-templates/deploy-heketi-deployment.yaml
index 94f2cf7..e8a7938 100644
--- a/deploy/kube-templates/deploy-heketi-deployment.yaml
+++ b/deploy/kube-templates/deploy-heketi-deployment.yaml
@@ -17,7 +17,7 @@ spec:
targetPort: 8080
---
kind: Deployment
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
metadata:
name: deploy-heketi
labels:
@@ -26,6 +26,10 @@ metadata:
annotations:
description: Defines how to deploy Heketi
spec:
+ selector:
+ matchLabels:
+ glusterfs: heketi-pod
+ deploy-heketi: pod
replicas: 1
template:
metadata:
diff --git a/deploy/kube-templates/gluster-s3-template.yaml b/deploy/kube-templates/gluster-s3-template.yaml
index 60045bc..17bb941 100644
--- a/deploy/kube-templates/gluster-s3-template.yaml
+++ b/deploy/kube-templates/gluster-s3-template.yaml
@@ -21,7 +21,7 @@ items:
status:
loadBalancer: {}
- kind: Deployment
- apiVersion: extensions/v1beta1
+ apiVersion: apps/v1
metadata:
name: gluster-s3-deployment
labels:
@@ -30,6 +30,9 @@ items:
annotations:
description: Defines how to deploy gluster s3 object storage
spec:
+ selector:
+ matchLabels:
+ glusterfs: s3-pod
replicas: 1
template:
metadata:
diff --git a/deploy/kube-templates/glusterfs-daemonset.yaml b/deploy/kube-templates/glusterfs-daemonset.yaml
index c37a5f4..5f37d67 100644
--- a/deploy/kube-templates/glusterfs-daemonset.yaml
+++ b/deploy/kube-templates/glusterfs-daemonset.yaml
@@ -1,6 +1,6 @@
---
kind: DaemonSet
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
metadata:
name: glusterfs
labels:
@@ -9,6 +9,10 @@ metadata:
description: GlusterFS DaemonSet
tags: glusterfs
spec:
+ selector:
+ matchLabels:
+ glusterfs: pod
+ glusterfs-node: pod
template:
metadata:
name: glusterfs
diff --git a/deploy/kube-templates/heketi-deployment.yaml b/deploy/kube-templates/heketi-deployment.yaml
index ecc6cef..9d25214 100644
--- a/deploy/kube-templates/heketi-deployment.yaml
+++ b/deploy/kube-templates/heketi-deployment.yaml
@@ -17,7 +17,7 @@ spec:
targetPort: 8080
---
kind: Deployment
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
metadata:
name: heketi
labels:
@@ -26,6 +26,9 @@ metadata:
annotations:
description: Defines how to deploy Heketi
spec:
+ selector:
+ matchLabels:
+ glusterfs: heketi-pod
replicas: 1
template:
metadata:
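The common thread in the template hunks above: in apps/v1, spec.selector is required and must match spec.template.metadata.labels (and it is immutable after creation), which is why every bumped template also gains a matchLabels block. A minimal sketch of the required shape, with illustrative names and image:

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: example                 # illustrative name
spec:
  selector:
    matchLabels:
      glusterfs: heketi-pod     # must match the pod template labels below
  replicas: 1
  template:
    metadata:
      labels:
        glusterfs: heketi-pod
    spec:
      containers:
      - name: heketi
        image: heketi/heketi:latest   # illustrative image
```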
@renich, thanks for posting the solution. After I applied the diff, it looks like it is still failing:
Creating node node1 ... Unable to create node: New Node doesn't have glusterd running
Someone else also reported this; I wonder if you have seen it and how you solved it.
Thank you
@r1cebank you probably forgot to install glusterfs-client. Make sure all the nodes have it installed. Also, you need the dm_thin_pool kernel module. https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md
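A quick way to check both prerequisites on each node; this is a sketch, not part of gk-deploy, and the command names are the usual ones shipped by the glusterfs client packages (adjust for your distro):

```shell
# Sketch of a per-node prerequisite check.
missing=0
for cmd in glusterfs mount.glusterfs; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING - install the glusterfs client packages"
    missing=$((missing + 1))
  fi
done
# The dm_thin_pool kernel module must be loaded:
if grep -qw dm_thin_pool /proc/modules 2>/dev/null; then
  echo "dm_thin_pool: loaded"
else
  echo "dm_thin_pool: not loaded - run: modprobe dm_thin_pool"
  missing=$((missing + 1))
fi
echo "$missing prerequisite(s) missing on this node"
```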
@renich Thank you, but it looks like something else is happening. I do have glusterfs-client installed on the nodes. But inside the heketi container it is giving the following log:
@r1cebank it seems the Gluster nodes aren't being created for some reason; no idea why. It might have to do with your k8s setup. Try creating other pods and see if they run.
I also don't think it's a good idea to put the GlusterFS pods in the default namespace; I'd use some other one. Personally, I use kube-system.
Yeah, I tried creating other pods and they work. I am on ARM; I wonder if that's why. I already made sure I used all ARM images. I do install mine in the kube-system namespace.
Looking at the logs of the "heketi-deploy" container created by the gk-deploy script, I noticed that its tasks take a while to finish, so the script fails prematurely. Waiting some time and running the script again gets me to the next step correctly, but at some point re-running the script without "--abort" leads to conflicts.
So, to get it working in a single run, I added "sleep 900" commands to the script as below to wait 15 minutes at each step. Alternatively, you can press Ctrl+Z to pause the script, wait for the "heketi-deploy" logs to show the tasks finished, then run "fg" to resume the script, and so on for the next tasks...
...
else
output "heketi topology loaded."
fi
echo sleep...
sleep 900
if [[ $("${heketi_cli}" volume list 2>&1) != *heketidbstorage* ]]; then
durability=''
if [ ${SINGLE_NODE} -eq 1 ] ; then
durability='--durability=none'
eval_output "${heketi_cli} setup-openshift-heketi-storage --help ${durability} >/dev/null 2>&1"
if [[ ${?} != 0 ]]; then
output "Not able to setup openshift heketi storage with ${durability}"
output "This indicates that the heketi running in the pod does not support single-node deployments."
exit 1
fi
fi
echo "${heketi_cli}" volume list
echo eval_output "${heketi_cli} setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json ${durability} 2>&1"
eval_output "${heketi_cli} setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json ${durability} 2>&1"
rc=${?}
echo sleep...
sleep 900
if [[ ${rc} != 0 ]]; then
output "Failed on setup openshift heketi storage"
output "This may indicate that the storage must be wiped and the GlusterFS nodes must be reset."
exit 1
fi
else
output "Volume heketidbstorage already exists; skipping setup."
fi
eval_output "${CLI} exec -i ${heketi_pod} -- cat /tmp/heketi-storage.json | ${CLI} create -f - 2>&1"
rc=${?}
echo sleep...
sleep 900
if [[ ${rc} != 0 ]]; then
output "Failed on creating heketi storage resources."
exit 1
fi
check pods "job-name=heketi-storage-copy-job" "Completed"
if [[ ${?} != 0 ]]; then
output "Error waiting for job 'heketi-storage-copy-job' to complete."
exit 1
fi
...
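One caveat with this approach: putting echo/sleep between a command and its `if [[ ${?} != 0 ]]` check makes the check test sleep's exit status instead of the command's, so failures get silently swallowed. Capturing the status right away avoids that; a minimal sketch (`false` stands in for the real eval_output call):

```shell
# `false` stands in for the real command whose status we care about.
false
rc=$?          # capture immediately, before echo/sleep clobber $?
echo sleep...
sleep 1        # 900 in the real workaround
if [[ ${rc} != 0 ]]; then
  echo "command failed (rc=${rc})"
fi
```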
Kubernetes 1.16 removed DaemonSet (along with Deployment and ReplicaSet) from the extensions/v1beta1, apps/v1beta1, and apps/v1beta2 API groups.
Ref: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
The current deployment templates still use the old API versions, so the script fails. Please update the script and templates to deploy on v1.16.
Thank you.
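If it helps, the apiVersion bumps from the diff earlier in this thread can be applied mechanically with sed. This is a sketch on a throwaway file (the file name and contents are illustrative); review the result afterwards, since apps/v1 also requires an explicit spec.selector that sed cannot add for you:

```shell
# Demonstrate the substitution on a copy (illustrative file, not the repo's):
tmpdir=$(mktemp -d)
cat > "$tmpdir/ds.yaml" <<'EOF'
kind: DaemonSet
apiVersion: extensions/v1beta1
EOF
# Bump the API group; run the same over deploy/kube-templates/*.yaml.
sed -i.bak 's|extensions/v1beta1|apps/v1|' "$tmpdir/ds.yaml"
grep apiVersion "$tmpdir/ds.yaml"
# prints: apiVersion: apps/v1
```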