Closed pravinrajr9 closed 3 years ago
hello !
to determine if there is a bug in the readiness template, can you please provide the full logs ? we should see the helm upgrade command that was sent, as well as the things spray is waiting for (statefulsets, deployments, jobs, ...)
in addition, during the infinite loop, can you also send me the output of kubectl get -o yaml deploy xxx so I can see the actual values handled by the readiness template.
thanks !
helm spray --namespace mynamespace usecases/host-attestation/ --verbose
[spray] processing chart from local file or directory "usecases/host-attestation/"...
[spray] looking for "#! .Files.Get" clauses into the values file of the umbrella chart...
[spray] looking for "tags" in values provided through "--values/-f", "--set", "--set-string", and "--set-file"...
[spray] deploying solution chart "usecases/host-attestation/" in namespace "mynamespace"
[spray] | subchart | is alias of | targeted | weight || corresponding release | revision | status |
[spray] | -------- | ----------- | -------- | ------ || --------------------- | -------- | ------ |
[spray] | cms | - | true | 0 || cms | None | Not deployed |
[spray] | aas | - | true | 1 || aas | None | Not deployed |
[spray] | aas-manager | - | true | 2 || aas-manager | None | Not deployed |
[spray] | hvs | - | true | 3 || hvs | None | Not deployed |
[spray] processing sub-charts of weight 0
[spray] > upgrading release "cms": deploying first revision (appVersion v4.0.0)...
[spray] o release: "cms" upgraded
[spray] o helm status: deployed
[spray] o warning: ignored part of helm upgrade output
[spray] o warning: ignored part of helm upgrade output
[spray] o warning: ignored part of helm upgrade output
[spray] o warning: ignored part of helm upgrade output
[spray] o warning: ignored part of helm upgrade output
[spray] o warning: ignored part of helm upgrade output
[spray] o warning: ignored part of helm upgrade output
[spray] o release deployments: [cms]
[spray] > waiting for liveness and readiness...
[spray] o waiting for deployments [cms]
[spray] o waiting for deployments [cms]
[spray] o waiting for deployments [cms]
[spray] o waiting for deployments [cms]
[spray] o waiting for deployments [cms]
[spray] o waiting for deployments [cms]
[spray] o waiting for deployments [cms]
[spray] o waiting for deployments [cms]
[root@localhost chart-helm]# kubectl get -o yaml deploy cms -n mynamespace
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: cms
    meta.helm.sh/release-namespace: mynamespace
  creationTimestamp: "2021-08-18T04:58:47Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cms
    app.kubernetes.io/version: v4.0.0
    helm.sh/chart: cms-0.1.0
  name: cms
  namespace: mynamespace
  resourceVersion: "6835136"
  selfLink: /apis/apps/v1/namespaces/mynamespace/deployments/cms
  uid: 4fa25a05-64e6-47e6-ac68-c64aacad3a2e
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: cms
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: cms
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: cms
        image: cms:v4.0.0
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 30
          httpGet:
            path: /cms/v1/version
            port: 8445
            scheme: HTTPS
          initialDelaySeconds: 1
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        name: cms
        ports:
        - containerPort: 8445
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /cms/v1/version
            port: 8445
            scheme: HTTPS
          initialDelaySeconds: 1
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          runAsGroup: 1001
          runAsUser: 1001
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log/cms
          name: cms-logs
        - mountPath: /etc/cms
          name: cms-config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: cms-config
        persistentVolumeClaim:
          claimName: cms-config
      - name: cms-logs
        persistentVolumeClaim:
          claimName: cms-logs
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-08-18T04:58:51Z"
    lastUpdateTime: "2021-08-18T04:58:51Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-08-18T04:58:47Z"
    lastUpdateTime: "2021-08-18T04:58:51Z"
    message: ReplicaSet "cms-56fd64954c" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
kubectl rollout status deployment/cms -n mynamespace
deployment "cms" successfully rolled out
thanks for the updates !
can you run again with the --debug flag added ? i should have been more specific in my initial request, sorry for that... with debug enabled, after each "kubectl template:" line you should see a "kubectl output:" section (and, as usual, if you can capture the kubectl get output in parallel, please do)
in the meantime, i would like to reproduce on my side. what is your umbrella chart composed of (several sub-charts, only deployments or some statefulsets, any other valuable information, ...) ?
thanks !
[root@localhost ~]# cat output-debug
[spray] processing chart from local file or directory "usecases/host-attestation/"...
[spray] starting spray with flags: &{ChartName:usecases/host-attestation/ ChartVersion: Targets:[] Excludes:[] Namespace:imynamespace CreateNamespace:false PrefixReleases: PrefixReleasesWithNamespace:false ResetValues:false ReuseValues:false ValuesOpts:{ValueFiles:[] StringValues:[] Values:[] FileValues:[]} Force:false Timeout:300 DryRun:false Verbose:true Debug:true deployments:[] statefulSets:[] jobs:[]}
[spray] looking for "#! .Files.Get" clauses into the values file of the umbrella chart...
[spray] looking for "tags" in values provided through "--values/-f", "--set", "--set-string", and "--set-file"...
[spray] deploying solution chart "usecases/host-attestation/" in namespace "mynamespace"
[spray] running helm command : [list --namespace mynamespace -o json]
[spray] helm command returned:
[]
[spray] | subchart | is alias of | targeted | weight || corresponding release | revision | status |
[spray] | -------- | ----------- | -------- | ------ || --------------------- | -------- | ------ |
[spray] | cms | - | true | 0 || cms | None | Not deployed |
[spray] | aas | - | true | 1 || aas | None | Not deployed |
[spray] | aas-manager | - | true | 2 || aas-manager | None | Not deployed |
[spray] | hvs | - | true | 3 || hvs | None | Not deployed |
[spray] processing sub-charts of weight 0
[spray] > upgrading release "cms": deploying first revision (appVersion v4.0.0)...
[spray] o running helm command for "cms": [upgrade --install cms usecases/host-attestation/ --namespace mynamespace --timeout 300s -o json --set cms.enabled=true,aas.enabled=false,aas-manager.enabled=false,hvs.enabled=false, -f /tmp/spray-275158340/updatedDefaultValues-435816403.yaml]
[spray] o helm command for "cms" returned:
[spray] o release: "cms" upgraded
[spray] o helm status: deployed
[spray] o warning: ignored part of helm upgrade output
[spray] o warning: ignored '
[spray] o release deployments: [cms]
[spray] > waiting for liveness and readiness...
[spray] o waiting for deployments [cms]
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{printf "{name: %s, ready: %d, current: %d, updated: %d}" .metadata.name $ready $current $updated}}{{end}}{{end}}
[spray] o kubectl output:
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{if or (lt $ready .spec.replicas) (lt $current .spec.replicas) (lt $updated .spec.replicas)}}{{printf "%s " .metadata.name}}{{end}}{{end}}{{end}}
[spray] o waiting for deployments [cms]
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{printf "{name: %s, ready: %d, current: %d, updated: %d}" .metadata.name $ready $current $updated}}{{end}}{{end}}
[spray] o kubectl output:
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{if or (lt $ready .spec.replicas) (lt $current .spec.replicas) (lt $updated .spec.replicas)}}{{printf "%s " .metadata.name}}{{end}}{{end}}{{end}}
[spray] o waiting for deployments [cms]
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{printf "{name: %s, ready: %d, current: %d, updated: %d}" .metadata.name $ready $current $updated}}{{end}}{{end}}
[spray] o kubectl output:
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{if or (lt $ready .spec.replicas) (lt $current .spec.replicas) (lt $updated .spec.replicas)}}{{printf "%s " .metadata.name}}{{end}}{{end}}{{end}}
[spray] o waiting for deployments [cms]
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{printf "{name: %s, ready: %d, current: %d, updated: %d}" .metadata.name $ready $current $updated}}{{end}}{{end}}
[spray] o kubectl output:
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{if or (lt $ready .spec.replicas) (lt $current .spec.replicas) (lt $updated .spec.replicas)}}{{printf "%s " .metadata.name}}{{end}}{{end}}{{end}}
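For reference, the go-template in those debug lines encodes a simple predicate: a deployment is still "pending" while any of its ready/current/updated replica counts is below spec.replicas. Here is a hedged Python re-expression of that same logic (an illustration derived from the template text above, not helm-spray's actual Go implementation):

```python
# Illustrative sketch of the readiness predicate from the go-template above.
# Field names mirror the Deployment object from the `kubectl get -o yaml` dump;
# this is NOT helm-spray's code, just the same logic restated in Python.

def is_ready(deploy: dict) -> bool:
    """A deployment is ready once readyReplicas, currentReplicas (falling back
    to spec.replicas when absent, as the template does) and updatedReplicas
    have all reached spec.replicas."""
    spec_replicas = deploy["spec"]["replicas"]
    status = deploy.get("status", {})
    ready = status.get("readyReplicas", 0)
    current = status.get("currentReplicas", spec_replicas)
    updated = status.get("updatedReplicas", 0)
    return ready >= spec_replicas and current >= spec_replicas and updated >= spec_replicas

# The cms deployment from the earlier dump: 1/1 ready, 1 updated.
cms = {"spec": {"replicas": 1},
       "status": {"readyReplicas": 1, "replicas": 1, "updatedReplicas": 1}}
print(is_ready(cms))  # True
```

By this logic the cms deployment shown earlier should be reported as ready; the empty "kubectl output:" lines therefore suggest kubectl never returned anything at all, rather than the template evaluating to "not ready".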
The umbrella chart consists of deployments and jobs:
apiVersion: v2
name: Host-Attestation
type: application
kubeVersion: ">= 1.17.17 <= 1.21.0"
version: 0.1.0
appVersion: "v4.0.0"
dependencies:
- name: cms
  repository: file://../../services/cms/
  condition: cms.enabled
  version: 0.1.0
- name: aas
  repository: file://../../services/aas/
  version: 0.1.0
  condition: aas.enabled
- name: aas-manager
  repository: file://../../jobs/aas-manager/
  version: 0.1.0
  condition: aas-manager.enabled
- name: hvs
  repository: file://../../services/hvs/
  version: 0.1.0
  condition: hvs.enabled
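As the debug log earlier shows, helm-spray deploys one weight at a time by passing --set flags that enable only the targeted sub-charts and disable the rest (cms.enabled=true,aas.enabled=false,...). A hedged sketch of how such a flag string can be built from the dependency conditions (illustrative only; the sub-chart names come from the Chart.yaml above, and this is not helm-spray's actual code):

```python
# Illustrative sketch: build the `--set` argument that enables only the
# sub-charts of the current weight, matching the flags seen in the debug log.

def build_set_flags(subcharts: list, current: list) -> str:
    parts = [f"{name}.enabled={'true' if name in current else 'false'}"
             for name in subcharts]
    return ",".join(parts)

charts = ["cms", "aas", "aas-manager", "hvs"]
print(build_set_flags(charts, ["cms"]))
# cms.enabled=true,aas.enabled=false,aas-manager.enabled=false,hvs.enabled=false
```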
Hello,
From what I see, it seems kubectl is failing to execute the readiness template. I just uploaded a beta version with more logs and better error management for such failures. Can you please run your use-case again with this version and provide the logs here ?
The release is here
Can you please provide the installation steps for this release?
Even though the version is a pre-release (marked as a beta), it can be installed just like the others:
$ helm plugin install https://github.com/ThalesGroup/helm-spray
Downloading and installing spray v4.0.10-beta.1 for Linux...
Installed plugin: spray
[root@localhost helm]# helm spray --namespace mynamespace usecases/host-attestation/ --debug --verbose > output-debug
history.go:56: [debug] getting history for release cms
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /root/helm-demo/mynamespace-helm/usecases/host-attestation
client.go:203: [debug] checking 7 resources for changes
client.go:224: [debug] Created a new ConfigMap called "cms" in isecl
client.go:466: [debug] Looks like there are no changes for PersistentVolume "cms-config"
client.go:466: [debug] Looks like there are no changes for PersistentVolume "cms-logs"
client.go:466: [debug] Looks like there are no changes for PersistentVolumeClaim "cms-config"
client.go:466: [debug] Looks like there are no changes for PersistentVolumeClaim "cms-logs"
client.go:224: [debug] Created a new Service called "cms" in isecl
client.go:224: [debug] Created a new Deployment called "cms" in isecl
Error: cannot check readiness of [cms]: exec: "kubectl": executable file not found in $PATH
Error: plugin "spray" exited with error
helm.go:81: [debug] plugin "spray" exited with error
FYI I am using microk8s distribution.
Good to know :) helm-spray uses kubectl to determine readiness.
Maybe you should take a look at the microk8s documentation to configure kubectl (either create a kubectl alias/symlink to the embedded one, e.g. `sudo snap alias microk8s.kubectl kubectl`, or install the "real" kubectl and configure it to talk to microk8s).
Can you confirm you have no error once kubectl is in the PATH ? If yes, I will close this issue. Thanks !
Yes we can close this, thanks for the support.
Does helm spray support daemonsets? It would be very helpful if that were supported in upcoming releases.
usecases/attestation is an umbrella chart, with services defined in dependencies.
helm spray --namespace mynamespace usecases/attestation/ --verbose --debug
[spray] o kubectl output:
[spray] o kubectl template: {{range .items}}{{if eq "cms" .metadata.name}}{{$ready := 0}}{{if .status.readyReplicas}}{{$ready = .status.readyReplicas}}{{end}}{{$current := .spec.replicas}}{{if .status.currentReplicas}}{{$current = .status.currentReplicas}}{{end}}{{$updated := 0}}{{if .status.updatedReplicas}}{{$updated = .status.updatedReplicas}}{{end}}{{if or (lt $ready .spec.replicas) (lt $current .spec.replicas) (lt $updated .spec.replicas)}}{{printf "%s " .metadata.name}}{{end}}{{end}}{{end}}
On the other side, the subchart is deployed successfully:
[root@localhost helm-demo]# kubectl get pods -n mynamespace
NAME                   READY   STATUS    RESTARTS   AGE
cms-56fd64954c-sq4v2   1/1     Running   0          3m28s
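Regarding daemonsets: a daemonset exposes its rollout state through different status fields (desiredNumberScheduled, numberReady, updatedNumberScheduled). If support were added, the readiness check would presumably look something like the following sketch, by analogy with the deployment template earlier in this thread (this is speculation about the feature request, not existing helm-spray behavior):

```python
# Hypothetical sketch of a daemonset readiness check. The field names are real
# DaemonSet status fields, but the function is an illustration of what such a
# check could look like, not helm-spray code.

def daemonset_ready(ds: dict) -> bool:
    """A daemonset would be ready once every scheduled node runs a ready,
    up-to-date pod."""
    status = ds.get("status", {})
    desired = status.get("desiredNumberScheduled", 0)
    ready = status.get("numberReady", 0)
    updated = status.get("updatedNumberScheduled", 0)
    return desired > 0 and ready >= desired and updated >= desired

print(daemonset_ready({"status": {"desiredNumberScheduled": 3,
                                  "numberReady": 3,
                                  "updatedNumberScheduled": 3}}))  # True
```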