Closed smartkuk closed 1 year ago
Additionally, if I use the helm CLI to install the chart from the same repository, the status looks correct. Some information has been redacted with placeholder values.
The repository registered under the name harbor is the same repository used above.
$ helm repo list
NAME URL
prometheus-community https://prometheus-community.github.io/helm-charts
stable https://charts.helm.sh/stable
ingress-nginx https://kubernetes.github.io/ingress-nginx
harbor https://xxxxxxxxxx:0000/chartrepo/bxcp-system-common
argo https://argoproj.github.io/argo-helm
member-harbor https://yyyyyyyyyy:0000/chartrepo/bxcp-system-common
$ helm install taiga-1026 harbor/tomcat
NAME: taiga-1026
LAST DEPLOYED: Fri Jun 16 09:50:14 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: tomcat
CHART VERSION: 10.9.2
APP VERSION: 10.1.9
** Please be patient while the chart is being deployed **
1. Get the Tomcat URL by running:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w taiga-1026-tomcat'
export SERVICE_IP=$(kubectl get svc --namespace default taiga-1026-tomcat --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo "Tomcat URL: http://$SERVICE_IP/"
echo "Tomcat Management URL: http://$SERVICE_IP/manager"
2. Login with the following credentials
echo Username: user
echo Password: $(kubectl get secret --namespace default taiga-1026-tomcat -o jsonpath="{.data.tomcat-password}" | base64 -d)
$ helm history taiga-1026
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Fri Jun 16 09:50:14 2023 deployed tomcat-10.9.2 10.1.9 Install complete
Can you please try increasing the timeout configuration on the HelmRelease? 5 minutes matches the default timeout.
Are you referring to the timeout in the spec? I've already done that, but just in case I tried again, and it still didn't work. Below is the spec.
# timeout 10m
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: tomcat-testing-1
  namespace: default
spec:
  timeout: 10m
  chart:
    spec:
      chart: tomcat
      interval: 10s
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: bxcp-system-common
        namespace: bxcp-system
      version: 10.9.2
  install:
    createNamespace: true
    remediation:
      retries: 10
  interval: 10s
  targetNamespace: default
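For anyone debugging a similar stall, the reconciliation status and the controller's view of the release can be inspected with the standard commands below (a sketch; the resource names match the spec above and the flux-system namespace assumes a default Flux installation):

```shell
# Show the HelmRelease conditions and recent events for the release
kubectl describe helmrelease tomcat-testing-1 -n default

# Follow the helm-controller logs, filtering for this release
kubectl logs -n flux-system deploy/helm-controller -f | grep tomcat-testing-1
```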
Below is what I monitored with the watch command.
Every 2.0s: kubectl get helmrepo -A;echo;echo;kubectl get hr -A;echo;echo;helm list --all --all-namespaces;echo;echo;kubectl get pods -A; NB-21042711: Mon Jun 19 08:27:31 2023
NAMESPACE NAME URL AGE READY STATUS
bxcp-system bxcp-system-common https://xxxxxxxxxx:0000/chartrepo/bxcp-system-common 13m True stored artifact: revision
'sha256:ea7575a7761511264744c90f5e0f63c024ca822c4714a73ae5bb003adc1d09c0'
NAMESPACE NAME AGE READY STATUS
default tomcat-testing-1 9m26s Unknown Reconciliation in progress
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
default-tomcat-testing-1 default 1 2023-06-18 23:18:05.9933536 +0000 UTC pending-install tomcat-10.9.2 10.1.9
NAMESPACE NAME READY STATUS RESTARTS AGE
default default-tomcat-testing-1-5cf8d477d6-ktvjt 1/1 Running 0 9m26s
flux-system helm-controller-b4ffbcbf-tfv96 1/1 Running 0 14m
flux-system source-controller-659b55846c-7jkzz 1/1 Running 0 14m
kube-system coredns-57575c5f89-hjc2r 1/1 Running 0 14m
kube-system coredns-57575c5f89-r95vc 1/1 Running 0 14m
kube-system etcd-taiga-1026-control-plane 1/1 Running 0 14m
kube-system kindnet-74z2x 1/1 Running 0 14m
kube-system kindnet-f4m99 1/1 Running 0 14m
kube-system kube-apiserver-taiga-1026-control-plane 1/1 Running 0 14m
kube-system kube-controller-manager-taiga-1026-control-plane 1/1 Running 0 14m
kube-system kube-proxy-fwktv 1/1 Running 0 14m
kube-system kube-proxy-r5bbz 1/1 Running 0 14m
kube-system kube-scheduler-taiga-1026-control-plane 1/1 Running 0 14m
local-path-storage local-path-provisioner-c49b7b56f-vdw7f 1/1 Running 0 14m
At the 10-minute mark, the Job resource is automatically created and the status changes to an error.
Every 2.0s: kubectl get helmrepo -A;echo;echo;kubectl get hr -A;echo;echo;helm list --all --all-namespaces;echo;echo;kubectl get pods -A; NB-21042711: Mon Jun 19 08:28:35 2023
NAMESPACE NAME URL AGE READY STATUS
bxcp-system bxcp-system-common https://host-host-infra-lb-2828a07e11a75d4e.elb.ap-northeast-2.amazonaws.com:5443/chartrepo/bxcp-system-common 15m True stored artifact: revision
'sha256:ea7575a7761511264744c90f5e0f63c024ca822c4714a73ae5bb003adc1d09c0'
NAMESPACE NAME AGE READY STATUS
default tomcat-testing-1 10m False Helm install failed: context deadline exceeded
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
default-tomcat-testing-1 default 1 2023-06-18 23:28:08.4813081 +0000 UTC pending-install tomcat-10.9.2 10.1.9
NAMESPACE NAME READY STATUS RESTARTS AGE
default default-tomcat-testing-1-5cf8d477d6-tbh22 0/1 Running 0 27s
flux-system helm-controller-b4ffbcbf-tfv96 1/1 Running 0 15m
flux-system source-controller-659b55846c-7jkzz 1/1 Running 0 15m
kube-system coredns-57575c5f89-hjc2r 1/1 Running 0 15m
kube-system coredns-57575c5f89-r95vc 1/1 Running 0 15m
kube-system etcd-taiga-1026-control-plane 1/1 Running 0 15m
kube-system kindnet-74z2x 1/1 Running 0 15m
kube-system kindnet-f4m99 1/1 Running 0 15m
kube-system kube-apiserver-taiga-1026-control-plane 1/1 Running 0 15m
kube-system kube-controller-manager-taiga-1026-control-plane 1/1 Running 0 15m
kube-system kube-proxy-fwktv 1/1 Running 0 15m
kube-system kube-proxy-r5bbz 1/1 Running 0 15m
kube-system kube-scheduler-taiga-1026-control-plane 1/1 Running 0 15m
local-path-storage local-path-provisioner-c49b7b56f-vdw7f 1/1 Running 0 15m
$ helm status default-tomcat-testing-1
NAME: default-tomcat-testing-1
LAST DEPLOYED: Sun Jun 18 23:28:08 2023
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: tomcat
CHART VERSION: 10.9.2
APP VERSION: 10.1.9
** Please be patient while the chart is being deployed **
1. Get the Tomcat URL by running:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace default -w default-tomcat-testing-1'
export SERVICE_IP=$(kubectl get svc --namespace default default-tomcat-testing-1 --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo "Tomcat URL: http://$SERVICE_IP/"
echo "Tomcat Management URL: http://$SERVICE_IP/manager"
2. Login with the following credentials
echo Username: user
echo Password: $(kubectl get secret --namespace default default-tomcat-testing-1 -o jsonpath="{.data.tomcat-password}" | base64 -d)
As I mentioned before, I want you to know that I've already tried everything I could find by Googling.
Would you be able to share the chart with me? It can be via email: hidde@weave.works.
OK, I sent the email. The subject is "[tomcat chart] hi this is my chart".
I tested the chart you provided, and I suspect it times out because the LoadBalancer does not get an external IP assigned.
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-testing-1-tomcat LoadBalancer 10.96.210.40 <pending> 80:31829/TCP 13m
This is confirmed by changing the .spec of the HelmRelease to include:
spec:
  install:
    disableWait: true
  upgrade:
    disableWait: true
which yields a successful install.
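An alternative to disabling the wait entirely is to avoid the pending LoadBalancer in the first place by overriding the service type in the HelmRelease values. This is a sketch that assumes the chart follows the common Bitnami convention of exposing a `service.type` value; it is not verified against this exact chart:

```yaml
spec:
  values:
    service:
      # ClusterIP avoids waiting for an external IP that a plain
      # kind cluster cannot assign to a LoadBalancer service.
      type: ClusterIP
```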
Nice to see the answer. I also worked around the problem using the settings you showed. However, what I'm curious about is why the result of installing with the helm binary's install subcommand differs from the result of installing with a HelmRelease resource.
If both are installed with the same values.yaml, they should end up in the same state, but they don't. Is this because the HelmRelease resource is still under development?
Helm would have given you the same behavior if you had run helm install with the --wait flag, which is the default the controller runs with, as we want to be sure the resources are successfully deployed by default.
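To reproduce the controller's default behavior with the Helm CLI (a sketch; the release name and chart reference are taken from earlier in the thread, and 5m is Helm's default timeout):

```shell
# Mirrors what the helm-controller does by default: block until all
# resources are ready, failing the release if the timeout is exceeded.
helm install taiga-1026 harbor/tomcat --wait --timeout 5m
```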
I didn't explore the options in detail and misunderstood; sorry. Thank you so much for your sincere answer.
(Used Google Translate.)
Hi, I am currently using Flux v1 and want to migrate to v2, so I am researching it. While trying to install and deploy to a kind cluster in my local WSL Ubuntu environment, strange behavior occurred, so I am asking here.
The tomcat Pod created through the HelmRelease resource is healthy, but why does the HelmRelease resource itself go into an error state after waiting about 5 minutes? I googled a lot and followed all the steps related to "retries exhausted", but nothing worked. Shouldn't the steps published in your Get Started and Installation guides at least work, to build trust in the component?
Below is kind cluster configuration.
Below is the client version I am using.
I registered the Helm chart and wanted to distribute the chart, so I set it up as follows.
And with the HelmRepository configured, the nginx-named chart installed normally.
After waiting for about 5 minutes, the nginx-testing-1 HelmRelease resource was in the following state. The problem is that the Pod status is normal, but I don't know why the HelmRelease resource is abnormal.
The following is the result of checking the status with the kubectl command. Some information has been redacted with placeholder values.
Here is the helm controller log:
Below is the tomcat pod log. The name includes nginx, which may be confusing, but it is tomcat.