Closed: chipzoller closed this pull request 1 year ago.
@chipzoller Timely PR (the addon was breaking with the new 1.27 version); the install works fine in a bare metal environment. The only thing pending is a functional test job, which is mandatory for all ISV addons. We previously added Kubecost for experimentation purposes, but now that we have many partners submitting addons, could you please submit a functional test job that validates Kubecost so this check passes? Here are the functional test job requirements.
Yes, will try and do that soon.
Thank you so much.
@elamaran11, check if what I added will suffice.
@chipzoller Thank you so much for submitting a test job. Please check our functional test job requirements here. Curl or health checks are technical in nature and do not qualify as a functional test. We need a test job, such as a CronJob, which validates the functionality of the ISV product.
That's exactly what I've added. The request to our API indicates not only that a response is received but that the application is working. Note that this is not the same test performed by Pod probes.
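For reference, a functional CronJob along these lines could query Kubecost's Allocation API rather than a bare liveness endpoint. This is only a sketch, not the job from this PR: the Service name kubecost-kubecost-cost-analyzer and port 9090 are taken from the resource listing later in this thread, the curl image tag is an assumption, and the check on the response body should be verified against the Allocation API response shape for your Kubecost version.

```yaml
# Hypothetical functional test sketch; adjust names to your Helm release.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kubecost-healthtest
  namespace: kubecost
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: functional-test
            image: curlimages/curl:8.4.0   # assumed image tag
            command:
            - sh
            - -c
            - |
              echo "Querying the Kubecost Allocation API."
              # Succeeds only if the API answers and returns allocation data;
              # the grep on "data" is an assumption about the JSON response.
              if curl -fsS "http://kubecost-kubecost-cost-analyzer.kubecost.svc.cluster.local:9090/model/allocation?window=1d" | grep -q '"data"'; then
                echo "Success"
              else
                echo "Failure"
                exit 1
              fi
```

Driving the check through /model/allocation exercises the cost-model pipeline end to end, which is closer to a functional test than probing a health endpoint.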
Makes sense. Can you reword this
echo Checking Kubecost health.;
to something more meaningful from a functional standpoint?
Also, could you please share a valid token in our private channel? We are currently using a dummy license.
@chipzoller The job is failing in the EKS Local Cluster environment. Please dump the complete error message from your failure.
kubectl logs kubecost-healthtest-001-7m292 -n kubecost
Checking Kubecost health.
Failure
Still facing an error:
kubectl logs kubecost-healthtest-00x-2k6q7 -n kubecost
Getting current Kubecost state.
Failure
Makes sense. Can you reword this
echo Checking Kubecost health.;
to something more meaningful from a functional standpoint?
Done.
Also, could you please share a valid token in our private channel? We are currently using a dummy license.
Not sure what you're looking for here.
The job is failing in the EKS Local Cluster environment. Please dump the complete error message from your failure.
What is the name of the cost-analyzer Deployment?
Please see latest edits.
@chipzoller Here is the info on the Kubecost resources. I think the cost-analyzer Deployment name is different and you need to adjust your job. Ignore the comment about the token if we don't need a special secret to run Kubecost.
kubectl get all -n kubecost
NAME READY STATUS RESTARTS AGE
pod/kubecost-healthtest-001-5cvdw 0/1 Error 0 84m
pod/kubecost-healthtest-001-7m292 0/1 Error 0 84m
pod/kubecost-healthtest-00x-2k6q7 0/1 Error 0 70m
pod/kubecost-healthtest-00x-6lfjm 0/1 Error 0 70m
pod/kubecost-healthtest-28256810-ghrpp 0/1 Error 0 7m15s
pod/kubecost-healthtest-28256810-pwwx6 0/1 Error 0 7m26s
pod/kubecost-kubecost-cost-analyzer-8557d98747-wwbgt 2/2 Running 0 89m
pod/kubecost-kubecost-grafana-5966c87795-vr9p5 2/2 Running 0 89m
pod/kubecost-kubecost-kube-state-metrics-6fc4646478-2gdx7 1/1 Running 0 89m
pod/kubecost-kubecost-prometheus-node-exporter-rv9bn 1/1 Running 0 89m
pod/kubecost-kubecost-prometheus-node-exporter-tz7v4 1/1 Running 0 89m
pod/kubecost-kubecost-prometheus-node-exporter-xjltf 1/1 Running 0 89m
pod/kubecost-kubecost-prometheus-server-86b676b7bc-hkt5h 2/2 Running 0 89m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubecost-kubecost-cost-analyzer ClusterIP 172.20.31.180 <none> 9003/TCP,9090/TCP 89m
service/kubecost-kubecost-grafana ClusterIP 172.20.165.110 <none> 80/TCP 89m
service/kubecost-kubecost-kube-state-metrics ClusterIP 172.20.43.248 <none> 8080/TCP 89m
service/kubecost-kubecost-prometheus-node-exporter ClusterIP None <none> 9100/TCP 89m
service/kubecost-kubecost-prometheus-server ClusterIP 172.20.215.70 <none> 80/TCP 89m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kubecost-kubecost-prometheus-node-exporter 3 3 3 3 3 <none> 89m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kubecost-kubecost-cost-analyzer 1/1 1 1 89m
deployment.apps/kubecost-kubecost-grafana 1/1 1 1 89m
deployment.apps/kubecost-kubecost-kube-state-metrics 1/1 1 1 89m
deployment.apps/kubecost-kubecost-prometheus-server 1/1 1 1 89m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubecost-kubecost-cost-analyzer-8557d98747 1 1 1 89m
replicaset.apps/kubecost-kubecost-grafana-5966c87795 1 1 1 89m
replicaset.apps/kubecost-kubecost-kube-state-metrics-6fc4646478 1 1 1 89m
replicaset.apps/kubecost-kubecost-prometheus-server-86b676b7bc 1 1 1 89m
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/kubecost-healthtest */10 * * * * False 0 7m26s 85m
NAME COMPLETIONS DURATION AGE
job.batch/kubecost-healthtest-001 0/1 84m 84m
job.batch/kubecost-healthtest-00x 0/1 70m 70m
job.batch/kubecost-healthtest-28256810 0/1 7m26s 7m26s
Try the latest changes, which move to a dynamic service fetcher that should handle custom names.
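A dynamic lookup along these lines avoids hard-coding the Helm release name. This is a sketch, not the PR's actual fetcher: the label selector app=cost-analyzer is an assumption and should be verified against the labels in the installed chart; the URL builder is split into a function so the string logic is easy to check on its own.

```shell
#!/bin/sh
# Build the in-cluster Allocation API URL for a given Service name.
kubecost_url() {
  echo "http://$1.kubecost.svc.cluster.local:9090/model/allocation?window=1d"
}

# In the test Job, discover the cost-analyzer Service name at runtime
# (requires cluster access; label selector is an assumption):
#   SVC=$(kubectl get svc -n kubecost -l app=cost-analyzer \
#     -o jsonpath='{.items[0].metadata.name}')
#   curl -fsS "$(kubecost_url "$SVC")" || { echo "Failure"; exit 1; }

# Example using the Service name from the kubectl listing above:
kubecost_url "kubecost-kubecost-cost-analyzer"
```

With the name resolved from a label selector instead of a fixed string, the same job works whether the chart is installed as kubecost-kubecost or under a custom release name.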
Thanks for all the feedback!
Signed-off-by: chipzoller chipzoller@gmail.com
Issue #, if available:
Description of changes:
Bumps Kubecost to 1.106.0 and simplifies values.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.