siderolabs / omni-feedback

Omni feature requests, bug reports
https://www.siderolabs.com/platform/saas-for-kubernetes/
MIT License

[bug] Cannot install kube-prometheus-stack via https://*.kubernetes.omni.siderolabs.io #40

Closed: gerhard closed this issue 1 year ago

gerhard commented 1 year ago


Current Behavior

Installing the kube-prometheus-stack Helm chart fails with: Error: create: failed to create: the server responded with the status code 413 but did not return more information (post secrets).

Here is the full error (when running with --debug):

history.go:56: [debug] getting history for release kube-prometheus-stack
Release "kube-prometheus-stack" does not exist. Installing it now.
install.go:200: [debug] Original chart version: ""
install.go:217: [debug] CHART PATH: /Users/gerhard/Library/Caches/helm/repository/kube-prometheus-stack-45.23.0.tgz

client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD alertmanagerconfigs.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD alertmanagers.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD podmonitors.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD probes.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD prometheuses.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD prometheusrules.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD servicemonitors.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
install.go:160: [debug] CRD thanosrulers.monitoring.coreos.com is already present. Skipping.
client.go:134: [debug] creating 1 resource(s)
Error: create: failed to create: the server responded with the status code 413 but did not return more information (post secrets)
helm.go:84: [debug] the server responded with the status code 413 but did not return more information (post secrets)
create: failed to create
helm.sh/helm/v3/pkg/storage/driver.(*Secrets).Create
    helm.sh/helm/v3/pkg/storage/driver/secrets.go:164
helm.sh/helm/v3/pkg/storage.(*Storage).Create
    helm.sh/helm/v3/pkg/storage/storage.go:69
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
    helm.sh/helm/v3/pkg/action/install.go:365
main.runInstall
    helm.sh/helm/v3/cmd/helm/install.go:286
main.newUpgradeCmd.func2
    helm.sh/helm/v3/cmd/helm/upgrade.go:130
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra@v1.6.1/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra@v1.6.1/command.go:1044
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra@v1.6.1/command.go:968
main.main
    helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
    runtime/proc.go:250
runtime.goexit
    runtime/asm_amd64.s:1598

I suspect that the proxy serving https://*.kubernetes.omni.siderolabs.io is limiting the request body size, which is what produces the 413. FWIW, a related report: https://github.com/prometheus-community/helm-charts/issues/3205
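
For context, Helm 3 stores the full rendered release manifest (gzipped, then base64-encoded) in that Secret, and kube-prometheus-stack renders an unusually large manifest, which is presumably what trips the body-size limit. As a rough sketch of how to gauge the payload (assumes the repo from Steps To Reproduce below is already added; the on-wire size will differ a bit because of Helm's compression and the surrounding Secret JSON):

helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack --version 45.23.0 | wc -c
# ^ size of the raw rendered manifest
helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack --version 45.23.0 | gzip | wc -c
# ^ roughly what Helm compresses into the release Secret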

Expected Behavior

Installing the kube-prometheus-stack Helm chart via https://*.kubernetes.omni.siderolabs.io should just work.

Steps To Reproduce

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --install --debug \
  --namespace kube-prometheus-stack --create-namespace
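
The failing run above resolved chart version 45.23.0 (see the CHART PATH line in the debug output). To reproduce against that exact version, the same command can be pinned, e.g.:

helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --install --debug --version 45.23.0 \
  --namespace kube-prometheus-stack --create-namespace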


smira commented 1 year ago

Thanks for reporting this; a fix is coming.

smira commented 1 year ago

This problem should be resolved now.

gerhard commented 1 year ago

OK! I have updated the node to Talos v1.4.1. Will report back when I have tested the new behaviour.

gerhard commented 1 year ago

This now works as expected on both clusters: one running Talos v1.4.0 and the other running Talos v1.4.1.

Both clusters managed by Omni v0.8.1.
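
For anyone double-checking the same thing, a minimal verification sketch (release and namespace names as in the steps above):

helm status kube-prometheus-stack --namespace kube-prometheus-stack
kubectl get pods --namespace kube-prometheus-stack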

Closing - thank you! 💪