Open ebiscaia opened 8 months ago
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted
label and providing further guidance.
The triage/accepted
label can be added by org members by writing /triage accepted
in a comment.
We don't test with reverse proxies. /remove-kind bug
Generally you will need to enable proxy-protocol. If you post the answers to the questions asked in the new-issue template, it may help someone comment usefully on that data. /triage needs-information
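For reference, a minimal sketch of what enabling PROXY protocol on the controller side could look like. This is an assumption-laden example, not a confirmed fix: it assumes the ConfigMap is named `ingress-nginx-controller` in the `ingress-nginx` namespace (the default for the static manifest install used here), and note that once this is on, the reverse proxy in front must also send PROXY protocol (e.g. `proxy_protocol on;` on its upstream connections), otherwise the controller will reject plain requests.

```yaml
# Sketch: ConfigMap key telling ingress-nginx that client connections arrive
# wrapped in the PROXY protocol, so it can recover the real client IP and
# connection details from the proxy in front of it.
# Assumption: default name/namespace from the static manifest install;
# merge this key into the existing ConfigMap rather than replacing it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```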
Let's see if what I got is useful:
I apologise for the long post, but the details seem to break the code blocks. Let me know how to fix it and I'll edit the post.
What happened: The page does not load CSS content due to mixed-content errors. See http://speedtest.eddienetworks.ddnsfree.com
What you expected to happen: No mixed-content errors; the page loads correctly. See http://speedtest_lb.eddienetworks.ddnsfree.com — the same application, but exposed through a LoadBalancer service type, just for comparison.
NGINX Ingress controller version
NGINX Ingress controller
Release: v1.8.1
Build: dc88dce9ea5e700f3301d16f971fa17c6cfe757d
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
Kubernetes
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3+k0s
Environment
OS: Alpine Linux v3.19 x86_64
Host: KVM/QEMU (Standard PC (Q35 + ICH9, 2009) pc-q35-8.1)
Kernel: 6.1.63-0-virt
CPU: QEMU Virtual version 2.5+ (4) @ 3.191GHz
Memory: 4934MiB
Installation: k0s
Cluster info
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready control-plane 83d v1.28.3+k0s 192.168.1.170 <none> Alpine Linux v3.19 6.1.63-0-virt containerd://1.7.8
worker1 Ready <none> 83d v1.28.4+k0s 192.168.1.171 <none> Ubuntu 23.10 6.5.0-13-generic containerd://1.7.8
worker2 Ready <none> 83d v1.28.4+k0s 192.168.1.172 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-13-amd64 containerd://1.7.8
worker3 Ready <none> 79d v1.28.4+k0s 192.168.1.173 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-13-amd64 containerd://1.7.8
How was the ingress-nginx-controller installed:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
Current State of the controller:
kubectl describe ingressclasses
Name: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.8.1
Annotations: ingressclass.kubernetes.io/is-default-class: true
Controller: k8s.io/ingress-nginx
Events: <none>
kubectl -n ingress-nginx get all -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana pod/grafana-64bd4b4ff-6qvqh 1/1 Running 1 (3h56m ago) 6h36m 10.244.1.179 worker1 <none> <none>
guacamole pod/guacamole-dpl-d9c674794-dwm87 1/1 Running 1 (3h56m ago) 6h42m 10.244.2.192 worker2 <none> <none>
homer pod/homer-dpl-5d7d6576b6-656pj 1/1 Running 1 (3h56m ago) 6h36m 10.244.2.190 worker2 <none> <none>
homer pod/homer-dpl-5d7d6576b6-98qmp 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.187 worker2 <none> <none>
homer pod/homer-dpl-5d7d6576b6-bf74h 1/1 Running 1 (3h56m ago) 4h34m 10.244.1.181 worker1 <none> <none>
ingress-nginx pod/ingress-nginx-controller-8466cbd75b-lw2sg 1/1 Running 0 28m 10.244.2.206 worker2 <none> <none>
jellyfin pod/jellyfin-7994d8b8c-kcmxm 1/1 Running 0 3h10m 10.244.2.195 worker2 <none> <none>
kube-system pod/coredns-85df575cdb-6tjp2 1/1 Running 8 (33m ago) 6d9h 10.244.0.80 master <none> <none>
kube-system pod/coredns-85df575cdb-n4lmr 1/1 Running 1 (3h56m ago) 4h34m 10.244.1.177 worker1 <none> <none>
kube-system pod/konnectivity-agent-2rf4f 1/1 Running 13 (3h56m ago) 11d 10.244.1.174 worker1 <none> <none>
kube-system pod/konnectivity-agent-mmfnm 1/1 Running 10 (33m ago) 11d 10.244.0.78 master <none> <none>
kube-system pod/konnectivity-agent-q8fj8 1/1 Running 15 (3h54m ago) 11d 10.244.3.130 worker3 <none> <none>
kube-system pod/konnectivity-agent-tj6m8 1/1 Running 15 (3h56m ago) 11d 10.244.2.175 worker2 <none> <none>
kube-system pod/kube-proxy-25jq6 1/1 Running 13 (3h56m ago) 11d 192.168.1.171 worker1 <none> <none>
kube-system pod/kube-proxy-2nzkd 1/1 Running 13 (3h56m ago) 11d 192.168.1.172 worker2 <none> <none>
kube-system pod/kube-proxy-6g2ft 1/1 Running 13 (3h54m ago) 11d 192.168.1.173 worker3 <none> <none>
kube-system pod/kube-proxy-d5p7p 1/1 Running 10 (33m ago) 11d 192.168.1.170 master <none> <none>
kube-system pod/kube-router-lwsmq 1/1 Running 21 (3h56m ago) 11d 192.168.1.171 worker1 <none> <none>
kube-system pod/kube-router-qn7g2 1/1 Running 105 (145m ago) 11d 192.168.1.172 worker2 <none> <none>
kube-system pod/kube-router-s7fb5 1/1 Running 85 (3h54m ago) 11d 192.168.1.173 worker3 <none> <none>
kube-system pod/kube-router-td5mr 1/1 Running 10 (33m ago) 11d 192.168.1.170 master <none> <none>
kube-system pod/metrics-server-7cdb99bf49-b6b7r 1/1 Running 44 (32m ago) 83d 10.244.0.79 master <none> <none>
kuma pod/kuma-76756967fc-2m8np 1/1 Running 1 (3h56m ago) 6h39m 10.244.1.180 worker1 <none> <none>
longhorn-system pod/csi-attacher-7b5979f545-289w5 1/1 Running 3 (104m ago) 4h34m 10.244.1.173 worker1 <none> <none>
longhorn-system pod/csi-attacher-7b5979f545-4lj8l 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.171 worker2 <none> <none>
longhorn-system pod/csi-attacher-7b5979f545-622pf 1/1 Running 4 (31m ago) 4h34m 10.244.2.185 worker2 <none> <none>
longhorn-system pod/csi-provisioner-55d544784d-68nxz 1/1 Running 4 (33m ago) 4h34m 10.244.1.168 worker1 <none> <none>
longhorn-system pod/csi-provisioner-55d544784d-cpk46 1/1 Running 2 (66m ago) 4h34m 10.244.2.170 worker2 <none> <none>
longhorn-system pod/csi-provisioner-55d544784d-pmlzj 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.178 worker2 <none> <none>
longhorn-system pod/csi-resizer-5bd864fbf6-gbstg 1/1 Running 3 (104m ago) 4h34m 10.244.1.169 worker1 <none> <none>
longhorn-system pod/csi-resizer-5bd864fbf6-jllw7 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.169 worker2 <none> <none>
longhorn-system pod/csi-resizer-5bd864fbf6-n8hj5 1/1 Running 4 (33m ago) 6h47m 10.244.2.167 worker2 <none> <none>
longhorn-system pod/csi-snapshotter-8dcd84758-6zksj 1/1 Running 2 (66m ago) 4h34m 10.244.2.166 worker2 <none> <none>
longhorn-system pod/csi-snapshotter-8dcd84758-chj5q 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.164 worker2 <none> <none>
longhorn-system pod/csi-snapshotter-8dcd84758-k8s8n 1/1 Running 4 (33m ago) 4h34m 10.244.1.175 worker1 <none> <none>
longhorn-system pod/engine-image-ei-74783864-45tn2 1/1 Running 16 (3h56m ago) 40d 10.244.1.171 worker1 <none> <none>
longhorn-system pod/engine-image-ei-74783864-n42mz 1/1 Running 4 (3h54m ago) 34h 10.244.3.129 worker3 <none> <none>
longhorn-system pod/engine-image-ei-74783864-q469n 1/1 Running 6 2d7h 10.244.2.172 worker2 <none> <none>
longhorn-system pod/instance-manager-001d03793d5405e50d13a284d12bf76a 1/1 Running 0 3h54m 10.244.3.142 worker3 <none> <none>
longhorn-system pod/instance-manager-45beb22e9f4dafc6eabe622e2dc3f122 1/1 Running 0 3h54m 10.244.1.178 worker1 <none> <none>
longhorn-system pod/instance-manager-76833f9f145c4c553328a46c88deb2d1 1/1 Running 0 3h54m 10.244.2.186 worker2 <none> <none>
longhorn-system pod/longhorn-csi-plugin-2g6x6 3/3 Running 25 (3h54m ago) 7h40m 10.244.3.131 worker3 <none> <none>
longhorn-system pod/longhorn-csi-plugin-48bww 3/3 Running 50 (32m ago) 2d7h 10.244.2.165 worker2 <none> <none>
longhorn-system pod/longhorn-csi-plugin-wkch4 3/3 Running 11 (3h54m ago) 7h25m 10.244.1.176 worker1 <none> <none>
longhorn-system pod/longhorn-driver-deployer-96cb874b9-87r4v 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.177 worker2 <none> <none>
longhorn-system pod/longhorn-manager-6hbxv 1/1 Running 5 (3h54m ago) 12h 10.244.2.173 worker2 <none> <none>
longhorn-system pod/longhorn-manager-l5pbn 1/1 Running 5 (3h54m ago) 8h 10.244.3.132 worker3 <none> <none>
longhorn-system pod/longhorn-manager-wf22b 1/1 Running 3 (3h55m ago) 7h38m 10.244.1.170 worker1 <none> <none>
longhorn-system pod/longhorn-ui-67bfdc7cf9-68hf6 1/1 Running 4 (3h55m ago) 4h34m 10.244.1.172 worker1 <none> <none>
longhorn-system pod/longhorn-ui-67bfdc7cf9-7bh4x 1/1 Running 4 (3h54m ago) 4h34m 10.244.2.176 worker2 <none> <none>
longhorn-system pod/share-manager-pvc-16ee8b72-aeed-48da-85ff-0d6646ee5df8 1/1 Running 0 3h53m 10.244.3.145 worker3 <none> <none>
longhorn-system pod/share-manager-pvc-3d5fc6c2-294f-4ef7-a826-60068f858348 1/1 Running 0 3h53m 10.244.3.146 worker3 <none> <none>
longhorn-system pod/share-manager-pvc-5c6b76b8-b373-48b6-b88b-232bb6a3bc08 1/1 Running 0 3h53m 10.244.3.144 worker3 <none> <none>
longhorn-system pod/share-manager-pvc-7ab5716c-33cc-4ac0-a327-73fa8e61d3f1 1/1 Running 0 3h53m 10.244.3.150 worker3 <none> <none>
longhorn-system pod/share-manager-pvc-9452b7a8-9511-459b-91db-1505f4d6852b 1/1 Running 0 3h53m 10.244.3.148 worker3 <none> <none>
longhorn-system pod/share-manager-pvc-a0c2f323-a4a3-4832-adf3-ccc8be7afddf 1/1 Running 0 3h53m 10.244.3.149 worker3 <none> <none>
longhorn-system pod/share-manager-pvc-c5602baf-d814-4f27-a21d-2baee3a787d7 1/1 Running 0 179m 10.244.3.151 worker3 <none> <none>
longhorn-system pod/share-manager-pvc-df002cf8-a820-4396-849e-65867dfae96e 1/1 Running 0 3h53m 10.244.3.143 worker3 <none> <none>
metallb-system pod/controller-595f88d88f-w7ttj 1/1 Running 16 (170m ago) 4h34m 10.244.2.181 worker2 <none> <none>
metallb-system pod/speaker-grbmp 1/1 Running 14 (32m ago) 4d1h 192.168.1.170 master <none> <none>
metallb-system pod/speaker-kr6gj 1/1 Running 24 12h 192.168.1.172 worker2 <none> <none>
metallb-system pod/speaker-m2xkc 1/1 Running 4 (3h55m ago) 7h25m 192.168.1.171 worker1 <none> <none>
metallb-system pod/speaker-w4k75 1/1 Running 10 (3h54m ago) 7h40m 192.168.1.173 worker3 <none> <none>
navid pod/navidrome-794f456777-qdz4c 1/1 Running 1 (3h56m ago) 6h38m 10.244.2.188 worker2 <none> <none>
piwigo pod/mariadb-dpl-5cfd67bb86-sqcbm 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.179 worker2 <none> <none>
piwigo pod/piwigo-65bccc5b4f-qb4zz 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.183 worker2 <none> <none>
podgrab pod/podgrab-6c58d7b5f6-6c66x 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.189 worker2 <none> <none>
portainer pod/portainer-agent-7b78fd9984-wmdzh 1/1 Running 8 (3h54m ago) 4h34m 10.244.2.174 worker2 <none> <none>
prometheus pod/prometheus-6684c8d569-n62k4 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.193 worker2 <none> <none>
speedtest pod/mariadb-dpl-86d69ff55d-q5r59 1/1 Running 2 (3h56m ago) 4h34m 10.244.2.184 worker2 <none> <none>
speedtest pod/speedtest-6d67c8b45-7zqjc 1/1 Running 0 46m 10.244.2.203 worker2 <none> <none>
vaultwarden pod/vaultwarden-5f494849-qlrl4 1/1 Running 1 (3h56m ago) 4h34m 10.244.2.182 worker2 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/konnectivity-agent 4 4 4 4 4 kubernetes.io/os=linux 83d konnectivity-agent quay.io/k0sproject/apiserver-network-proxy-agent:v0.1.4 k8s-app=konnectivity-agent
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 kubernetes.io/os=linux 83d kube-proxy quay.io/k0sproject/kube-proxy:v1.28.3 k8s-app=kube-proxy
kube-system daemonset.apps/kube-router 4 4 4 4 4
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
grafana deployment.apps/grafana 1/1 1 1 6d3h grafana grafana/grafana-oss app=grafana
guacamole deployment.apps/guacamole-dpl 1/1 1 1 6d2h guacamole flcontainers/guacamole app=guacamole
homer deployment.apps/homer-dpl 3/3 3 3 6d doughnut b4bz/homer app=homer
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 82d controller registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
jellyfin deployment.apps/jellyfin 1/1 1 1 3h10m jellyfin lscr.io/linuxserver/jellyfin:latest app=jellyfin
kube-system deployment.apps/coredns 2/2 2 2 83d coredns quay.io/k0sproject/coredns:1.11.1 k8s-app=kube-dns
kube-system deployment.apps/metrics-server 1/1 1 1 83d metrics-server registry.k8s.io/metrics-server/metrics-server:v0.6.4 k8s-app=metrics-server
kuma deployment.apps/kuma 1/1 1 1 6d8h kuma louislam/uptime-kuma app=kuma
longhorn-system deployment.apps/csi-attacher 3/3 3 3 6d8h csi-attacher longhornio/csi-attacher:v4.2.0 app=csi-attacher
longhorn-system deployment.apps/csi-provisioner 3/3 3 3 6d8h csi-provisioner longhornio/csi-provisioner:v3.4.1 app=csi-provisioner
longhorn-system deployment.apps/csi-resizer 3/3 3 3 6d8h csi-resizer longhornio/csi-resizer:v1.7.0 app=csi-resizer
longhorn-system deployment.apps/csi-snapshotter 3/3 3 3 6d8h csi-snapshotter longhornio/csi-snapshotter:v6.2.1 app=csi-snapshotter
longhorn-system deployment.apps/longhorn-driver-deployer 1/1 1 1 81d longhorn-driver-deployer longhornio/longhorn-manager:v1.5.1 app=longhorn-driver-deployer
longhorn-system deployment.apps/longhorn-ui 2/2 2 2 81d longhorn-ui longhornio/longhorn-ui:v1.5.1 app=longhorn-ui
metallb-system deployment.apps/controller 1/1 1 1 82d controller quay.io/metallb/controller:v0.13.10 app=metallb,component=controller
navid deployment.apps/navidrome 1/1 1 1 4d1h navi deluan/navidrome:0.49.2 app=navidrome
piwigo deployment.apps/mariadb-dpl 1/1 1 1 3d1h mariadb linuxserver/mariadb app=mariadb
piwigo deployment.apps/piwigo 1/1 1 1 2d6h piwigo linuxserver/piwigo app=piwigo
podgrab deployment.apps/podgrab 1/1 1 1 6d1h podgrab akhilrex/podgrab app=podgrab
portainer deployment.apps/portainer-agent 1/1 1 1 66d portainer-agent portainer/agent:2.19.1 app=portainer-agent
prometheus deployment.apps/prometheus 1/1 1 1 6d6h prometheus prom/prometheus app=prometheus
speedtest deployment.apps/mariadb-dpl 1/1 1 1 2d2h mariadb linuxserver/mariadb app=mariadb
speedtest deployment.apps/speedtest 1/1 1 1 163m speedtest ghcr.io/alexjustesen/speedtest-tracker:v0.13.3 app=speedtest
vaultwarden deployment.apps/vaultwarden 1/1 1 1 7d20h vaultwarden vaultwarden/server app=vaultwarden
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
grafana replicaset.apps/grafana-64bd4b4ff 1 1 1 6d3h grafana grafana/grafana-oss app=grafana,pod-template-hash=64bd4b4ff
guacamole replicaset.apps/guacamole-dpl-d9c674794 1 1 1 6d2h guacamole flcontainers/guacamole app=guacamole,pod-template-hash=d9c674794
homer replicaset.apps/homer-dpl-5d7d6576b6 3 3 3 6d doughnut b4bz/homer app=homer,pod-template-hash=5d7d6576b6
ingress-nginx replicaset.apps/ingress-nginx-controller-6858cb9dd9 0 0 0 33m controller registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6858cb9dd9
ingress-nginx replicaset.apps/ingress-nginx-controller-79d66f886c 0 0 0 82d controller registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=79d66f886c
ingress-nginx replicaset.apps/ingress-nginx-controller-7f6c4db675 0 0 0 68m controller registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7f6c4db675
ingress-nginx replicaset.apps/ingress-nginx-controller-8466cbd75b 1 1 1 28m controller registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=8466cbd75b
jellyfin replicaset.apps/jellyfin-7994d8b8c 1 1 1 3h10m jellyfin lscr.io/linuxserver/jellyfin:latest app=jellyfin,pod-template-hash=7994d8b8c
kube-system replicaset.apps/coredns-85df575cdb 2 2 2 11d coredns quay.io/k0sproject/coredns:1.11.1 k8s-app=kube-dns,pod-template-hash=85df575cdb
kube-system replicaset.apps/coredns-878bb57ff 0 0 0 83d coredns quay.io/k0sproject/coredns:1.10.1 k8s-app=kube-dns,pod-template-hash=878bb57ff
kube-system replicaset.apps/metrics-server-7cdb99bf49 1 1 1 83d metrics-server registry.k8s.io/metrics-server/metrics-server:v0.6.4 k8s-app=metrics-server,pod-template-hash=7cdb99bf49
kuma replicaset.apps/kuma-76756967fc 1 1 1 6d8h kuma louislam/uptime-kuma app=kuma,pod-template-hash=76756967fc
longhorn-system replicaset.apps/csi-attacher-7b5979f545 3 3 3 6d8h csi-attacher longhornio/csi-attacher:v4.2.0 app=csi-attacher,pod-template-hash=7b5979f545
longhorn-system replicaset.apps/csi-provisioner-55d544784d 3 3 3 6d8h csi-provisioner longhornio/csi-provisioner:v3.4.1 app=csi-provisioner,pod-template-hash=55d544784d
longhorn-system replicaset.apps/csi-resizer-5bd864fbf6 3 3 3 6d8h csi-resizer longhornio/csi-resizer:v1.7.0 app=csi-resizer,pod-template-hash=5bd864fbf6
longhorn-system replicaset.apps/csi-snapshotter-8dcd84758 3 3 3 6d8h csi-snapshotter longhornio/csi-snapshotter:v6.2.1 app=csi-snapshotter,pod-template-hash=8dcd84758
longhorn-system replicaset.apps/longhorn-driver-deployer-96cb874b9 1 1 1 81d longhorn-driver-deployer longhornio/longhorn-manager:v1.5.1 app=longhorn-driver-deployer,pod-template-hash=96cb874b9
longhorn-system replicaset.apps/longhorn-ui-67bfdc7cf9 2 2 2 81d longhorn-ui longhornio/longhorn-ui:v1.5.1 app=longhorn-ui,pod-template-hash=67bfdc7cf9
metallb-system replicaset.apps/controller-595f88d88f 1 1 1 82d controller quay.io/metallb/controller:v0.13.10 app=metallb,component=controller,pod-template-hash=595f88d88f
navid replicaset.apps/navidrome-794f456777 1 1 1 4d1h navi deluan/navidrome:0.49.2 app=navidrome,pod-template-hash=794f456777
piwigo replicaset.apps/mariadb-dpl-5cfd67bb86 1 1 1 3d1h mariadb linuxserver/mariadb app=mariadb,pod-template-hash=5cfd67bb86
piwigo replicaset.apps/piwigo-65bccc5b4f 1 1 1 2d6h piwigo linuxserver/piwigo app=piwigo,pod-template-hash=65bccc5b4f
podgrab replicaset.apps/podgrab-6c58d7b5f6 1 1 1 6d1h podgrab akhilrex/podgrab app=podgrab,pod-template-hash=6c58d7b5f6
portainer replicaset.apps/portainer-agent-7b78fd9984 1 1 1 66d portainer-agent portainer/agent:2.19.1 app=portainer-agent,pod-template-hash=7b78fd9984
prometheus replicaset.apps/prometheus-6684c8d569 1 1 1 6d6h prometheus prom/prometheus app=prometheus,pod-template-hash=6684c8d569
speedtest replicaset.apps/mariadb-dpl-86d69ff55d 1 1 1 2d2h mariadb linuxserver/mariadb app=mariadb,pod-template-hash=86d69ff55d
speedtest replicaset.apps/speedtest-6d67c8b45 1 1 1 154m speedtest ghcr.io/alexjustesen/speedtest-tracker:v0.13.3 app=speedtest,pod-template-hash=6d67c8b45
speedtest replicaset.apps/speedtest-6fd54665f8 0 0 0 158m speedtest ghcr.io/alexjustesen/speedtest-tracker:v0.13.4 app=speedtest,pod-template-hash=6fd54665f8
speedtest replicaset.apps/speedtest-78bb6b566 0 0 0 163m speedtest ghcr.io/alexjustesen/speedtest-tracker app=speedtest,pod-template-hash=78bb6b566
speedtest replicaset.apps/speedtest-8489d978c7 0 0 0 158m speedtest ghcr.io/alexjustesen/speedtest-tracker:latest app=speedtest,pod-template-hash=8489d978c7
vaultwarden replicaset.apps/vaultwarden-54757947f5 0 0 0 7d20h vaultwarden vaultwarden/server app=vaultwarden,pod-template-hash=54757947f5
vaultwarden replicaset.apps/vaultwarden-5f494849 1 1 1 7d20h vaultwarden vaultwarden/server app=vaultwarden,pod-template-hash=5f494849
NAMESPACE NAME COMPLETIONS DURATION AGE CONTAINERS IMAGES SELECTOR
ingress-nginx job.batch/ingress-nginx-admission-create 1/1 24s 82d create registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b batch.kubernetes.io/controller-uid=4469c8e2-6709-48a1-8cc5-3c43425196d4
ingress-nginx job.batch/ingress-nginx-admission-patch 1/1 26s 82d patch registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b batch.kubernetes.io/controller-uid=063a78bd-4711-4615-b234-509c5bfd98ff
- `kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>`
Name: ingress-nginx-controller-8466cbd75b-lw2sg
Namespace: ingress-nginx
Priority: 0
Service Account: ingress-nginx
Node: worker2/192.168.1.172
Start Time: Thu, 30 Nov 2023 19:21:46 +1100
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.8.1
pod-template-hash=8466cbd75b
Annotations:
Normal Scheduled 38m default-scheduler Successfully assigned ingress-nginx/ingress-nginx-controller-8466cbd75b-lw2sg to worker2
Normal Pulled 38m kubelet Container image "registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd" already present on machine
Normal Created 38m kubelet Created container controller
Normal Started 38m kubelet Started container controller
Normal RELOAD 38m nginx-ingress-controller NGINX reload triggered due to a change in configuration
- `kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
        app.kubernetes.io/instance=ingress-nginx
        app.kubernetes.io/name=ingress-nginx
        app.kubernetes.io/part-of=ingress-nginx
        app.kubernetes.io/version=1.8.1
Annotations: metallb.universe.tf/ip-allocated-from-pool: pool
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.24.64
IPs: 10.111.24.64
LoadBalancer Ingress: 192.168.1.240
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32157/TCP
Endpoints: 10.244.2.206:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30789/TCP
Endpoints: 10.244.2.206:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32304
Events:
Type Reason Age From Message
Normal nodeAssigned 39m (x23 over 156m) metallb-speaker announcing from node "worker2" with protocol "layer2"
- **Current state of ingress object, if applicable**:
- `kubectl -n <appnnamespace> get all,ing -o wide`
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mariadb-dpl-86d69ff55d-q5r59 1/1 Running 2 (4h10m ago) 4h48m 10.244.2.184 worker2
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/mariadb-svc ClusterIP 10.103.125.252
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/mariadb-dpl 1/1 1 1 2d3h mariadb linuxserver/mariadb app=mariadb
deployment.apps/speedtest 1/1 1 1 177m speedtest ghcr.io/alexjustesen/speedtest-tracker:v0.13.3 app=speedtest
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/mariadb-dpl-86d69ff55d 1 1 1 2d3h mariadb linuxserver/mariadb app=mariadb,pod-template-hash=86d69ff55d
replicaset.apps/speedtest-6d67c8b45 1 1 1 168m speedtest ghcr.io/alexjustesen/speedtest-tracker:v0.13.3 app=speedtest,pod-template-hash=6d67c8b45
replicaset.apps/speedtest-6fd54665f8 0 0 0 172m speedtest ghcr.io/alexjustesen/speedtest-tracker:v0.13.4 app=speedtest,pod-template-hash=6fd54665f8
replicaset.apps/speedtest-78bb6b566 0 0 0 177m speedtest ghcr.io/alexjustesen/speedtest-tracker app=speedtest,pod-template-hash=78bb6b566
replicaset.apps/speedtest-8489d978c7 0 0 0 173m speedtest ghcr.io/alexjustesen/speedtest-tracker:latest app=speedtest,pod-template-hash=8489d978c7
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/speedtest-ing nginx speedtest.eddienetworks.ddnsfree.com 192.168.1.240 80 2d2h
- `kubectl -n <appnamespace> describe ing <ingressname>`
Name: speedtest-ing
Labels:
Rules:
  speedtest.eddienetworks.ddnsfree.com
    / speedtest-svc:3344 (10.244.2.203:80)
Annotations:
Normal Sync 84m nginx-ingress-controller Scheduled for sync
Normal Sync 65m nginx-ingress-controller Scheduled for sync
Normal Sync 50m nginx-ingress-controller Scheduled for sync
Normal Sync 45m nginx-ingress-controller Scheduled for sync
Normal Sync 44m nginx-ingress-controller Scheduled for sync
- If applicable, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
curl https://speedtest.eddienetworks.ddnsfree.com -v
GET / HTTP/2
Host: speedtest.eddienetworks.ddnsfree.com
User-Agent: curl/8.4.0
Accept: */*
Also, the load-balancer version:
GET / HTTP/2
Host: speedtest_lb.eddienetworks.ddnsfree.com
User-Agent: curl/8.4.0
Accept: */*
Other: nginx config of the reverse proxy:
upstream stest {
server 192.168.1.240;
}
server {
listen 80;
server_name speedtest.eddienetworks.ddnsfree.com *.speedtest.eddienetworks.ddnsfree.com;
return 301 https://$host$request_uri;
}
server {
server_name speedtest.eddienetworks.ddnsfree.com *.speedtest.eddienetworks.ddnsfree.com;
location / {
proxy_pass http://stest/;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
}
listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/eddienetworks.ddnsfree.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/eddienetworks.ddnsfree.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
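For what it's worth, one thing about the block above: the forwarding headers are only set in `location /` of the TLS server, and nothing in the cluster is configured to trust them yet. A hedged variant of the proxy stanza (same `stest` upstream and hostnames as above; hardcoding `https` is an assumption that this server block only ever terminates TLS, which holds here since port 80 redirects):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name speedtest.eddienetworks.ddnsfree.com *.speedtest.eddienetworks.ddnsfree.com;

    location / {
        proxy_pass http://stest/;
        proxy_set_header Host $host;
        # This server block only terminates TLS, so the effective client
        # scheme is always https; hardcoding it removes any ambiguity.
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # certificate directives as in the original block (managed by Certbot)
}
```

These headers only matter downstream if ingress-nginx is told to honor them (the `use-forwarded-headers` ConfigMap option); by default the controller overwrites X-Forwarded-Proto with the scheme of its own (plain-HTTP) connection.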
Also from the ingress logs:
location / {
set $namespace "speedtest";
set $ingress_name "speedtest-ing";
set $service_name "speedtest-svc";
set $service_port "3344";
set $location_path "/";
set $global_rate_limit_exceeding n;
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = false,
ssl_redirect = true,
force_no_ssl_redirect = false,
preserve_trailing_slash = false,
use_port_in_redirects = false,
global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
})
balancer.rewrite()
plugins.run()
}
# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
# other authentication method such as basic auth or external auth useless - all requests will be allowed.
#access_by_lua_block {
#}
header_filter_by_lua_block {
lua_ingress.header()
plugins.run()
}
body_filter_by_lua_block {
plugins.run()
}
log_by_lua_block {
balancer.log()
monitor.call()
plugins.run()
}
port_in_redirect off;
set $balancer_ewma_score -1;
set $proxy_upstream_name "speedtest-speedtest-svc-3344";
set $proxy_host $proxy_upstream_name;
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
set $proxy_alternative_upstream_name "";
client_max_body_size 1m;
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Forwarded-Scheme $pass_access_scheme;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering off;
proxy_buffer_size 4k;
proxy_buffers 4 4k;
proxy_max_temp_file_size 1024m;
proxy_request_buffering on;
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout;
proxy_next_upstream_timeout 0;
proxy_next_upstream_tries 3;
proxy_pass http://upstream_balancer;
proxy_redirect off;
}
}
## end server speedtest.eddienetworks.ddnsfree.com
Thanks for the attention
Just to test, I installed a k3s VM (as k3s ships with the Traefik ingress). The problem also happens there.
Hi all,
Sorry if this does not follow the procedure for posting my problem.
I am trying to self-host speedtest-tracker (https://github.com/alexjustesen/speedtest-tracker). My setup is a Raspberry Pi running an Nginx reverse proxy, and a VM with k0s, MetalLB and Nginx Ingress. The Raspberry Pi provides access to all my services (Proxmox, HA, ...) and passes the services that live in Kubernetes on to Nginx Ingress. The problem is that with speedtest-tracker the pages do not load correctly if I use the ingress: I get the
blocked:mixed-content error.
Things I have tried so far:
1. Fix the reverse proxy: I added
proxy_set_header X-Forwarded-Proto $scheme;
on the Raspberry Pi and created a LoadBalancer service in my deployment, and it worked. That was actually the initial configuration (normally I test things with a LoadBalancer first and then move to Ingress).
2. Remove the reverse proxy: I removed the entry and created one in /etc/hosts. Worked. That was just to check whether there was any other issue with the ingress, and there was not. I went back to using the reverse proxy afterwards.
3. Add annotations to the ingress of the application:
- nginx.ingress.kubernetes.io/proxy-set-header: "X-Forwarded-Proto $scheme": did not work.
- nginx.ingress.kubernetes.io/force-ssl-redirect: "true": produces the error ERR_TOO_MANY_REDIRECTS.
4. Edit the nginx ConfigMap: add use-forwarded-headers: true. Did not work.
Here are the links for comparison: http://speedtest.eddienetworks.ddnsfree.com http://speedtest_lb.eddienetworks.ddnsfree.com (temporary)
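In case it helps anyone reproduce, this is a sketch of the ConfigMap change I understand item 4 to mean, with heavy assumptions flagged: the name/namespace are the defaults for the static manifest install referenced earlier (the key must go into the controller's own ConfigMap, not a new one), and the CIDR is only my guess based on the 192.168.1.x addresses in the node list above. The application itself (speedtest-tracker appears to be Laravel-based) may additionally need to be told that its public URL is https and that the proxy is trusted.

```yaml
# Sketch: ConfigMap keys that make ingress-nginx honor the scheme/host/port
# headers set by an upstream reverse proxy. Merge into the existing
# ingress-nginx-controller ConfigMap; a controller reload follows the change.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # default for the static manifest install
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"        # trust X-Forwarded-Proto/Host/Port
  proxy-real-ip-cidr: "192.168.1.0/24" # assumption: the Raspberry Pi's LAN
```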
Thanks