Closed MitchDart closed 3 years ago
@MitchDart: The label(s) triage/support
cannot be applied, because the repository doesn't have them.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What does this mean?
One is Nginx, the other is a GCE load balancer.
Show:
```
kubectl get all,nodes,ing -A -o wide
kubectl describe <resourcetype> <resourcename> -n <resourcenamespace> ... # for all related objects like pods, services, ingress controllers, ingress objects
```
/remove-kind support
/triage needs-information
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Hi All,
I attempted to ask on Slack but this issue is now critical for us and this is my last resort. Content for websites downloads extremely slowly from South Africa. We are using Nginx Ingress deployed using Helm on GKE. I will link the values file at the end of this post.
Example: Here is a Grafana dashboard I deployed. I have two ingresses pointing to the same dashboard: one Nginx, one a GCE load balancer. The GCE one loads in a few seconds for me; the Nginx one sometimes takes up to two minutes to load all the resources. https://dashboard.sticitt.co.za (Nginx Ingress) https://dashboard-gcp.sticitt.co.za (GCE Ingress)
Nginx controller deployed using Helm: Chart version v3.24.0, Nginx Ingress version v0.44.0.
I have tried many things to solve this, but something seems to be limiting the bandwidth. If I disable HTTP/2 in my browser it loads quite a bit faster, but still nowhere near what it should be. Since HTTP/2 uses a single connection, it seems that each connection is limited in bandwidth. Any help would be extremely appreciated. Here are my Helm values:
helm-values.yaml
```
## helm upgrade nginx-controller ingress-nginx/ingress-nginx --values nginx.yaml --version 3.7.1 --namespace services
## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/configuration.md
##
controller:
  image:
    repository: k8s.gcr.io/ingress-nginx/controller
    tag: "v0.44.0"
    digest: sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: true

  # Configures the ports the nginx-controller listens on
  containerPort:
    http: 80
    https: 443

  # Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config: {}

  ## Annotations to be added to the controller config configuration configmap
  ##
  configAnnotations: {}

  # Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers
  proxySetHeaders: {}

  # Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
  addHeaders: {}

  # Optionally customize the pod dnsConfig.
  dnsConfig: {}

  # Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
  # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
  # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
  dnsPolicy: ClusterFirst

  # Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
  # Ingress status was blank because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply
  reportNodeInternalIp: false

  # Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
  # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
  # is merged
  hostNetwork: false

  ## Use host ports 80 and 443
  ## Disabled by default
  ##
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443

  ## Election ID to use for status update
  ##
  electionID: ingress-controller-leader

  ## Name of the ingress class to route through this controller
  ##
  ingressClass: nginx

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Security Context policies for controller pods
  ##
  podSecurityContext: {}

  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ###
  sysctls: {}
  # sysctls:
  #   "net.core.somaxconn": "8192"

  ## Allows customization of the source of the IP address or FQDN to report
  ## in the ingress status field. By default, it reads the information provided
  ## by the service. If disable, the status field reports the IP address of the
  ## node or nodes where an ingress controller pod is running.
  publishService:
    enabled: true
    ## Allows overriding of the publish service to bind to
    ## Must be
```
nginx-ingress.yaml
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: nginx-monitoring
  namespace: monitoring
spec:
  rules:
  - host: dashboard.sticitt.co.za
    http:
      paths:
      - backend:
          serviceName: grafana-grafana
          servicePort: 3000
  tls:
  - hosts:
    - dashboard.sticitt.co.za
    secretName: ****
```
gce-ingress.yaml
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gcp-monitoring
  namespace: monitoring
spec:
  rules:
  - host: dashboard-gcp.sticitt.co.za
    http:
      paths:
      - backend:
          serviceName: grafana-grafana
          servicePort: 3000
  tls:
  - hosts:
    - dashboard-gcp.sticitt.co.za
    secretName: ****
```
/triage support
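Since disabling HTTP/2 in the browser improved load times, one server-side check is to turn HTTP/2 off on the controller itself and compare. This is only a diagnostic sketch, not a fix: the `controller.config` key in the Helm values above is where ingress-nginx ConfigMap options go, and `use-http2` is a documented ConfigMap key.

```yaml
controller:
  config:
    # Disable HTTP/2 on the TLS listener to test whether per-connection
    # throughput improves; re-enable once the root cause is found.
    use-http2: "false"
```

If throughput is normal with this set, the bottleneck is in how the single HTTP/2 connection is handled (buffering, window sizes) rather than in the backend or the GCE path.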