kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/

Unable to deploy second nginx ingress controller to ingest syslog #5996

Closed · andelhie closed this issue 4 years ago

andelhie commented 4 years ago

NGINX Ingress controller version: 1.8.0

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-13T18:08:14Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

Environment:

What happened:

I updated the values.yaml file to deploy a new ingress controller to the project namespace tele-dev, but the only thing that gets deployed is a new Service:

nginx-syslog-nginx-ingress      LoadBalancer   10.105.99.85     10.40.157.36   80:31324/TCP,443:30103/TCP   3m58s

What you expected to happen:

I was hoping to get a new ingress controller to take in syslog from outside Kubernetes. I am building a telemetry platform that ingests a large volume of syslog data.

How to reproduce it:

helm install nginx-syslog nginx-stable/nginx-ingress --values Documents/kube-elk-helm/nginx/values-dev.yaml -n tele-dev

Anything else we need to know:

Here is a copy of my values.yaml file:

## nginx configuration
## Ref: https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md
##
controller:
  name: controller
  image:
    # registry value can be any of the following:
    #  us.gcr.io
    #  eu.gcr.io
    #  asia.gcr.io
    registry: us.gcr.io
    repository: k8s-artifacts-prod/ingress-nginx/controller
    tag: "v0.34.1"
    # digest: sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: true

  # This will fix the issue of HPA not being able to read the metrics.
  # Note that if you enable it for existing deployments, it won't work as the labels are immutable.
  # We recommend setting this to true for new deployments.
  useComponentLabel: false

  # Override component label key
  # componentLabelKeyOverride:

  # Configures the ports the nginx-controller listens on
  containerPort:
    http: 80
    https: 443

  # Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  config: {}

  # Maxmind license key to download GeoLite2 Databases
  # https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases
  maxmindLicenseKey: ""

  # Will add custom headers before sending traffic to backends according to https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers
  proxySetHeaders: {}

  # Will add custom headers before sending response traffic to the client according to: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#add-headers
  addHeaders: {}

  # Required for use with CNI based kubernetes installations (such as ones set up by kubeadm),
  # since CNI and hostport don't mix yet. Can be deprecated once https://github.com/kubernetes/kubernetes/issues/23920
  # is merged
  hostNetwork: false

  # Optionally customize the pod dnsConfig.
  dnsConfig: {}

  # Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true'.
  # By default, while using host network, name resolution uses the host's DNS. If you wish nginx-controller
  # to keep resolving names inside the k8s network, use ClusterFirstWithHostNet.
  dnsPolicy: ClusterFirst

  # Bare-metal considerations via the host network https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
  # In a host-network configuration there is no Service exposing the NGINX Ingress controller, so the Ingress status field would be blank; the default --publish-service flag used in standard cloud setups does not apply
  reportNodeInternalIp: false

  ## Use host ports 80 and 443
  daemonset:
    useHostPort: false

    hostPorts:
      http: 80
      https: 443

  ## Required only if defaultBackend.enabled = false
  ## Must be <namespace>/<service_name>
  ##
  defaultBackendService: ""

  ## Election ID to use for status update
  ##
  electionID: ingress-controller-leader

  ## Name of the ingress class to route through this controller
  ##
  ingressClass: syslog

  # labels to add to the deployment metadata
  deploymentLabels: {}

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Security Context policies for controller pods
  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ##
  podSecurityContext: {}

  ## Allows customization of the external service
  ## the ingress will be bound to via DNS
  publishService:
    enabled: false
    ## Allows overriding of the publish service to bind to
    ## Must be <namespace>/<service_name>
    ##
    pathOverride: ""

  ## Limit the scope of the controller
  ##
  scope:
    enabled: true
    namespace: "tele-dev"   # defaults to .Release.Namespace

  ## Allows customization of the configmap / nginx-configmap namespace
  ##
  configMapNamespace: ""   # defaults to .Release.Namespace

  ## Allows customization of the tcp-services-configmap namespace
  ##
  tcp:
    configMapNamespace: ""   # defaults to .Release.Namespace

  ## Allows customization of the udp-services-configmap namespace
  ##
  udp:
    configMapNamespace: ""   # defaults to .Release.Namespace

  ## Additional command line arguments to pass to nginx-ingress-controller
  ## E.g. to specify the default SSL certificate you can use
  ## extraArgs:
  ##   default-ssl-certificate: "<namespace>/<secret_name>"
  extraArgs: {}

  ## Additional environment variables to set
  extraEnvs: []
  # extraEnvs:
  #   - name: FOO
  #     valueFrom:
  #       secretKeyRef:
  #         key: FOO
  #         name: secret-resource

  ## DaemonSet or Deployment
  ##
  kind: Deployment

  ## Annotations to be added to the controller deployment
  ##
  deploymentAnnotations: {}

  # The update strategy to apply to the Deployment or DaemonSet
  ##
  updateStrategy: {}
  #  rollingUpdate:
  #    maxUnavailable: 1
  #  type: RollingUpdate

  # minReadySeconds to avoid killing pods before we are ready
  ##
  minReadySeconds: 0

  ## Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  ## Affinity and anti-affinity
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  ##
  affinity: {}
    # # An example of preferred pod anti-affinity, weight is in the range 1-100
    # podAntiAffinity:
    #   preferredDuringSchedulingIgnoredDuringExecution:
    #   - weight: 100
    #     podAffinityTerm:
    #       labelSelector:
    #         matchExpressions:
    #         - key: app
    #           operator: In
    #           values:
    #           - nginx-ingress
    #       topologyKey: kubernetes.io/hostname

    # # An example of required pod anti-affinity
    # podAntiAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #   - labelSelector:
    #       matchExpressions:
    #       - key: app
    #         operator: In
    #         values:
    #         - nginx-ingress
    #     topologyKey: "kubernetes.io/hostname"

  ## terminationGracePeriodSeconds
  ##
  terminationGracePeriodSeconds: 60

  ## Node labels for controller pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Liveness and readiness probe values
  ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
  ##
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254
  readinessProbe:
    failureThreshold: 3
    initialDelaySeconds: 10
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
    port: 10254

  ## Annotations to be added to controller pods
  ##
  podAnnotations: {}

  # Add Config Checksum to pod Annotations
  # This will trigger Rolling Updates on configuration changes
  podAnnotationConfigChecksum: false

  replicaCount: 1

  minAvailable: 1

  resources: {}
  #  limits:
  #    cpu: 100m
  #    memory: 64Mi
  #  requests:
  #    cpu: 100m
  #    memory: 64Mi

  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 11
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50

  ## Override NGINX template
  customTemplate:
    configMapName: ""
    configMapKey: ""

  service:
    enabled: true

    annotations: {}
    labels: {}
    ## Deprecated, instead simply do not provide a clusterIP value
    omitClusterIP: false
    # clusterIP: ""

    ## List of IP addresses at which the controller services are available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    loadBalancerIP: ""
    loadBalancerSourceRanges: []

    enableHttp: true
    enableHttps: true

    ## Set external traffic policy to: "Local" to preserve source IP on
    ## providers supporting it
    ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
    externalTrafficPolicy: ""

    # Must be either "None" or "ClientIP" if set. Kubernetes will default to "None".
    # Ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    sessionAffinity: ""

    healthCheckNodePort: 0

    ports:
      http: 80
      https: 443

    targetPorts:
      http: http
      https: https

    type: LoadBalancer

    # type: NodePort
    # nodePorts:
    #   http: 32080
    #   https: 32443
    #   tcp:
    #     8080: 32808
    nodePorts:
      http: ""
      https: ""
      tcp: {}
      udp: {}

    ## Enables an additional internal load balancer (besides the external one).
    ## Annotations are mandatory for the load balancer to come up. Varies with the cloud service.
    internal:
      enabled: false
      annotations: {}

  extraContainers: []
  ## Additional containers to be added to the controller pod.
  ## See https://github.com/lemonldap-ng-controller/lemonldap-ng-controller as example.
  #  - name: my-sidecar
  #    image: nginx:latest
  #  - name: lemonldap-ng-controller
  #    image: lemonldapng/lemonldap-ng-controller:0.2.0
  #    args:
  #      - /lemonldap-ng-controller
  #      - --alsologtostderr
  #      - --configmap=$(POD_NAMESPACE)/lemonldap-ng-configuration
  #    env:
  #      - name: POD_NAME
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.name
  #      - name: POD_NAMESPACE
  #        valueFrom:
  #          fieldRef:
  #            fieldPath: metadata.namespace
  #    volumeMounts:
  #    - name: copy-portal-skins
  #      mountPath: /srv/var/lib/lemonldap-ng/portal/skins

  extraVolumeMounts: []
  ## Additional volumeMounts to the controller main container.
  #  - name: copy-portal-skins
  #   mountPath: /var/lib/lemonldap-ng/portal/skins

  extraVolumes: []
  ## Additional volumes to the controller pod.
  #  - name: copy-portal-skins
  #    emptyDir: {}

  extraInitContainers: []
  ## Containers, which are run before the app containers are started.
  # - name: init-myservice
  #   image: busybox
  #   command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

  admissionWebhooks:
    enabled: false
    failurePolicy: Fail
    port: 8443

    service:
      annotations: {}
      ## Deprecated, instead simply do not provide a clusterIP value
      omitClusterIP: false
      # clusterIP: ""
      externalIPs: []
      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 443
      type: ClusterIP

    patch:
      enabled: true
      image:
        repository: jettech/kube-webhook-certgen
        tag: v1.0.0
        pullPolicy: IfNotPresent
      ## Provide a priority class name to the webhook patching job
      ##
      priorityClassName: ""
      podAnnotations: {}
      nodeSelector: {}
      resources: {}

  metrics:
    port: 10254
    # if this port is changed, change healthz-port: in extraArgs: accordingly
    enabled: false

    service:
      annotations: {}
      # prometheus.io/scrape: "true"
      # prometheus.io/port: "10254"

      ## Deprecated, instead simply do not provide a clusterIP value
      omitClusterIP: false
      # clusterIP: ""

      ## List of IP addresses at which the stats-exporter service is available
      ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
      ##
      externalIPs: []

      loadBalancerIP: ""
      loadBalancerSourceRanges: []
      servicePort: 9913
      type: ClusterIP

    serviceMonitor:
      enabled: false
      additionalLabels: {}
      namespace: ""
      namespaceSelector: {}
      # Default: scrape .Release.Namespace only
      # To scrape all, use the following:
      # namespaceSelector:
      #   any: true
      scrapeInterval: 30s
      # honorLabels: true

    prometheusRule:
      enabled: false
      additionalLabels: {}
      namespace: ""
      rules: []
        # # These are just example rules; please adapt them to your needs
        # - alert: TooMany500s
        #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"5.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
        #   for: 1m
        #   labels:
        #     severity: critical
        #   annotations:
        #     description: Too many 5XXs
        #     summary: More than 5% of all requests returned a 5XX; this requires your attention
        # - alert: TooMany400s
        #   expr: 100 * ( sum( nginx_ingress_controller_requests{status=~"4.+"} ) / sum(nginx_ingress_controller_requests) ) > 5
        #   for: 1m
        #   labels:
        #     severity: critical
        #   annotations:
        #     description: Too many 4XXs
        #     summary: More than 5% of all requests returned a 4XX; this requires your attention

  lifecycle: {}

  priorityClassName: ""

## Rollback limit
##
revisionHistoryLimit: 10

## Default 404 backend
##
defaultBackend:

  ## If false, controller.defaultBackendService must be provided
  ##
  enabled: true

  name: default-backend
  image:
    repository: k8s.gcr.io/defaultbackend-amd64
    tag: "1.5"
    pullPolicy: IfNotPresent
    # nobody user -> uid 65534
    runAsUser: 65534

  # This will fix the issue of HPA not being able to read the metrics.
  # Note that if you enable it for existing deployments, it won't work as the labels are immutable.
  # We recommend setting this to true for new deployments.
  useComponentLabel: false

  # Override component label key
  # componentLabelKeyOverride:

  extraArgs: {}

  serviceAccount:
    create: true
    name:
  ## Additional environment variables to set for defaultBackend pods
  extraEnvs: []

  port: 8080

  ## Readiness and liveness probes for default backend
  ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
  ##
  livenessProbe:
    failureThreshold: 3
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    failureThreshold: 6
    initialDelaySeconds: 0
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  ## Node tolerations for server scheduling to nodes with taints
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  tolerations: []
  #  - key: "key"
  #    operator: "Equal|Exists"
  #    value: "value"
  #    effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

  affinity: {}

  ## Security Context policies for controller pods
  ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
  ## notes on enabling and using sysctls
  ##
  podSecurityContext: {}

  # labels to add to the deployment metadata
  deploymentLabels: {}

  # labels to add to the pod container metadata
  podLabels: {}
  #  key: value

  ## Node labels for default backend pod assignment
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  ##
  nodeSelector: {}

  ## Annotations to be added to default backend pods
  ##
  podAnnotations: {}

  replicaCount: 1

  minAvailable: 1

  resources: {}
  # limits:
  #   cpu: 10m
  #   memory: 20Mi
  # requests:
  #   cpu: 10m
  #   memory: 20Mi

  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 50
    targetMemoryUtilizationPercentage: 50

  service:
    annotations: {}
    ## Deprecated, instead simply do not provide a clusterIP value
    omitClusterIP: false
    # clusterIP: ""

    ## List of IP addresses at which the default backend service is available
    ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
    ##
    externalIPs: []

    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    servicePort: 80
    type: ClusterIP

  priorityClassName: ""

# If provided, the value will be used as the `release` label instead of .Release.Name
releaseLabelOverride: ""

## Enable RBAC as per https://github.com/kubernetes/ingress/tree/master/examples/rbac/nginx and https://github.com/kubernetes/ingress/issues/266
rbac:
  create: true
  scope: false

# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false

serviceAccount:
  create: true
  name:
  annotations: {}

## Optional array of imagePullSecrets containing private registry credentials
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# - name: secretName

# TCP service key:value pairs
# Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tcp
##
tcp: 
#  8080: "default/example-tcp-svc:9000"
  5515: "tele-dev/logstash-logstash:5515"

# UDP service key:value pairs
# Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp
##
udp: 
#  53: "kube-system/kube-dns:53"
  5515: "tele-dev/logstash-logstash:5515"
aledbf commented 4 years ago

@andelhie I cannot reproduce this issue

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
cat << EOF | helm template ingress ingress-nginx/ingress-nginx --namespace tele-dev --values - | kubectl --namespace tele-dev apply -f -
controller:
  service:
    type: LoadBalancer
    externalTrafficPolicy: Local
  scope:
    enabled: true
    namespace: tele-dev
tcp:
  5515: "tele-dev/logstash-logstash:5515"
udp:
  5515: "tele-dev/logstash-logstash:5515"
EOF
serviceaccount/ingress-ingress-nginx created
configmap/ingress-ingress-nginx-tcp created
configmap/ingress-ingress-nginx-udp created
configmap/ingress-ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx created
service/ingress-ingress-nginx-controller-admission created
deployment.apps/ingress-ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-ingress-nginx-admission configured
serviceaccount/ingress-ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx-admission created
job.batch/ingress-ingress-nginx-admission-create created
job.batch/ingress-ingress-nginx-admission-patch created
The Service "ingress-ingress-nginx-controller" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"http", Protocol:"TCP", AppProtocol:(*string)(nil), Port:80, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:"http"}, NodePort:0}, core.ServicePort{Name:"https", Protocol:"TCP", AppProtocol:(*string)(nil), Port:443, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:"https"}, NodePort:0}, core.ServicePort{Name:"5515-tcp", Protocol:"TCP", AppProtocol:(*string)(nil), Port:5515, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:"5515-tcp"}, NodePort:0}, core.ServicePort{Name:"5515-udp", Protocol:"UDP", AppProtocol:(*string)(nil), Port:5515, TargetPort:intstr.IntOrString{Type:1, IntVal:0, StrVal:"5515-udp"}, NodePort:0}}: cannot create an external load balancer with mix protocols

The error at the end

cannot create an external load balancer with mix protocols

is expected. There is no support for mixed protocols in cloud load balancers.
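A common workaround (a sketch, not something this thread prescribes; the Service name and pod labels below are illustrative and must match your release) is to keep the chart-managed LoadBalancer TCP-only and expose the UDP port through a second Service that selects the same controller pods:

cat << EOF | kubectl apply --namespace tele-dev -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-syslog-udp
spec:
  type: LoadBalancer
  selector:
    # adjust these to the labels on your controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress
  ports:
    - name: syslog-udp
      protocol: UDP
      port: 5515
      targetPort: 5515
EOF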

andelhie commented 4 years ago

So I just tried deploying it using the method you show above, and I am getting some Helm errors. I upgraded to Helm v3.2.4 to test, but I keep getting this error when I run the command you posted.

Error: failed to download "ingress-nginx/ingress-nginx" (hint: running `helm repo update` may help)
error: no objects passed to apply

So I switched it to the repo I do have, nginx-stable/ingress-nginx, and I get the same error.

Error: failed to download "nginx-stable/ingress-nginx" (hint: running `helm repo update` may help)
error: no objects passed to apply

I did try the helm repo update but that did not help.

I did notice that this error pops up when the --values flag is used and its input is not formatted correctly.
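Before blaming the values file, it is worth confirming the repo alias actually resolves (a hedged diagnostic sketch; the alias must match whatever was passed to helm install):

helm repo list                                 # the alias used in the install command must appear here
helm repo update
helm search repo ingress-nginx/ingress-nginx   # should list the chart if the repo was added correctly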

aledbf commented 4 years ago

@andelhie I cannot reproduce that error

docker run -ti --net=host --rm -v ~/.kube:/root/.kube -v ~/.helm:/root/.helm dtzar/helm-kubectl bash -c '
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

kubectl create ns tele-dev
cat << EOF | helm template ingress ingress-nginx/ingress-nginx --namespace tele-dev --values - | kubectl apply --namespace tele-dev -f -
controller:
  service:
    type: LoadBalancer
    externalTrafficPolicy: Local
  scope:
    enabled: true
    namespace: tele-dev
tcp:
  5515: "tele-dev/logstash-logstash:5515"
EOF

kubectl get svc --namespace tele-dev
'
"ingress-nginx" has been added to your repositories
namespace/tele-dev created
serviceaccount/ingress-ingress-nginx created
configmap/ingress-ingress-nginx-tcp created
configmap/ingress-ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx created
service/ingress-ingress-nginx-controller-admission created
service/ingress-ingress-nginx-controller created
deployment.apps/ingress-ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-ingress-nginx-admission configured
serviceaccount/ingress-ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-ingress-nginx-admission created
job.batch/ingress-ingress-nginx-admission-create created
job.batch/ingress-ingress-nginx-admission-patch created
NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                     AGE
ingress-ingress-nginx-controller             LoadBalancer   10.106.195.115   <pending>     80:31148/TCP,443:30391/TCP,5515:32359/TCP   0s
ingress-ingress-nginx-controller-admission   ClusterIP      10.106.119.211   <none>        443/TCP                                     0s
mavrick commented 4 years ago

Getting the same error:

# helm upgrade --reuse-values nginx-ingress stable/nginx-ingress
Error: UPGRADE FAILED: template: nginx-ingress/templates/default-backend-poddisruptionbudget.yaml:1:73: executing "nginx-ingress/templates/default-backend-poddisruptionbudget.yaml" at <.Values.defaultBackend.autoscaling.minReplicas>: nil pointer evaluating interface {}.minReplicas

Environment:

Cloud provider or hardware configuration: GKE
OS (e.g. from /etc/os-release): Ubuntu
Kernel (e.g. uname -a): 5.3.0-62-generic #56-Ubuntu SMP Tue Jun 23 11:20:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Install tools: Helm

# helm version
version.BuildInfo{Version:"v3.2.2", GitCommit:"a6ea66349ae3015618da4f547677a14b9ecc09b3", GitTreeState:"clean", GoVersion:"go1.13.12"}
# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.11-gke.5", GitCommit:"baccd25d44f1a0d06ad3190eb508784efbb990a5", GitTreeState:"clean", BuildDate:"2020-06-25T22:55:26Z", GoVersion:"go1.13.9b4", Compiler:"gc", Platform:"linux/amd64"}
# helm list
NAME            NAMESPACE       REVISION        UPDATED                                         STATUS          CHART                   APP VERSION
nginx-ingress   default         6               2020-06-16 14:05:24.463862044 +1000 AEST        deployed        nginx-ingress-1.39.1    0.32.0
andelhie commented 4 years ago

So I figured out the problem I was having: I had grabbed the values.yaml from the git repo. I went and grabbed the chart itself with helm pull nginx-stable/nginx-ingress --untar, and from its values file I could see the two were very different. After deploying with the updated values.yaml, my setup deployed just fine.
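For anyone hitting the same mismatch, the flow that worked is roughly this (a sketch based on the commands above; the edit step is whatever changes your environment needs):

helm pull nginx-stable/nginx-ingress --untar   # unpacks the chart together with its matching values.yaml
# edit nginx-ingress/values.yaml, then install against the same chart version:
helm install nginx-syslog nginx-stable/nginx-ingress --values nginx-ingress/values.yaml -n tele-dev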

@mavrick your problem is that you need to pass some values for autoscaling; you are passing nil values where it needs a number.
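For example (a sketch, assuming the chart's defaultBackend.autoscaling keys; pick values that fit your cluster), supplying concrete numbers should get past the nil pointer in the template:

helm upgrade --reuse-values nginx-ingress stable/nginx-ingress \
  --set defaultBackend.autoscaling.enabled=false \
  --set defaultBackend.autoscaling.minReplicas=1 \
  --set defaultBackend.autoscaling.maxReplicas=2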

spoved-aws commented 3 years ago

I am using only the below command to install the ingress controller in a new namespace:

$ helm install ingress-nginx/ingress-nginx --generate-name \
>     --namespace ingress-tls-test1 \
>     --set controller.replicaCount=2 \
>     --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
>     --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux

Error: failed to download "ingress-nginx/ingress-nginx" (hint: running helm repo update may help)

I don't have a values.yaml file. Does anyone know how I can resolve this issue?
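A likely first step (an assumption, since the local repo list is not shown) is that the ingress-nginx alias was never added on this machine; it is the same setup step shown earlier in this thread:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm search repo ingress-nginx/ingress-nginx   # should now list the chart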