kubernetes / ingress-nginx

Ingress NGINX Controller for Kubernetes
https://kubernetes.github.io/ingress-nginx/
Apache License 2.0

multiple ingress-nginx controllers chroot failed bind internal syslog #8839

Closed NissesSenap closed 2 years ago

NissesSenap commented 2 years ago

What happened:

I'm running two controllers on my EKS cluster with the Calico CNI (this forces me to override a number of ports due to how the AWS CNI behaves with hostNetwork: true). When running in chroot mode I'm unable to start all my pods; my guess is that a port or a similar resource gets occupied when one public and one private controller land on the same node.

When running ingress-nginx chroot 1.3.0 I get the following error:

➜ k logs ingress-nginx-private-controller-6dfbd7d4b8-55k7v
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.3.0
  Build:         2b7b74854d90ad9b4b96a5011b9e8b67d20bfb8f
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

W0718 11:37:43.227986       7 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0718 11:37:43.228153       7 main.go:230] "Creating API client" host="https://172.20.0.1:443"
I0718 11:37:43.235220       7 main.go:274] "Running in Kubernetes cluster" major="1" minor="22+" git="v1.22.9-eks-a64ea69" state="clean" commit="540410f9a2e24b7a2a870ebfacb3212744b5f878" platform="linux/amd64"
I0718 11:37:43.416411       7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0718 11:37:43.432264       7 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
F0718 11:37:43.477058       7 logger.go:36] failed bind internal syslog: %!w(*net.OpError=&{listen udp <nil> 0xc0001e3d70 0xc0005b7720})

What you expected to happen:

The application should be able to start without any issues.

When I change back to the non-chroot image, everything works perfectly.

I think some socket or similar is trying to bind the same port when you run multiple instances of the controller on the same node with hostNetwork: true.
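A minimal Go sketch of what I suspect is happening. The address used here (127.0.0.1:9514) is hypothetical and just stands in for whatever localhost UDP socket the chroot image's internal syslog logger binds; the point is only that two processes sharing the host network namespace cannot both bind it:

```go
package main

import (
	"fmt"
	"net"
)

// bindTwice binds the same UDP address twice within one network
// namespace and returns the error from the second attempt. With
// hostNetwork: true, every controller pod on a node shares that
// namespace, so the second pod's bind fails with EADDRINUSE.
func bindTwice(addr string) error {
	first, err := net.ListenPacket("udp", addr)
	if err != nil {
		return fmt.Errorf("first bind failed: %w", err)
	}
	defer first.Close()

	second, err := net.ListenPacket("udp", addr)
	if err == nil {
		second.Close()
	}
	return err
}

func main() {
	// Placeholder for the logger's internal syslog socket.
	fmt.Println("second bind:", bindTwice("127.0.0.1:9514"))
}
```

This matches the `listen udp` inside the `*net.OpError` in the failure message: the second controller on the node loses the race for the socket and the process exits fatally.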

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

NGINX Ingress controller
  Release:       v1.3.0
  Build:         2b7b74854d90ad9b4b96a5011b9e8b67d20bfb8f
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

Kubernetes version (use kubectl version):

Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.9-eks-a64ea69", GitCommit:"540410f9a2e24b7a2a870ebfacb3212744b5f878", GitTreeState:"clean", BuildDate:"2022-05-12T19:15:31Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

Environment:

➜ helm get values ingress-nginx-private           
USER-SUPPLIED VALUES:
controller:
  addHeaders: null
  admissionWebhooks:
    port: 2443
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - ingress-nginx
          topologyKey: topology.kubernetes.io/zone
        weight: 100
  config:
    allow-snippet-annotations: false
    datadog-collector-host: $HOST_IP
    enable-opentracing: "true"
    server-tokens: "false"
  containerPort:
    http: 1080
    https: 1443
  dnsPolicy: ClusterFirstWithHostNet
  electionID: ingress-controller-leader-nginx-private
  extraArgs:
    default-server-port: 8282
    healthz-port: 10354
    http-port: 1080
    https-port: 1443
    profiler-port: 10345
    status-port: 10346
    stream-port: 10347
  extraEnvs:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
  hostNetwork: true
  image:
    chroot: true
  ingressClass: nginx-private
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx-private
    default: false
    name: nginx-private
  livenessProbe:
    httpGet:
      port: 10354
  metrics:
    enabled: true
    port: 10354
    service:
      labels:
        function: metrics
  podAnnotations:
    ad.datadoghq.com/controller.check_names: '["nginx", "nginx_ingress_controller"]'
    ad.datadoghq.com/controller.init_configs: '[{},{}]'
    ad.datadoghq.com/controller.instances: '[{"prometheus_url": "http://%%host%%:%%port_metrics%%/metrics"}]'
    ad.datadoghq.com/controller.logs: '[{"service": "controller", "source": "nginx-ingress-controller"}]'
  priorityClassName: platform-medium
  readinessProbe:
    httpGet:
      port: 10354
  replicaCount: 3
  resources:
    requests:
      cpu: 100m
      memory: 110Mi
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    externalTrafficPolicy: Local

Public ingress-nginx controller

➜ helm get values ingress-nginx-public
USER-SUPPLIED VALUES:
controller:
  addHeaders: null
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values:
              - ingress-nginx
          topologyKey: topology.kubernetes.io/zone
        weight: 100
  config:
    allow-snippet-annotations: false
    datadog-collector-host: $HOST_IP
    enable-opentracing: "true"
    server-tokens: "false"
  dnsPolicy: ClusterFirstWithHostNet
  electionID: ingress-controller-leader-nginx-public
  extraArgs:
    default-ssl-certificate: ingress-nginx/ingress-nginx
  extraEnvs:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
  hostNetwork: true
  image:
    chroot: true
  ingressClass: nginx-public
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx-public
    default: true
    name: nginx-public
  metrics:
    enabled: true
    service:
      labels:
        function: metrics
  podAnnotations:
    ad.datadoghq.com/controller.check_names: '["nginx", "nginx_ingress_controller"]'
    ad.datadoghq.com/controller.init_configs: '[{},{}]'
    ad.datadoghq.com/controller.instances: '[{"prometheus_url": "http://%%host%%:%%port_metrics%%/metrics"}]'
    ad.datadoghq.com/controller.logs: '[{"service": "controller", "source": "nginx-ingress-controller"}]'
  priorityClassName: platform-medium
  replicaCount: 3
  resources:
    requests:
      cpu: 100m
      memory: 110Mi
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    externalTrafficPolicy: Local

How to reproduce this issue:

My guess is that it should be enough to run two controllers, both with hostNetwork: true and

controller:
  image:
    chroot: true

and verify that some of the pods are scheduled on the same node.

NissesSenap commented 2 years ago

I also saved the log from when I ran the chroot controller on v1.2.1. It's more or less the same output, but with the full stack trace as well.

######## 1.2.1
➜ k logs ingress-nginx-public-controller-5ffd9c5cbf-bnvgc
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.2.1
  Build:         08848d69e0c83992c89da18e70ea708752f21d7a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

W0718 11:36:08.980738       7 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0718 11:36:08.980919       7 main.go:230] "Creating API client" host="https://172.20.0.1:443"
I0718 11:36:08.994150       7 main.go:274] "Running in Kubernetes cluster" major="1" minor="22+" git="v1.22.9-eks-a64ea69" state="clean" commit="540410f9a2e24b7a2a870ebfacb3212744b5f878" platform="linux/amd64"
I0718 11:36:09.221205       7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0718 11:36:09.262684       7 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0718 11:36:09.307437       7 nginx.go:256] "Starting NGINX Ingress controller"
F0718 11:36:09.307848       7 logger.go:36] failed bind internal syslog: %!w(*net.OpError=&{listen udp <nil> 0xc00012cff0 0xc0003c3580})
goroutine 75 [running]:
k8s.io/klog/v2.stacks(0x1)
        k8s.io/klog/v2@v2.60.1/klog.go:860 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2859100, 0x3, 0x0, 0xc0004440e0, 0x1, {0x2013624?, 0x1?}, 0xc0005f5c00?, 0x0)
        k8s.io/klog/v2@v2.60.1/klog.go:825 +0x686
k8s.io/klog/v2.(*loggingT).printfDepth(0x2859100, 0x5c65041c?, 0x0, {0x0, 0x0}, 0x405d80?, {0x19cc6fb, 0x1f}, {0xc000039fb0, 0x1, ...})
        k8s.io/klog/v2@v2.60.1/klog.go:630 +0x1f2
k8s.io/klog/v2.(*loggingT).printf(...)
        k8s.io/klog/v2@v2.60.1/klog.go:612
k8s.io/klog/v2.Fatalf(...)
        k8s.io/klog/v2@v2.60.1/klog.go:1516
main.logger({0x19b5424, 0xf})
        k8s.io/ingress-nginx/cmd/nginx/logger.go:36 +0x1d4
created by main.main
        k8s.io/ingress-nginx/cmd/nginx/main.go:158 +0xf75

goroutine 1 [chan receive]:
main.handleSigterm(0x19acee2?, 0xa, 0x1a8e180)
        k8s.io/ingress-nginx/cmd/nginx/main.go:175 +0x9c
main.main()
        k8s.io/ingress-nginx/cmd/nginx/main.go:165 +0x1097

goroutine 35 [sleep]:
time.Sleep(0x12a05f200)
        runtime/time.go:194 +0x12e
k8s.io/ingress-nginx/internal/ingress/metric.(*collector).Start.func1()
        k8s.io/ingress-nginx/internal/ingress/metric/main.go:149 +0x2c
created by k8s.io/ingress-nginx/internal/ingress/metric.(*collector).Start
        k8s.io/ingress-nginx/internal/ingress/metric/main.go:148 +0x31f

goroutine 32 [syscall]:
syscall.Syscall6(0xe8, 0x12, 0xc0006afc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
        syscall/asm_linux_amd64.s:43 +0x5
golang.org/x/sys/unix.EpollWait(0x0?, {0xc0006afc14?, 0x0?, 0x0?}, 0x0?)
        golang.org/x/sys@v0.0.0-20220412211240-33da011f77ad/unix/zsyscall_linux_amd64.go:56 +0x58
github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0003e2ea0)
        github.com/fsnotify/fsnotify@v1.5.4/inotify_poller.go:86 +0x7d
github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000100140)
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:206 +0x26e
created by github.com/fsnotify/fsnotify.NewWatcher
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:60 +0x1c5

goroutine 30 [IO wait]:
internal/poll.runtime_pollWait(0x7fbea1ce7f78, 0x72)
        runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0003cec80?, 0xc000500000?, 0x0)
        internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0003cec80, {0xc000500000, 0x4116, 0x4116})
        internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0003cec80, {0xc000500000?, 0xc0004291c0?, 0xc00050004b?})
        net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc000124018, {0xc000500000?, 0x0?, 0x1007fffe4?})
        net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc00048f7a0, {0xc000500000?, 0x0?, 0x8?})
        crypto/tls/conn.go:784 +0x3d
bytes.(*Buffer).ReadFrom(0xc0001ef3f8, {0x1c115e0, 0xc00048f7a0})
        bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001ef180, {0x1c13c60?, 0xc000124018}, 0x40d0?)
        crypto/tls/conn.go:806 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0001ef180, 0x0)
        crypto/tls/conn.go:613 +0x116
crypto/tls.(*Conn).readRecord(...)
        crypto/tls/conn.go:581
crypto/tls.(*Conn).Read(0xc0001ef180, {0xc000481000, 0x1000, 0x9e5aa0?})
        crypto/tls/conn.go:1284 +0x16f
bufio.(*Reader).Read(0xc00044bec0, {0xc0002104a0, 0x9, 0x9f3d82?})
        bufio/bufio.go:236 +0x1b4
io.ReadAtLeast({0x1c11440, 0xc00044bec0}, {0xc0002104a0, 0x9, 0x9}, 0x9)
        io/io.go:331 +0x9a
io.ReadFull(...)
        io/io.go:350
golang.org/x/net/http2.readFrameHeader({0xc0002104a0?, 0x9?, 0xc001326990?}, {0x1c11440?, 0xc00044bec0?})
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/frame.go:237 +0x6e
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000210460)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/frame.go:498 +0x95
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0004d5f98)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:2118 +0x130
golang.org/x/net/http2.(*ClientConn).readLoop(0xc00036e480)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:2014 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:725 +0xa65

goroutine 36 [chan receive]:
k8s.io/ingress-nginx/internal/ingress/metric/collectors.namedProcess.Start(...)
        k8s.io/ingress-nginx/internal/ingress/metric/collectors/process.go:185
created by k8s.io/ingress-nginx/internal/ingress/metric.(*collector).Start
        k8s.io/ingress-nginx/internal/ingress/metric/main.go:152 +0x38a

goroutine 37 [IO wait]:
internal/poll.runtime_pollWait(0x7fbea1ce7e88, 0x72)
        runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0001b5a80?, 0x0?, 0x0)
        internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Accept(0xc0001b5a80)
        internal/poll/fd_unix.go:614 +0x22c
net.(*netFD).accept(0xc0001b5a80)
        net/fd_unix.go:172 +0x35
net.(*UnixListener).accept(0x0?)
        net/unixsock_posix.go:166 +0x1c
net.(*UnixListener).Accept(0xc0001091a0)
        net/unixsock.go:260 +0x3d
k8s.io/ingress-nginx/internal/ingress/metric/collectors.(*SocketCollector).Start(0xc00030ba40)
        k8s.io/ingress-nginx/internal/ingress/metric/collectors/socket.go:354 +0x35
created by k8s.io/ingress-nginx/internal/ingress/metric.(*collector).Start
        k8s.io/ingress-nginx/internal/ingress/metric/main.go:153 +0x3d8

goroutine 38 [IO wait]:
internal/poll.runtime_pollWait(0x7fbea1ce7d98, 0x72)
        runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0003ce000?, 0x0?, 0x0)
        internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Accept(0xc0003ce000)
        internal/poll/fd_unix.go:614 +0x22c
net.(*netFD).accept(0xc0003ce000)
        net/fd_unix.go:172 +0x35
net.(*TCPListener).accept(0xc0004f2000)
        net/tcpsock_posix.go:139 +0x28
net.(*TCPListener).Accept(0xc0004f2000)
        net/tcpsock.go:288 +0x3d
net/http.(*Server).Serve(0xc0002100e0, {0x1c29f10, 0xc0004f2000})
        net/http/server.go:3039 +0x385
net/http.(*Server).ListenAndServe(0xc0002100e0)
        net/http/server.go:2968 +0x7d
main.registerProfiler()
        k8s.io/ingress-nginx/cmd/nginx/main.go:333 +0x24b
created by main.main
        k8s.io/ingress-nginx/cmd/nginx/main.go:146 +0xdd3

goroutine 39 [chan receive]:
k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc000128300)
        k8s.io/apimachinery@v0.23.6/pkg/watch/mux.go:247 +0x57
created by k8s.io/apimachinery/pkg/watch.NewLongQueueBroadcaster
        k8s.io/apimachinery@v0.23.6/pkg/watch/mux.go:89 +0x116

goroutine 40 [chan receive]:
k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1()
        k8s.io/client-go@v0.23.6/tools/record/event.go:304 +0x73
created by k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
        k8s.io/client-go@v0.23.6/tools/record/event.go:302 +0x8c

goroutine 41 [chan receive]:
k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1()
        k8s.io/client-go@v0.23.6/tools/record/event.go:304 +0x73
created by k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
        k8s.io/client-go@v0.23.6/tools/record/event.go:302 +0x8c

goroutine 42 [select]:
github.com/eapache/channels.(*RingChannel).ringBuffer(0xc0001812f0)
        github.com/eapache/channels@v1.1.0/ring_channel.go:87 +0x16a
created by github.com/eapache/channels.NewRingChannel
        github.com/eapache/channels@v1.1.0/ring_channel.go:32 +0x1d6

goroutine 43 [syscall]:
syscall.Syscall6(0xe8, 0x9, 0xc000569c14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
        syscall/asm_linux_amd64.s:43 +0x5
golang.org/x/sys/unix.EpollWait(0x0?, {0xc000569c14?, 0x0?, 0x0?}, 0x0?)
        golang.org/x/sys@v0.0.0-20220412211240-33da011f77ad/unix/zsyscall_linux_amd64.go:56 +0x58
github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000139d40)
        github.com/fsnotify/fsnotify@v1.5.4/inotify_poller.go:86 +0x7d
github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000100f50)
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:206 +0x26e
created by github.com/fsnotify/fsnotify.NewWatcher
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:60 +0x1c5

goroutine 44 [select]:
k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch.func1({0x7ffc14db9d63, 0x4})
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:73 +0xdf
created by k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71 +0x215

goroutine 45 [syscall]:
syscall.Syscall6(0xe8, 0xe, 0xc000589c14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
        syscall/asm_linux_amd64.s:43 +0x5
golang.org/x/sys/unix.EpollWait(0x0?, {0xc000589c14?, 0x0?, 0x0?}, 0x0?)
        golang.org/x/sys@v0.0.0-20220412211240-33da011f77ad/unix/zsyscall_linux_amd64.go:56 +0x58
github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000139f60)
        github.com/fsnotify/fsnotify@v1.5.4/inotify_poller.go:86 +0x7d
github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000100fa0)
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:206 +0x26e
created by github.com/fsnotify/fsnotify.NewWatcher
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:60 +0x1c5

goroutine 46 [select]:
k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch.func1({0x7ffc14db9d99, 0x3})
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:73 +0xdf
created by k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71 +0x215

goroutine 47 [chan receive]:
k8s.io/apimachinery/pkg/watch.(*Broadcaster).loop(0xc00031ac00)
        k8s.io/apimachinery@v0.23.6/pkg/watch/mux.go:247 +0x57
created by k8s.io/apimachinery/pkg/watch.NewLongQueueBroadcaster
        k8s.io/apimachinery@v0.23.6/pkg/watch/mux.go:89 +0x116

goroutine 48 [chan receive]:
k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1()
        k8s.io/client-go@v0.23.6/tools/record/event.go:304 +0x73
created by k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
        k8s.io/client-go@v0.23.6/tools/record/event.go:302 +0x8c

goroutine 49 [chan receive]:
k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher.func1()
        k8s.io/client-go@v0.23.6/tools/record/event.go:304 +0x73
created by k8s.io/client-go/tools/record.(*eventBroadcasterImpl).StartEventWatcher
        k8s.io/client-go@v0.23.6/tools/record/event.go:302 +0x8c

goroutine 51 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000082f60)
        k8s.io/client-go@v0.23.6/util/workqueue/delaying_queue.go:231 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue
        k8s.io/client-go@v0.23.6/util/workqueue/delaying_queue.go:68 +0x24f

goroutine 52 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000083140)
        k8s.io/client-go@v0.23.6/util/workqueue/delaying_queue.go:231 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue
        k8s.io/client-go@v0.23.6/util/workqueue/delaying_queue.go:68 +0x24f

goroutine 65 [select]:
k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch.func1({0x19ca7a3, 0xa})
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:73 +0xdf
created by k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71 +0x215

goroutine 66 [syscall]:
syscall.Syscall6(0xe8, 0x16, 0xc0006cfc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
        syscall/asm_linux_amd64.s:43 +0x5
golang.org/x/sys/unix.EpollWait(0x0?, {0xc0006cfc14?, 0x0?, 0x0?}, 0x0?)
        golang.org/x/sys@v0.0.0-20220412211240-33da011f77ad/unix/zsyscall_linux_amd64.go:56 +0x58
github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0003e3040)
        github.com/fsnotify/fsnotify@v1.5.4/inotify_poller.go:86 +0x7d
github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000100230)
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:206 +0x26e
created by github.com/fsnotify/fsnotify.NewWatcher
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:60 +0x1c5

goroutine 67 [select]:
k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch.func1({0xc0003e2f71, 0x9})
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:73 +0xdf
created by k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71 +0x215

goroutine 68 [syscall]:
syscall.Syscall6(0xe8, 0x1a, 0xc00068fc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
        syscall/asm_linux_amd64.s:43 +0x5
golang.org/x/sys/unix.EpollWait(0x0?, {0xc00068fc14?, 0x0?, 0x0?}, 0x0?)
        golang.org/x/sys@v0.0.0-20220412211240-33da011f77ad/unix/zsyscall_linux_amd64.go:56 +0x58
github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0003e3060)
        github.com/fsnotify/fsnotify@v1.5.4/inotify_poller.go:86 +0x7d
github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0001003c0)
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:206 +0x26e
created by github.com/fsnotify/fsnotify.NewWatcher
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:60 +0x1c5

goroutine 69 [runnable]:
k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch.func2()
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1
created by k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71 +0x215

goroutine 70 [runnable]:
github.com/fsnotify/fsnotify.NewWatcher.func1()
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:60
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1
created by github.com/fsnotify/fsnotify.NewWatcher
        github.com/fsnotify/fsnotify@v1.5.4/inotify.go:60 +0x1c5

goroutine 71 [runnable]:
k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch.func2()
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1
created by k8s.io/ingress-nginx/internal/watch.(*OSFileWatcher).watch
        k8s.io/ingress-nginx/internal/watch/file_watcher.go:71 +0x215

goroutine 76 [IO wait]:
internal/poll.runtime_pollWait(0x7fbea1ce7ca8, 0x72)
        runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc00061b100?, 0x0?, 0x0)
        internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Accept(0xc00061b100)
        internal/poll/fd_unix.go:614 +0x22c
net.(*netFD).accept(0xc00061b100)
        net/fd_unix.go:172 +0x35
net.(*TCPListener).accept(0xc00048efd8)
        net/tcpsock_posix.go:139 +0x28
net.(*TCPListener).Accept(0xc00048efd8)
        net/tcpsock.go:288 +0x3d
net/http.(*Server).Serve(0xc0002101c0, {0x1c29f10, 0xc00048efd8})
        net/http/server.go:3039 +0x385
net/http.(*Server).ListenAndServe(0xc0002101c0)
        net/http/server.go:2968 +0x7d
main.startHTTPServer({0x0?, 0xbf08c41dfe570ed1?}, 0x43c7f65dc39ed705?, 0xc00064ca00)
        k8s.io/ingress-nginx/cmd/nginx/main.go:345 +0x13b
created by main.main
        k8s.io/ingress-nginx/cmd/nginx/main.go:162 +0x1031

goroutine 77 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x1c2ac78, 0xc00064cb00}, 0xc000047410, 0xc72a0a?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:658 +0xe5
k8s.io/apimachinery/pkg/util/wait.poll({0x1c2ac78, 0xc00064cb00}, 0x98?, 0xc71885?, 0xc00063dd30?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:594 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1c2ac78, 0xc00064cb00}, 0x20?, 0xc00062b400?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:545 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x18?, 0x7fbec8b105b8?, 0x18?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:536 +0x7c
k8s.io/client-go/tools/cache.WaitForCacheSync(0x31?, {0xc000649560, 0x4, 0x4})
        k8s.io/client-go@v0.23.6/tools/cache/shared_informer.go:255 +0x97
k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run(0xc000445c70, 0xc0001025a0)
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:170 +0x525
k8s.io/ingress-nginx/internal/ingress/controller/store.(*k8sStore).Run(0x0?, 0x0?)
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:1133 +0x25
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).Start(0xc0001b83c0)
        k8s.io/ingress-nginx/internal/ingress/controller/nginx.go:258 +0x90
created by main.main
        k8s.io/ingress-nginx/cmd/nginx/main.go:163 +0x1074

goroutine 79 [syscall]:
os/signal.signal_recv()
        runtime/sigqueue.go:151 +0x2f
os/signal.loop()
        os/signal/signal_unix.go:23 +0x19
created by os/signal.Notify.func1.1
        os/signal/signal.go:151 +0x2a

goroutine 80 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0003920c8, 0x0)
        runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x16b0600?)
        sync/cond.go:56 +0x8c
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0003920a0, 0xc00063dd70)
        k8s.io/client-go@v0.23.6/tools/cache/delta_fifo.go:527 +0x22e
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000022120)
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:183 +0x36
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x40d625?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc71385?, {0x1c13440, 0xc00064fc80}, 0x1, 0xc0001025a0)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000022188?, 0x3b9aca00, 0x0, 0x0?, 0x7fbea1c2c380?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc000022120, 0xc0001025a0)
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:154 +0x2c5
k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0003937c0, 0xc0001b83c0?)
        k8s.io/client-go@v0.23.6/tools/cache/shared_informer.go:414 +0x47c
created by k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:160 +0xbb

goroutine 81 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000392168, 0x0)
        runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x16b0600?)
        sync/cond.go:56 +0x8c
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000392140, 0xc0001840e0)
        k8s.io/client-go@v0.23.6/tools/cache/delta_fifo.go:527 +0x22e
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0000226c0)
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:183 +0x36
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x40d625?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc71385?, {0x1c13440, 0xc0001f6030}, 0x1, 0xc0001025a0)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000022728?, 0x3b9aca00, 0x0, 0x0?, 0x7fbea1c2c380?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc0000226c0, 0xc0001025a0)
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:154 +0x2c5
k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc000393680, 0xc00064ca00?)
        k8s.io/client-go@v0.23.6/tools/cache/shared_informer.go:414 +0x47c
created by k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:161 +0x145

goroutine 82 [runnable]:
k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run.func3()
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:163
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1
created by k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:163 +0x1db

goroutine 83 [runnable]:
k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run.func4()
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:165
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1
created by k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:165 +0x265

goroutine 84 [runnable]:
k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run.func5()
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:166
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1
created by k8s.io/ingress-nginx/internal/ingress/controller/store.(*Informer).Run
        k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:166 +0x2ed

goroutine 85 [runnable]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:300
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:300 +0xc5

goroutine 86 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:708 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:691 +0xca

goroutine 87 [runnable]:
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71 +0x85

goroutine 88 [runnable]:
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71 +0x85

goroutine 89 [runnable]:
k8s.io/client-go/tools/cache.(*controller).Run.func1()
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:129
created by k8s.io/client-go/tools/cache.(*controller).Run
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:129 +0xbe

goroutine 90 [select]:
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1(0xc0002102a0, 0xc0000221b0, 0xc0001025a0, 0xc0006e3d40)
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:315 +0x385
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc0002102a0, 0xc0001025a0)
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:361 +0x245
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:221 +0x26
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000657fb0?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00042dd80?, {0x1c13420, 0xc000100730}, 0x1, 0xc0001025a0)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:156 +0xb6
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002102a0, 0xc0001025a0)
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:220 +0x1c6
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:56 +0x22
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71 +0x85

goroutine 91 [select]:
golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc00036e480, 0xc00031d600)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:1172 +0x451
golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0003ceb00, 0xc00031d600, {0xc0?})
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:499 +0x1b7
golang.org/x/net/http2.(*Transport).RoundTrip(...)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:460
golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00042e140?}, 0xc00031d600?)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:2978 +0x1b
net/http.(*Transport).roundTrip(0xc00042e140, 0xc00031d600)
        net/http/transport.go:539 +0x390
net/http.(*Transport).RoundTrip(0x1877d20?, 0xc00064ffb0?)
        net/http/roundtrip.go:17 +0x19
k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc0004488a0, 0xc00031d500)
        k8s.io/client-go@v0.23.6/transport/round_trippers.go:317 +0x3c5
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc000428500, 0xc00031d400)
        k8s.io/client-go@v0.23.6/transport/round_trippers.go:168 +0x322
net/http.send(0xc00031d400, {0x1c13a80, 0xc000428500}, {0x194daa0?, 0x9d9379fc00000001?, 0x0?})
        net/http/client.go:252 +0x5d8
net/http.(*Client).send(0xc0004488d0, 0xc00031d400, {0x7fbea1d5cb30?, 0xf8?, 0x0?})
        net/http/client.go:176 +0x9b
net/http.(*Client).do(0xc0004488d0, 0xc00031d400)
        net/http/client.go:725 +0x8f5
net/http.(*Client).Do(...)
        net/http/client.go:593
k8s.io/client-go/rest.(*Request).request(0xc00031d200, {0x1c2acb0, 0xc0001363f0}, 0x1?)
        k8s.io/client-go@v0.23.6/rest/request.go:980 +0x419
k8s.io/client-go/rest.(*Request).Do(0x7fbea1ce02a0?, {0x1c2acb0?, 0xc0001363f0?})
        k8s.io/client-go@v0.23.6/rest/request.go:1038 +0xc7
k8s.io/client-go/kubernetes/typed/core/v1.(*secrets).List(0xc000649660, {0x1c2acb0, 0xc0001363f0}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0xc0003e30c0, ...}, ...})
        k8s.io/client-go@v0.23.6/kubernetes/typed/core/v1/secret.go:95 +0x185
k8s.io/client-go/informers/core/v1.NewFilteredSecretInformer.func1({{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0x0, 0x0}, 0x0, 0x0, ...})
        k8s.io/client-go@v0.23.6/informers/core/v1/secret.go:65 +0x182
k8s.io/client-go/tools/cache.(*ListWatch).List(0xc000657d50?, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0x0, 0x0}, 0x0, ...})
        k8s.io/client-go@v0.23.6/tools/cache/listwatch.go:106 +0x56
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1.2({{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0x0, 0x0}, 0x0, 0x0, ...})
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:277 +0x62
k8s.io/client-go/tools/pager.SimplePageFunc.func1({0xc0003920a0?, 0x7fbea1ce05d0?}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0x0, 0x0}, ...})
        k8s.io/client-go@v0.23.6/tools/pager/pager.go:40 +0x57
k8s.io/client-go/tools/pager.(*ListPager).List(0xc0004d7fb0, {0x1c2acb0, 0xc0001363e8}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, 0x0}, {0x0, ...}, ...})
        k8s.io/client-go@v0.23.6/tools/pager/pager.go:92 +0x16b
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1()
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:302 +0x207
created by k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:268 +0x2fb

goroutine 92 [select]:
golang.org/x/net/http2.(*clientStream).writeRequest(0xc00036e780, 0xc00031d600)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:1340 +0x9c9
golang.org/x/net/http2.(*clientStream).doRequest(0xa00619a01?, 0xc000657fa0?)
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:1202 +0x1e
created by golang.org/x/net/http2.(*ClientConn).RoundTrip
        golang.org/x/net@v0.0.0-20220225172249-27dd8689420f/http2/transport.go:1131 +0x30a

goroutine 93 [runnable]:
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71 +0x85

goroutine 94 [runnable]:
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71 +0x85

goroutine 95 [runnable]:
k8s.io/client-go/tools/cache.(*controller).Run.func1()
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:129
created by k8s.io/client-go/tools/cache.(*controller).Run
        k8s.io/client-go@v0.23.6/tools/cache/controller.go:129 +0xbe

goroutine 96 [select]:
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1(0xc000210540, 0xc000022750, 0xc0001025a0, 0xc0006f6d40)
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:315 +0x385
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc000210540, 0xc0001025a0)
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:361 +0x245
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:221 +0x26
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00032d7b0?)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00042de00?, {0x1c13420, 0xc000100870}, 0x1, 0xc0001025a0)
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:156 +0xb6
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000210540, 0xc0001025a0)
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:220 +0x1c6
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:56 +0x22
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
        k8s.io/apimachinery@v0.23.6/pkg/util/wait/wait.go:71 +0x85

goroutine 97 [runnable]:
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1()
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:268
created by k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
        k8s.io/client-go@v0.23.6/tools/cache/reflector.go:268 +0x2fb
rikatz commented 2 years ago

Hey, just figured it out. Can you try using the flag to configure the right address?

https://github.com/kubernetes/ingress-nginx/blob/main/cmd/nginx/flags.go#L198

Because you are running with hostNetwork, it will try to bind this port on the host network as well.

Also be aware that as soon as you bind to hostNetwork, this port will be exposed on your host and someone could push rogue messages to the logger :)

NissesSenap commented 2 years ago

Ah, just as I thought, there is a flag to define the address. Thanks a lot for the quick response @rikatz!

I guess this possibility already exists when I don't run in chroot mode, or isn't it public then? I think I will have time to test this out tomorrow; if it works I will create a PR to make this available in the Helm chart.

Trust me, I don't want to run hostNetwork, but the AWS CNI doesn't give me much of an option...

longwuyuan commented 2 years ago

/triage accepted

rikatz commented 2 years ago

@NissesSenap this worked? :)

NissesSenap commented 2 years ago

Sorry for the delay @rikatz. I have verified that setting --internal-logger-address=127.0.0.1:11515 solved the issue.

I think I should have time sometime later next week to create a PR in the Helm chart, unless someone else is quicker than me.

FYI to others following the issue: the flags file has moved: https://github.com/kubernetes/ingress-nginx/blob/0cc43d5e5295a769fa6e9428004a70378b632e9b/pkg/flags/flags.go#L202
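The working setting above can be passed to the controller as a command-line argument. A hypothetical pod-spec fragment (container name and binary path are illustrative; only the flag itself comes from this thread; with hostNetwork, each controller on a node needs a distinct port):

```yaml
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      - --internal-logger-address=127.0.0.1:11515  # private controller; give the public one a different port
```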

NissesSenap commented 2 years ago

@rikatz, you wrote earlier: "Also be aware that as soon as you are binding to hostNetwork this port will be exposed in your host and someone can push rogue messages to the logger".

Would you say it's better not to run the chrooted image, so as not to expose the syslog address? Sadly, I can't remove the hostNetwork config.

NissesSenap commented 2 years ago

I was planning to start a contribution to the Helm chart, but remembered that extraArgs is already in place (I'm already using it).

I will create a documentation PR to make this value a bit easier to find without having to read the code.
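Since the chart's extraArgs is already in place, the fix can be expressed in values. A sketch, assuming the chart's controller.extraArgs map (key names inferred from the comment above, not verified against the chart):

```yaml
# values for the private controller; the public controller would use a different port
controller:
  hostNetwork: true
  extraArgs:
    internal-logger-address: "127.0.0.1:11515"
```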