voyagermesh / voyager

🚀 Secure L7/L4 (HAProxy) Ingress Controller for Kubernetes
https://voyagermesh.com
Apache License 2.0

Constant "Back-off restarting failed container" for a non-existent bad ingress. #797

Closed: behdadkh closed this issue 6 years ago

behdadkh commented 6 years ago

Hello,

I need help redeploying Voyager in my bare-metal cluster.

I had a "bad" ingress, which I deleted; at the moment there are no Ingress resources anywhere in the cluster:

kubectl get ingress --all-namespaces 

No resources found.

However, on a fresh deployment of Voyager, the operator pod constantly complains about the bad ingress and keeps restarting:

I0103 10:35:10.407989       1 operator.go:55] Ensuring CRD registration
I0103 10:35:13.432024       1 validator.go:52] Checking ingress kubernetes-dashboard-ingress@kube-public
F0103 10:35:13.432073       1 run.go:136] One or more Ingress objects are invalid: kubernetes-dashboard-ingress@kube-public

Kubernetes version: 1.9.0
Voyager version: 5.0.0-rc.10

Output of kubectl describe pod:

Name:           voyager-operator-746fcf85c6-v78cv
Namespace:      kube-public
Node:           master/****
Start Time:     Wed, 03 Jan 2018 11:24:08
Labels:         app=voyager
                pod-template-hash=3029794172
Annotations:    scheduler.alpha.kubernetes.io/critical-pod=
Status:         Running
IP:             ****
Controlled By:  ReplicaSet/voyager-operator-746fcf85c6
Containers:
  voyager:
    Container ID:  docker://22de4c9b99494ce9c85ca554f4fb3e1b2f624fa3dc21a8cba04bc7ba0aa7eece
    Image:         appscode/voyager:5.0.0-rc.10
    Image ID:      docker-pullable://appscode/voyager@sha256:992e5e07bf7621401b3605d44bbb2a5ef836be5984daa5b8941446afcb5a18b1
    Ports:         56790/TCP, 56791/TCP
    Args:
      run
      --v=3
      --rbac=true
      --cloud-provider=
      --cloud-config=
      --ingress-class=
      --restrict-to-operator-namespace=true
      --analytics=false
      --log.level=debug
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 03 Jan 2018 11:35:10 +0100
      Finished:     Wed, 03 Jan 2018 11:35:13 +0100
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /etc/kubernetes from cloudconfig (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from voyager-operator-token-chp4x (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  cloudconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes
    HostPathType:  
  voyager-operator-token-chp4x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  voyager-operator-token-chp4x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From               Message
  ----     ------                 ----               ----               -------
  Normal   Scheduled              15m                default-scheduler  Successfully assigned voyager-operator-746fcf85c6-v78cv to master
  Normal   SuccessfulMountVolume  14m                kubelet, master    MountVolume.SetUp succeeded for volume "cloudconfig"
  Normal   SuccessfulMountVolume  14m                kubelet, master    MountVolume.SetUp succeeded for volume "voyager-operator-token-chp4x"
  Normal   Pulled                 14m (x3 over 14m)  kubelet, master    Container image "appscode/voyager:5.0.0-rc.10" already present on machine
  Normal   Created                14m (x3 over 14m)  kubelet, master    Created container
  Normal   Started                14m (x3 over 14m)  kubelet, master    Started container
  Warning  BackOff                14m (x3 over 14m)  kubelet, master    Back-off restarting failed container
diptadas commented 6 years ago

You also need to check for any existing Ingress CRD objects:

kubectl get ingress.voyager.appscode.com --all-namespaces

By the way, we are planning to keep the operator running even when one or more bad Ingress objects are found.
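To make the distinction concrete: Voyager registers its own `ingress.voyager.appscode.com` custom resource, which plain `kubectl get ingress` does not list, so a stale Voyager Ingress object can survive even when "No resources found" is reported for core Ingresses. A minimal cleanup sketch (the object name and namespace below are taken from the operator log in this issue; adjust to whatever the list command actually shows, and these commands of course require access to the affected cluster):

```shell
# List leftover Voyager Ingress CRD objects; these are separate from the
# core Kubernetes Ingress resources, so `kubectl get ingress` misses them.
kubectl get ingress.voyager.appscode.com --all-namespaces

# Delete the stale object named in the operator log
# (kubernetes-dashboard-ingress in namespace kube-public):
kubectl delete ingress.voyager.appscode.com kubernetes-dashboard-ingress \
  --namespace kube-public
```

Once the stale CRD object is gone, the operator pod should pass validation on its next restart and leave the CrashLoopBackOff state.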

tamalsaha commented 6 years ago

Fixed by #837