kubernetes-sigs / cluster-api-provider-openstack

Cluster API implementation for OpenStack
https://cluster-api-openstack.sigs.k8s.io/
Apache License 2.0

Cluster Erroneously Stuck in Failed State #2146

Open spjmurray opened 2 months ago

spjmurray commented 2 months ago

/kind bug

What steps did you take and what happened:

Just checking the state of things in ArgoCD and noted my cluster was in the red. Boo! On further inspection I can see:

  failureMessage: >-
    Failure detected from referenced resource
    infrastructure.cluster.x-k8s.io/v1beta1, Kind=OpenStackCluster with name
    "cluster-bc3d5fc1": failed to reconcile external network: failed to get
    external network: Get
    "https://compute.sausage.cloud:9696/v2.0/networks/5617d17e-fdc1-4aa1-a14b-b9b5136c65af":
    dial tcp: lookup compute.sausage.cloud on 10.96.1.35:53: server misbehaving
  failureReason: UpdateError
  infrastructureReady: true
  observedGeneration: 2
  phase: Failed

but there is no such failure message attached to the OSC resource, so I'm figuring CAPO did sort itself out eventually. I'll just edit the resource, says I, and set the phase (didn't Kubernetes deem such things in the API a total fail?) back to Provisioned and huzzah. But that didn't work and it magically re-appeared from somewhere; I have no idea how this is even possible, but I digress...

According to https://github.com/kubernetes-sigs/cluster-api/issues/10847 CAPO should only ever set these things if something is terminal, and a DNS failure quite frankly isn't, especially if you are a road warrior, living Mad Max style like some Antipodean Adonis where Wi-Fi is always up and down.

What did you expect to happen:

Treat this error as transient.

Anything else you would like to add:

Just reaching out for discussion before I delve into the code; it may already be known about, or even fixed. As always, you may have opinions on how this could be fixed. Logically:

// requires the "errors" and "net" imports
var derr *net.DNSError

if errors.As(err, &derr) {
  // DNS lookup failed: treat as transient and requeue instead of
  // setting failureReason/failureMessage on the cluster
}

should be the simple solution, depending on how well errors are propagated from Gophercloud, which is another story entirely.
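
For what it's worth, here is a minimal sketch of how such a check could be wrapped into a helper; the isTransientError name and the extra timeout check are purely illustrative, not anything CAPO has today, and it assumes Gophercloud propagates the underlying *net.DNSError in a way errors.As can unwrap:

package capoerrors

import (
  "errors"
  "net"
)

// isTransientError reports whether err looks like a temporary
// infrastructure hiccup (DNS lookup failure, network timeout) that the
// controller should retry instead of recording as a terminal failure.
func isTransientError(err error) bool {
  var dnsErr *net.DNSError
  if errors.As(err, &dnsErr) {
    return true
  }

  var netErr net.Error
  if errors.As(err, &netErr) && netErr.Timeout() {
    return true
  }

  return false
}

The reconciler could then skip setting failureReason/failureMessage whenever this returns true and simply return the error, so controller-runtime requeues and retries.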

Environment:

spjmurray commented 2 months ago

Ah, subresources... you can work around this with:

kubectl --kubeconfig kc patch clusters.cluster.x-k8s.io -n f766b888-7bc3-414b-9ca3-c5b4fc080c1b cluster-bc3d5fc1 --subresource status --type=json -p '[{"op":"replace","path":"/status/phase","value":"Provisioned"},{"op":"remove","path":"/status/failureReason"},{"op":"remove","path":"/status/failureMessage"}]'
cwrau commented 1 month ago

The same thing is happening to us: https://github.com/kubernetes-sigs/cluster-api/issues/10991#issuecomment-2264972433

Little transient problems with the OpenStack API resulting in permanently failed clusters are quite annoying; CAPO shouldn't set these fields if the errors aren't terminal.

And, to be honest, what kinds of failures are terminal? Maybe "couldn't (re-)allocate the specified load balancer IP", but I can't think of anything else.

spjmurray commented 1 month ago

I'm seeing similar problems with

    {"NeutronError": {"type": "IpAddressGenerationFailure", "message": "No more
    IP addresses available on network cc8c67a4-83a5-420d-93dd-34bba415f433.",
    "detail": ""}}

The cluster comes up eventually, so it's treated correctly as transient by CAPO, but it stays permanently broken on the CAPI side.

cwrau commented 3 days ago

As we're running our own operator on top of this, we're patching this ourselves: if the CAPI cluster has these fields but the CAPO one doesn't, we remove them from the status ourselves, roughly along the lines of the sketch below.
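
For reference, the cleanup is roughly this kind of thing; a minimal controller-runtime illustration rather than our actual operator code, and the clearStaleFailure name plus the "only clear when the infrastructure no longer reports a failure" condition are assumptions on my part:

package cleanup

import (
  "context"

  clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
  "sigs.k8s.io/controller-runtime/pkg/client"
)

// clearStaleFailure drops failureReason/failureMessage from a CAPI Cluster
// whose OpenStackCluster no longer reports a failure, and flips the phase
// back to Provisioned.
func clearStaleFailure(ctx context.Context, c client.Client, cluster *clusterv1.Cluster, infraFailed bool) error {
  if infraFailed || cluster.Status.FailureReason == nil {
    // Either the infrastructure really is failed, or there is nothing to clear.
    return nil
  }

  patch := client.MergeFrom(cluster.DeepCopy())
  cluster.Status.FailureReason = nil
  cluster.Status.FailureMessage = nil
  cluster.Status.Phase = string(clusterv1.ClusterPhaseProvisioned)

  // These fields live on the status subresource, so patch status, not the
  // main resource (same reason the kubectl workaround above needs
  // --subresource status).
  return c.Status().Patch(ctx, cluster, patch)
}

Editing the phase alone doesn't stick because CAPI recomputes it from failureReason/failureMessage on the next reconcile, which is presumably why the straight phase edit described earlier kept reverting; clearing the failure fields is what stops it flipping back to Failed.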

But it would be great if this could be addressed properly.