cert-manager / csi-lib

A library for building CSI drivers that request certificates from cert-manager
Apache License 2.0

CertificateRequest should be in a terminal state when it is denied by an approver #30

Closed 7ing closed 2 years ago

7ing commented 2 years ago

Related code block: https://github.com/cert-manager/csi-lib/blob/main/manager/manager.go#L305-L309

// Handle cases where the request has been explicitly denied
if apiutil.CertificateRequestIsDenied(updatedReq) {
  cond := apiutil.GetCertificateRequestCondition(updatedReq, cmapi.CertificateRequestConditionDenied)
  return false, fmt.Errorf("request %q has been denied by the approval plugin: %s", updatedReq.Name, cond.Message)
}

Observed behavior: when a CertificateRequest is denied by an approver, the lib deletes it and creates a new one. If the approver automatically approves/denies CSRs based on some policies, this creates an infinite loop: create -> denied -> delete -> create again -> ...

Sample logs:

....
I0823 16:00:25.368111       1 nodeserver.go:76] driver "msg"="Registered new volume with storage backend" "pod_name"="csi-64d84b77c-vfblf"
I0823 16:00:25.368202       1 nodeserver.go:86] driver "msg"="Volume registered for management" "pod_name"="csi-64d84b77c-vfblf"
I0823 16:00:25.368376       1 nodeserver.go:97] driver "msg"="Waiting for certificate to be issued..." "pod_name"="csi-64d84b77c-vfblf"
I0823 16:00:26.368756       1 manager.go:514] manager "msg"="Triggering new issuance" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
I0823 16:00:26.368811       1 manager.go:249] manager "msg"="Processing issuance" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
I0823 16:00:26.385125       1 manager.go:379] manager "msg"="Deleted CertificateRequest resource" "name"="c7722b4d-cab5-4709-99ed-1576fd4f705e" "namespace"="poc" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
I0823 16:00:26.503681       1 manager.go:287] manager "msg"="Created new CertificateRequest resource" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
E0823 16:00:27.512440       1 manager.go:516] manager "msg"="Failed to issue certificate, retrying after applying exponential backoff" "error"="waiting for request: request \"560d3fde-78c0-4d64-9333-3663a672e7a2\" has been denied by the approval plugin: issuer is unauthorized" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
I0823 16:00:29.711691       1 manager.go:514] manager "msg"="Triggering new issuance" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
I0823 16:00:29.720993       1 manager.go:249] manager "msg"="Processing issuance" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
I0823 16:00:29.731453       1 manager.go:379] manager "msg"="Deleted CertificateRequest resource" "name"="560d3fde-78c0-4d64-9333-3663a672e7a2" "namespace"="poc" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
I0823 16:00:29.807662       1 manager.go:287] manager "msg"="Created new CertificateRequest resource" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
E0823 16:00:30.808675       1 manager.go:516] manager "msg"="Failed to issue certificate, retrying after applying exponential backoff" "error"="waiting for request: request \"fba62e01-28f9-4644-9f5f-694eaa174084\" has been denied by the approval plugin: issuer is unauthorized" "volume_id"="csi-1d842b8307fb94812b128fc0dad4284e119e757ca39bbc164a6a693e8cc2cb80"
....

Expected behavior: when a CertificateRequest is denied, it should be in a terminal state and no new object should be created, or at least there should be a flag to stop this infinite loop.

A possible solution is to return true at this line: https://github.com/cert-manager/csi-lib/blob/main/manager/manager.go#L308
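For illustration, a minimal sketch of what that could look like (assuming the surrounding condition function returns (bool, error), as in the quoted block; this is not the actual manager.go change):

// Sketch only: returning done=true signals that waiting is over because the
// request reached a terminal (denied) state, rather than an error the caller
// should respond to by deleting and recreating the CertificateRequest.
if apiutil.CertificateRequestIsDenied(updatedReq) {
  cond := apiutil.GetCertificateRequestCondition(updatedReq, cmapi.CertificateRequestConditionDenied)
  return true, fmt.Errorf("request %q has been denied by the approval plugin: %s", updatedReq.Name, cond.Message)
}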

@munnerz

JoshVanL commented 2 years ago

We likely want to do this for all terminal states (i.e. Failed, and perhaps unknown readiness conditions, though those should be impossible). We can write to the metadata file with a configurable nextIssuanceTime, since both a failed and a denied request may be transient. A default of 1h may be appropriate (?).
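Something like this, as a very rough sketch (the struct, field, and helper are hypothetical, not the real csi-lib metadata API; uses the standard library "time" package):

// Hypothetical sketch: after a terminal (Denied/Failed) outcome, record when
// the next issuance attempt should happen instead of retrying immediately.
type volumeMetadata struct {
  NextIssuanceTime time.Time // hypothetical field for this sketch
}

func recordRetryAfterTerminalState(meta *volumeMetadata, retryDelay time.Duration) {
  // retryDelay would be configurable; a default of 1h as suggested above.
  meta.NextIssuanceTime = time.Now().Add(retryDelay)
}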

wdyt?

7ing commented 2 years ago

Currently, the denied request is handled separately from failed requests. I agree some terminal states are transient, like a timeout or pending manual approval; those could be retried with exponential back-off logic. But for invalid CSRs, a retry probably won't help.

Also, from what I have observed, the retries for denied requests do not use exponential back-off logic, maybe because they are freshly recreated requests. Worth checking as well.

munnerz commented 2 years ago

The issue with failing permanently is that we can't bubble this up to the Kubelet (which is invoking the CSI plugin), meaning that if a failure does occur then the pod will never start and will be 'stuck'.

The rationale behind re-creating the CertificateRequest is that a future CSR might be approved (even if it is being denied now). For example, if a user has deployed an always-deny approver, sets up the CSI driver and hits this case, they would then go and fix up their approval flow so that these CSRs do get approved in future, and the retrying behaviour means that the pod will then start.

We certainly should reduce how often we retry though - ideally we'd exponentially back-off (but still continue to retry) and surface this back to the user by returning an error message in the Publish call.

To properly manage exponential backoff though, we will probably need to change how NodePublishVolume behaves on failure. We can probably re-use some of the codepaths we created for the "continue on not ready" feature to do this, but will need to also record the number of failed attempts in the metadata store too to properly compute the next "wait period".
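Very roughly, something like this (illustrative only, not existing csi-lib code), with the next wait period derived from a failed-attempt count persisted in the metadata store:

// Hypothetical sketch: compute the next wait period from the number of failed
// attempts recorded in the metadata store, so the back-off survives repeated
// NodePublishVolume calls instead of resetting each time.
func nextWaitPeriod(failedAttempts int, base, maxWait time.Duration) time.Duration {
  wait := base
  for i := 0; i < failedAttempts; i++ {
    wait *= 2
    if wait >= maxWait {
      return maxWait
    }
  }
  return wait
}

// e.g. nextWaitPeriod(3, 2*time.Second, 5*time.Minute) == 16s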

munnerz commented 2 years ago

We do already perform exponential back-off on retries, which you can see here: https://github.com/cert-manager/csi-lib/blob/c9135148ecc6a25820bae8ea25c1b6d631a90ca8/manager/manager.go#L498-L520

However, for the initial NodePublishVolume call we set a timeout of 30s for the initial issuance: https://github.com/cert-manager/csi-lib/blob/c9135148ecc6a25820bae8ea25c1b6d631a90ca8/driver/nodeserver.go#L51

This means that after 30s of waiting, the volume will be 'unmanaged' and this exponential back-off will be cancelled.

The kubelet will then retry the NodePublishVolume call after a few seconds, which won't start with the same exponent (meaning we'll see another flurry of requests).

This is definitely something we should dig into and improve upon... I'm going to look into what options we have to allow us to not lose this state and consistently apply a proper exponential back off.

/assign

munnerz commented 2 years ago

Thinking some more on our exponential backoff config + the 30s context deadline in NodePublishVolume, assuming an approval plugin that always denies (or an issuer that always fails), we will create requests as so:

T=0s
T=2s
T=6s
T=14s
T=30s
(context deadline exceeded)
(kubelet calls NodePublishVolume again after N seconds; the exact number is uncertain, but assume 5s in this example)
T=35s
T=37s
T=41s
T=49s
T=65s
(context deadline exceeded)
...

That means that every 65s we are creating approximately 10 CertificateRequest objects.

We could consider having a more aggressive backoff instead - perhaps not even exponential (say, one request per minute).

We can still specifically handle non-approver or issuer based errors (e.g. a transient network failure) and retry quicker in those cases, but for errors like you describe (approver denies or issuer fails), we should definitely create fewer CertificateRequest objects.
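As a hypothetical sketch of that split (the sentinel errors and intervals are illustrative, not csi-lib code):

// Hypothetical sketch: choose the retry interval based on why issuance failed.
var (
  errRequestDenied = errors.New("request denied by approver")
  errIssuerFailed  = errors.New("issuer failed the request")
)

func retryInterval(err error) time.Duration {
  switch {
  case errors.Is(err, errRequestDenied), errors.Is(err, errIssuerFailed):
    // Denial or issuer failure: retry slowly, e.g. once per minute.
    return time.Minute
  default:
    // Transient errors (e.g. a network failure): retry quickly.
    return 2 * time.Second
  }
}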

7ing commented 2 years ago

The rationale behind re-creating the CertificateRequest is that a future CSR might be approved (even if it is being denied now). For example, if a user has deployed an always-deny approver, sets up the CSI driver and hits this case, they would then go and fix up their approval flow so that these CSRs do get approved in future, and the retrying behaviour means that the pod will then start.

Agree with this reason. Retry with better exp-back-off logic would be good.

That means that every 65s we are creating approximately 10 CertificateRequest objects.

Confirmed. Our logs show a similar sequence (with 0.5 jitter) for "Triggering new issuance". Because each issuance call first deletes the old CertificateRequest object, by default only one CSR object exists at any time.

We can probably re-use some of the codepaths we created for the "continue on not ready" feature to do this, but will need to also record the number of failed attempts in the metadata store too to properly compute the next "wait period".

We could also use ctx.Value() (introduced in PR https://github.com/cert-manager/csi-lib/pull/32) to pass a longer wait time?
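For example (the key type and helpers below are hypothetical, not part of csi-lib or that PR):

// Hypothetical sketch: carry a per-call wait hint through the context.
type waitTimeKey struct{}

func withWaitTime(ctx context.Context, d time.Duration) context.Context {
  return context.WithValue(ctx, waitTimeKey{}, d)
}

func waitTimeFrom(ctx context.Context, fallback time.Duration) time.Duration {
  if d, ok := ctx.Value(waitTimeKey{}).(time.Duration); ok {
    return d
  }
  return fallback
}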

7ing commented 2 years ago

Verified with the fix: a denied CR is retried in the following pattern:

T=0s
T=30s
T=60s
T=120s
T=240s
T=300s (5 minutes, the cap)
T=300s (5 minutes, the cap)
...
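For reference, reading those T= values as successive wait intervals after the first attempt, this matches a doubling back-off starting at 30s and capped at 5 minutes, roughly expressible with the apimachinery wait package (values illustrative, not the actual fix):

// Illustrative only: successive intervals double from 30s and are capped at
// 5 minutes, similar to the pattern observed above.
// (uses k8s.io/apimachinery/pkg/util/wait)
var backoff = wait.Backoff{
  Duration: 30 * time.Second,
  Factor:   2,
  Steps:    10,
  Cap:      5 * time.Minute,
}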

But I found another issue, tracked separately in https://github.com/cert-manager/csi-lib/issues/41. Closing this one.