knative / eventing

Event-driven application platform for Kubernetes
https://knative.dev/docs/eventing
Apache License 2.0

[Experimental] DeliverySpec RetryAfter #5811

Open travis-minke-sap opened 2 years ago

travis-minke-sap commented 2 years ago

Description The Retry-After header is a standard part of the HTTP spec that can be returned with 429 and 503 responses to specify a duration, or timestamp, that subsequent retries should wait for before resending. It gives downstream event recipients a mechanism for applying back-pressure when requests arrive too frequently. The event retry mechanism in Knative eventing currently does not respect the Retry-After header. The intent of this new experimental-feature is to expose the ability for users to opt in to respecting the Retry-After header.
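For illustration, the header value is either a delay in seconds or an HTTP-date; a minimal Go sketch of parsing it could look like the following (parseRetryAfter is a hypothetical helper for this discussion, not part of the proposal's code):

import (
  "net/http"
  "strconv"
  "time"
)

// parseRetryAfter converts a Retry-After header value into a wait duration.
// Per the HTTP spec, the value is either an integer number of seconds or an
// HTTP-date. (Hypothetical helper for illustration only.)
func parseRetryAfter(value string) (time.Duration, bool) {
  // Delay-seconds form, e.g. "Retry-After: 120"
  if seconds, err := strconv.Atoi(value); err == nil && seconds >= 0 {
    return time.Duration(seconds) * time.Second, true
  }
  // HTTP-date form, e.g. "Retry-After: Fri, 31 Dec 1999 23:59:59 GMT"
  if date, err := http.ParseTime(value); err == nil {
    if wait := time.Until(date); wait > 0 {
      return wait, true
    }
    return 0, true // the date is already in the past
  }
  return 0, false // missing or unparseable header value
}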

Design Overview Following the pattern established in the experimental-features process, and closely mirroring the implementation in the similar Delivery Timeout experimental-feature, the plan is to enhance the DeliverySpec to include a new optional retryAfter component. Use of this new component will be gated by the delivery-retryafter experimental feature flag in the config-features ConfigMap and enforced via WebHook validation.

Example DeliverySpec with new retryAfter component...

delivery:
  backoffDelay: PT0.5S
  backoffPolicy: exponential
  retry: 3
  retryAfter:
    enabled: true
    maxDuration: "PT30S"

The new retryAfter component will only take effect if the retry value is specified and is at least 1. The optional maxDuration field provides an upper bound to prevent excessive backoff durations, which might be desirable in certain use cases.
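To sketch the intended interaction (effectiveBackoff is a hypothetical helper for illustration, not the proposed implementation, and assumes the maxDuration string has already been parsed into a *time.Duration):

// effectiveBackoff clamps a parsed Retry-After duration to the optional
// maxDuration from the DeliverySpec (nil means "no cap"). Hypothetical helper.
func effectiveBackoff(retryAfter time.Duration, maxDuration *time.Duration) time.Duration {
  if maxDuration != nil && retryAfter > *maxDuration {
    return *maxDuration
  }
  return retryAfter
}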

Exit Criteria DeliverySpec allows optional configuration of retry behavior for 429 and 503 Retry-After headers.

Experimental Feature Flag Name delivery-retryafter

Experimental Feature Stages The following is the proposed plan for moving through the stages of the experimental feature process...

Affected WG

Additional Context

travis-minke-sap commented 2 years ago

/assign

evankanderson commented 2 years ago

Do we need the enabled bool, or could we require a max duration (even if it's something silly like "1 week")?

Other questions:

  1. Any guidance to implementers about handling this in-process vs in the storage layer?

  2. Should a Retry-After apply to all deliveries to that host, or only that specific delivery?

  3. Should there be an administrator-level ConfigMap parameter to specify a max retry-after duration?

evankanderson commented 2 years ago

(I'm 👍 on this feature, by the way)

travis-minke-sap commented 2 years ago

Thanks for the 👀 's on this Evan - much appreciated! I'll do my best to respond, forgive me if I miss the intent - I don't have as clear an understanding of the larger eventing ecosystem as you do (too narrowly focused on the KafkaChannel ; )

Also, I was holding off on creating the PR (I probably got ahead of myself by doing the work before the issue was reviewed) until the e2e tests were done... but I've gone ahead and created it so that folks can see the intended changes. I am still happy to refactor as the community deems necessary ; )

Do we need the enabled bool, or could we require a max duration (even if it's something silly like "1 week")?

I would vote to keep the separation, but am certainly open to that approach if the community prefers it. I only added maxDuration as an afterthought safety gauge, and wasn't sure what the reaction or need would be. It feels a bit wrong to combine the two and require a maxDuration in all cases, and more importantly for it to "infer" the enabled state. Just my $0.02 though - we can see if there are any other opinions ; )

  1. Any guidance to implementers about handling this in-process vs in the storage layer?

I'm not sure what you mean by "handling in the storage layer"? My intent (and what I've implemented thus far) is to handle this exclusively "in-process" on a per-event basis. Meaning, when a dispatcher sends an event and receives back a 429/Retry-After, it would apply that backoff before resending that event only.

  2. Should a Retry-After apply to all deliveries to that host, or only that specific delivery?

As I mentioned in 1. above, I see this as being used to inform the determination of the backoff duration for a single event request only. This is what I've implemented in the message_sender.go SendWithRetries() implementation. There is nothing about this new experimental-feature that would prevent a custom implementation from taking a broader view and throttling other parallel requests.
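To illustrate that per-event behavior, here is a simplified sketch (sendWithRetryAfter is a hypothetical stand-in, not the actual SendWithRetries() code; it re-uses the parseRetryAfter sketch from the description above and elides the exponential backoff progression, request body re-use, and response draining):

// sendWithRetryAfter retries a single event's delivery, honoring Retry-After
// on 429/503 responses for that one request only. Sketch for illustration.
func sendWithRetryAfter(client *http.Client, req *http.Request, retries int, backoff time.Duration) (*http.Response, error) {
  var resp *http.Response
  var err error
  for attempt := 0; ; attempt++ {
    resp, err = client.Do(req)
    if err == nil && resp.StatusCode/100 == 2 {
      return resp, nil // delivered successfully
    }
    if attempt == retries {
      return resp, err // retries exhausted
    }
    wait := backoff // the configured backoffDelay (progression elided)
    if resp != nil && (resp.StatusCode == http.StatusTooManyRequests || resp.StatusCode == http.StatusServiceUnavailable) {
      // Back off as the server requested, for this one event only.
      if d, ok := parseRetryAfter(resp.Header.Get("Retry-After")); ok && d > wait {
        wait = d
      }
    }
    time.Sleep(wait)
  }
}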

  3. Should there be an administrator-level ConfigMap parameter to specify a max retry-after duration?

I haven't seen any precedent for similar DeliverySpec changes (e.g. the Timeout field) and wouldn't know where such a thing would live. I would rather wait and see whether there is a request for global config, and add it later if necessary.

matzew commented 2 years ago

If I set retry: 3 (or any value larger than 0), I do indicate that I am interested in "retrying".

So, for me, I'd think that I am also interested in Retry-After recognition, for a good reason.

Therefore I'm not really sure we need to tweak it and add extra knobs, like enabling/disabling it.

However, I can see that if the server indicates 600 seconds, this might be too long for me...

matzew commented 2 years ago

(I'm :+1: on this feature too, by the way)

travis-minke-sap commented 2 years ago

If I set retry: 3 (or any value larger than 0), I do indicate that I am interested in "retrying".

So, for me, I'd think that I am also interested in Retry-After recognition, for a good reason.

Therefore I'm not really sure we need to tweak it and add extra knobs, like enabling/disabling it.

However, I can see that if the server indicates 600 seconds, this might be too long for me...

Good point @matzew - I think the question you're really asking is... "If/when this feature graduates to Stable/GA, would we want to allow users of the retry capability an option to respect/ignore Retry-After headers?" If we are comfortable answering "No, if you use retry then Retry-After headers WILL be respected", then the DeliverySpec configuration can be reduced to a single optional field as follows...

delivery:
  backoffDelay: PT0.5S
  backoffPolicy: exponential
  retry: 3
  retryAfterMax: PT120S

...and during the experimental-features Alpha/Beta stages the feature flag would simply control whether or not to respect the Retry-After headers, instead of gating the DeliverySpec.RetryAfter configuration during validation.

Personally, I'm fine with this approach as it seems to better adhere to the intentions of the HTTP Spec, but I thought this was undesirable from a "don't change the existing behavior" perspective. It's unclear how many users would actually try out the experimental-feature between Alpha/Stable, so it's possible the switchover might catch them out?

matzew commented 2 years ago

...then the DeliverySpec configuration can be reduced to a single optional field as follows...

delivery:
  backoffDelay: PT0.5S
  backoffPolicy: exponential
  retry: 3
  retryAfterMax: PT120S

More compact and reasonable to express it like this.

...and during the experimental-features Alpha/Beta stages the feature flag would simply control whether or not to respect the Retry-After headers, instead of gating the DeliverySpec.RetryAfter configuration during validation. Personally, I'm fine with this approach as it seems to better adhere to the intentions of the HTTP Spec,

ACK, that’s right. Especially if I set retry to “n” (larger than 0).

but I thought this was undesirable from a "don't change the existing behavior" perspective. It's unclear how many users would actually try out the experimental-feature between Alpha/Stable, so it's possible the switchover might catch them out?

I’d use it: if I can honor the retry-after header I’d like to do so.

IMO an oversight when designing retry


travis-minke-sap commented 2 years ago

I’d use it: if I can honor the retry-after header I’d like to do so. IMO an oversight when designing retry

I like it! I can re-tool to that implementation if folks agree (or no one disagrees after a week and a summary at next week's WG)?

lionelvillard commented 2 years ago

+1 for retryAfterMax. The default behavior should be to always respect retry-after.

Setting retryAfterMax: PT0S provides an effective way to disable this feature. We could provide a post-install job to set this field, as a way to make sure that existing applications won't break when upgrading.
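To make that concrete with the hypothetical effectiveBackoff helper sketched earlier in this issue (assuming retryAfterMax has been parsed into a time.Duration):

zero := time.Duration(0) // a parsed retryAfterMax of "PT0S"
wait := effectiveBackoff(10*time.Second, &zero)
// wait == 0: any server-requested Retry-After delay is clamped away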

pierDipi commented 2 years ago

+1 for retryAfterMax. The default behavior should be to always respect retry-after.

+1.

pierDipi commented 2 years ago

The default behavior should be to always respect retry-after.

Do you mean when this goes GA, right?

lionelvillard commented 2 years ago

Do you mean when this goes GA, right?

When this feature is enabled.

travis-minke-sap commented 2 years ago

Hey everyone - thanks for the feedback - great discussion and suggestions! I'm going to summarize what I heard as the preferred approach, along with a related complication and proposal for how to resolve. Sorry it got a bit long-winded - trying to be clear / precise...

New approach...

Complication...

The nice thing about having the Enabled field in the DeliverySpec is that it lets us enforce the experimental-feature flag in the eventing WebHook. This is a single touch-point that will cover all of Knative eventing and it is already watching the config-features ConfigMap.

If we eliminate the Enabled field from the DeliverySpec, then we will instead have to track the experimental-feature state in the lower-level RetryConfig struct which is passed to DispatchMessageWithRetries() in channel/message_dispatcher.go, and possibly in RetryConfigFromDeliverySpec() in kncloudevents/retries.go. This would require ALL sandbox implementations to start watching the config-features ConfigMap and tracking the state in their context so that it can be passed to those functions for enforcement. I'm assuming this is a deal-breaker and not how experimental-features were intended to be used for such "common" behavior changes?
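For concreteness, here is a sketch of what that would look like (the new field name is an assumption for illustration, not the final design):

// Sketch only - the real RetryConfig lives in kncloudevents/retries.go and is
// populated by RetryConfigFromDeliverySpec().
type RetryConfig struct {
  RetryMax int // existing: maximum number of retry attempts
  // ... other existing backoff fields elided ...

  // RetryAfterMaxDuration (hypothetical field) would carry the Retry-After
  // behavior down into DispatchMessageWithRetries(); without WebHook gating,
  // every sandbox implementation would have to watch config-features itself
  // to decide whether to populate it.
  RetryAfterMaxDuration *time.Duration
}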

Proposed Solution...

We could REQUIRE the use of the retryAfterMax field in the DeliverySpec ONLY while this is an experimental-feature. Meaning, in order to use it as an experimental feature you would have to enable the feature in the ConfigMap AND set the retryAfterMax to some value greater than 0. Then we could still enforce the experimental-feature flag in the eventing WebHook.

If / when the experimental-feature graduates to "Stable / GA", we will remove the requirement for the retryAfterMax field to be populated in order for Retry-After enforcement to occur. It will stop being "dual-use" and will become optional, used only for specifying a max override duration or for opting out of Retry-After headers. The downside is that there will be a small code/behavior change at the switch to "Stable / GA", but it seems reasonable to me - thoughts? I think this is actually close to what @evankanderson was suggesting originally (apologies if I missed the point there ; )

The following table attempts to summarize the operation for each stage/use-case...

| EF Stage | Feature Flag | retryAfterMax Field Absent | retryAfterMax Field Present |
| --- | --- | --- | --- |
| Alpha / Beta | Disabled | Accepted by WebHook validation; no Retry-After header enforced | Rejected by WebHook validation |
| Alpha / Beta | Enabled | Accepted by WebHook validation; no Retry-After header enforced | Accepted by WebHook validation; Retry-After header enforced if max > 0 |
| Stable / GA | n/a | Retry-After header enforced without max override | Retry-After header enforced if max > 0, with max override |

Thanks again for the review - let me know your thoughts, otherwise I'll proceed with this approach (will give it a few days to become lazily accepted ; )

pierDipi commented 2 years ago

I'm ok with the proposed solution.

Or if we're worried about making changes in non-webhook code when this goes GA, we can make this field always required but defaulted by the webhook when the feature is enabled (or GA).

type DeliverySpec struct {
  // ...
  RetryAfterMax *string `json:"retryAfterMax,omitempty"`
}

Defaulting webhook in alpha and beta phase:

if featureEnabled && ds.RetryAfterMax == nil {
  // Take globalDefault from a CM
  ds.RetryAfterMax = globalDefault
}

Validation webhook in alpha and beta phase:

if featureDisabled && ds.RetryAfterMax != nil {
  // fail webhook
}

Defaulting webhook in GA phase:

if ds.RetryAfterMax == nil {
  // Take globalDefault from a CM
  ds.RetryAfterMax = globalDefault
}

Validation webhook in GA:

// Only the validation of the `ds.RetryAfterMax` format remains

In the dispatcher code, checking whether ds.RetryAfterMax == nil tells us if the feature is disabled (or enabled).

The problem is that we need to choose a global default that works well for most use cases (PT60S?).

travis-minke-sap commented 2 years ago

There's been a good discussion here and in a Slack thread, followed by a quiet period, so I thought I'd summarize the current approach for another review... (thanks to everyone who participated!)

Essentially the proposal above is the plan. The TLDR is...

Discussion Highlights