dibyom opened this issue 1 year ago
Right now, the way this could be done on the EventListener would be to add custom args for overriding the HTTP client defaults. Alternatively, we could build an HTTP client for each ClusterInterceptor. That would also simplify the default EventListener HTTP client construction: today we have to assemble the full TLS config for all the interceptors at startup (see https://github.com/tektoncd/triggers/blob/v0.21.0/pkg/adapter/adapter.go#L124) and keep a watch on it to continually update it.
I could see a ClusterInterceptor spec like:

kind: ClusterInterceptor
...
spec:
  timeouts:
    tlshandshake:
    responseheader:
    expectcontinuetimeout:
    readtimeout:
    keepalive:
Obviously, these could all be optional values so we can distinguish being unset vs. set to 0, but I'm thinking about the "default" behavior here. Would nil mean "default to the current EventListener value" vs. 0 meaning "no timeout"? Are we concerned about the penalty for rebuilding the interceptor HTTP client on every interceptor call?
> Would nil mean "default to the current EventListener value" vs. 0 meaning "no timeout"?

Yeah, I think that makes sense.
> Are we concerned about the penalty for rebuilding the interceptor HTTP client on every interceptor call?

I think so 😬 Do we need to build the interceptor client on each call? Can we do it periodically, or only when an interceptor changes? (That doesn't help with timeouts, but for certs at least we could provide tls.Config's GetCertificate, similar to how knative/pkg's webhook implementation does.)
> I think so 😬 Do we need to build the interceptor client on each call? Can we do it periodically, or only when an interceptor changes?

Yeah, that was my presumption as well. Let me take a look at how the interceptor watch works and see if we can keep these clients somewhere reasonable and just update them on watches.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now, please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now, please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close
@tekton-robot: Closing this issue.
/remove-lifecycle rotten
/lifecycle frozen
We will handle this in future releases.
/reopen
/lifecycle frozen
@khrm: Reopened this issue.
Discussed in https://github.com/tektoncd/triggers/discussions/1451