Expected Behavior

UpdateLog calls from the watcher should not fail with spurious canceled-context errors when the contexts used for the UpdateLog call are canceled / cleaned up before the call finishes on the API server side.
Actual Behavior
Prior to #712 we saw as many as hundreds of UpdateLog calls per hour failing with context-canceled errors in our production and stress-test environments.
After picking up #712 the number dropped, but failures still occurred with some frequency (roughly single to low double digits per hour).
Steps to Reproduce the Problem
Run tests under any sort of load where Results stores logs for PipelineRuns or TaskRuns.
Additional Info
Kubernetes version:
various versions
Tekton Pipeline version:
various versions