As brought up in a recent Slack conversation, an uptick in 500 errors on the GitLab side seems to be manifesting as an increased number of job failures from tap-gitlab. As proposed in that thread, this could be an opportunity to improve network handling and retries within tap-gitlab.
The SDK refactor (#34) might relate to this as well. While the SDK does have built-in retry capability with backoff, we'd have to make sure the correct errors get retried, based on the error codes GitLab is returning.
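A minimal sketch of the retry behavior described above — retry server-side failures (and rate limiting) with exponential backoff, fail fast on client errors. The function names and parameters here are illustrative, not the SDK's actual API:

```python
import time

# Status codes worth retrying: transient server-side failures (5xx, the
# errors GitLab has been returning) plus 429 rate limiting. Client errors
# like 401 or 404 won't succeed on retry and should fail immediately.
RETRIABLE_STATUS_CODES = {429, 500, 502, 503, 504}


def is_retriable(status_code: int) -> bool:
    """Return True if a request with this status code should be retried."""
    return status_code in RETRIABLE_STATUS_CODES


def request_with_backoff(send_request, max_retries=5, base_delay=1.0,
                         sleep=time.sleep):
    """Call send_request() until it returns a non-retriable status.

    send_request is any callable returning an object with a .status_code
    attribute (e.g. a requests.Response). Delays grow exponentially:
    base_delay, 2*base_delay, 4*base_delay, ...
    """
    for attempt in range(max_retries + 1):
        response = send_request()
        if not is_retriable(response.status_code):
            return response
        if attempt < max_retries:
            sleep(base_delay * (2 ** attempt))
    # Retries exhausted; return the last (still failing) response so the
    # caller can surface the error.
    return response
```

Injecting `sleep` keeps the backoff testable; in the real tap this would be whatever backoff mechanism the SDK exposes.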
In GitLab by @aaronsteers on Mar 12, 2021, 17:28