sanitizers / patchback-github-app

https://github.com/apps/patchback
GNU General Public License v3.0

ClientPayloadError: Response payload is not completed #41

Closed. sentry-io[bot] closed this issue 2 months ago.

sentry-io[bot] commented 11 months ago

Sentry issue: PATCHBACK-1N

Ref: https://ansible-open-source.sentry.io/share/issue/649e905fc9514df48b4d1c16d66ec269/

CancelledError: 
  File "asyncio/queues.py", line 166, in get
    await getter

ClientPayloadError: Response payload is not completed
(26 additional frame(s) were not displayed)
...
  File "patchback/event_handlers.py", line 321, in on_label_added_to_merged_pr
    await process_pr_backport_labels(
  File "patchback/event_handlers.py", line 431, in process_pr_backport_labels
    await pr_reporter.update_progress(
  File "patchback/github_reporter.py", line 82, in update_progress
    checks_output = await self._make_comment_from_details(
  File "patchback/github_reporter.py", line 125, in _make_comment_from_details
    await self._comments_api.update_comment(
  File "patchback/comments_api.py", line 17, in update_comment
    await self._api.patch(self._comment_uri, data={'body': comment_text})
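
The failing frame is the PATCH to the comment URI. A minimal sketch of how that call could be retried on transient transport errors such as ClientPayloadError, assuming self._api is a gidgethub-style client exposing an awaitable patch(url, data=...) method; the class shape and retry parameters are illustrative, not patchback's actual code:

import asyncio

import aiohttp


class CommentsAPI:
    """Illustrative wrapper around a gidgethub-style GitHub client."""

    def __init__(self, api, comment_uri):
        self._api = api  # e.g. a gidgethub.aiohttp.GitHubAPI instance
        self._comment_uri = comment_uri

    async def update_comment(self, comment_text, *, attempts=3, delay=1.0):
        # Retry the PATCH when the response body arrives truncated,
        # which aiohttp surfaces as ClientPayloadError.
        for attempt in range(1, attempts + 1):
            try:
                return await self._api.patch(
                    self._comment_uri,
                    data={'body': comment_text},
                )
            except aiohttp.ClientPayloadError:
                if attempt == attempts:
                    raise
                await asyncio.sleep(delay * attempt)  # simple linear backoff

Retrying at this layer would only paper over the truncated responses rather than explain them, which is why the thread below focuses on what changed in aiohttp instead.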


sentry-io[bot] commented 11 months ago

Sentry issue: PATCHBACK-1M

Ref: https://ansible-open-source.sentry.io/share/issue/02e0b83c1ea642abbd2953511d7c348e/

sentry-io[bot] commented 11 months ago

Sentry issue: PATCHBACK-1P

Ref: https://ansible-open-source.sentry.io/share/issue/e87d38cc9ea448b99182dd48bb92b68f/

webknjaz commented 9 months ago

I bumped aiohttp to v3.9.4rc0 and deployed it so that it would give us a more detailed traceback and help pinpoint the cause of the problem.

felixfontein commented 7 months ago

I haven't seen any errors for quite some time now. Maybe that fixed the problem? Or did something else happen that fixed it?

webknjaz commented 7 months ago

This one is weird. First, it was happening rarely (if at all) on Heroku. Then the app was migrated to OpenShift 3, and it was still rare, I think. Then I migrated it to an OpenShift 4 cluster and it started happening daily, sometimes even twice a day, with no visible changes in deps/pins/runtime. I think I updated the runtime and the deps at some point with no improvement. So I figured that the runtime might be slow and the problem might somehow be related to timeouts.

Then I patched aiohttp, made a release candidate, and bumped to that version, and the problem was gone. There haven't been any Sentry alerts with this specific traceback since.

What puzzles me, though, is that the aiohttp bump seems to be the only difference between the faulty and working deployments, yet the problem was never well understood. The new version didn't ship an intentional fix, since nobody fully grasped the nature of the bug upstream; I only made the exception cause chaining explicit in that version. I don't understand why it's not happening anymore. But the fact is that I updated all the bot deployments and haven't seen such tracebacks in a while (there are some related to GitHub being drunk once in a while, but that's entirely different).

Hence my hesitation to declare this fixed.
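
For reference, "making the exception cause chaining explicit" means raising the payload error with raise ... from the underlying transport exception, so the rendered traceback records the real cause. A minimal sketch of the technique, not aiohttp's actual code:

class ClientPayloadError(Exception):
    """Stand-in for aiohttp.ClientPayloadError in this illustration."""


def read_payload(read_chunk):
    try:
        return read_chunk()
    except OSError as exc:
        # Explicit chaining: the transport failure becomes __cause__, so the
        # traceback shows "The above exception was the direct cause of the
        # following exception" with the original error attached.
        raise ClientPayloadError('Response payload is not completed') from exc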

webknjaz commented 2 months ago

This seems to have been fixed upstream, in aiohttp v3.10.5.
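
For deployments that were hitting this traceback, constraining the dependency to the fixed release should be enough (an illustrative requirements-style pin; the exact file layout is an assumption):

# requirements.txt (illustrative)
aiohttp >= 3.10.5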