Closed: mrwbarg closed this 1 year ago
:warning: Patch coverage has no change; project coverage changed by -0.85%. Comparison is base (44d5316) 76.54% compared to head (7aea780) 75.69%.
Even considering that we already have a few actions related to Scrapy Cloud, those are useful while monitoring a spider's executions.
The script you are proposing is not related to spider monitoring, but to job execution inside Scrapy Cloud, and nothing from Spidermon is used to run it. As an extension, we should avoid adding coupling with specific spider-running platforms.
I understand the problem, and it is important to monitor jobs that didn't even start! But this script could probably be added to some internal helper library, inside Scrapy Cloud, or in https://github.com/scrapinghub/python-scrapinghub, as that is the library used to interact with the platform. It doesn't seem like something we should add to the core of Spidermon.
There are cases where jobs fail so abruptly that Spidermon (or any other extension that runs when Scrapy finishes) never gets a chance to run.
In those situations we won't be alerted that something happened: Spidermon didn't run at the end, so it generates no alerts, and Scrapy Cloud doesn't warn about such jobs either.
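A check like the one being discussed could be sketched as a small helper that inspects job metadata and flags jobs that ended abnormally. The sketch below is a hypothetical illustration, not the script from this PR: the function name `abnormal_jobs`, the set of "normal" close reasons, and the idea of feeding it metadata dicts obtained via python-scrapinghub (e.g. from `client.get_project(...).jobs.iter(...)`) are all assumptions for the example.

```python
# Hypothetical sketch: flag Scrapy Cloud jobs whose close_reason indicates
# an abnormal termination, so crashes that bypass Spidermon's end-of-run
# hooks can still be surfaced by an external check.
#
# In practice the metadata dicts would come from python-scrapinghub; here
# the helper is platform-agnostic and just takes the dicts directly.

# Close reasons treated as "normal" -- an assumption for this example.
OK_CLOSE_REASONS = {"finished"}

def abnormal_jobs(job_metadatas):
    """Return the jobs that ended without a normal close_reason.

    A job with a missing close_reason is also flagged, since that is
    exactly the "died before reporting anything" case described above.
    """
    return [
        job
        for job in job_metadatas
        if job.get("close_reason") not in OK_CLOSE_REASONS
    ]
```

Running this periodically from a scheduler outside the spiders themselves would catch the jobs that never reached Spidermon at all.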