reefdog closed this issue 3 weeks ago
Several minutes later, it's still adding new scrapes:
Just thinking out loud: I have retry-on-fail enabled for jobs. I'm guessing this is why the failed job keeps getting re-queued.
I sort of reported two things in this one issue: "Facebook scraping failed" and "Facebook scraping seems to be duplicating". The former is probably an actual issue to suss out; the latter is probably me not realizing that retry was enabled, and so the "duplicate" scrapes were the retries.
I do think this means we should find some way to indicate this in the Active Scrapes table. For instance, rather than listing failed tries and currently-running retries as sibling rows, maybe we recognize retries and nest/group them together? So you'd get a single row per scrape, showing just the most recent attempt, with a link like "See previous attempts" that spawns a modal (or expands the table)?
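Something like this grouping could be sketched in plain Ruby. This is just a rough illustration, assuming each attempt row carries the job's `jid` (Sidekiq retries reuse the original JID, as I understand it) plus a status and a timestamp; the `Attempt` struct and field names here are hypothetical stand-ins for whatever actually backs the Active Scrapes table:

```ruby
# Rough sketch: collapse retry attempts into one row per logical scrape.
# Attempt is a made-up stand-in for the record behind the Active Scrapes
# table; assumes retries of a job share the original jid.
Attempt = Struct.new(:jid, :status, :started_at)

def grouped_rows(attempts)
  attempts.group_by(&:jid).map do |jid, tries|
    ordered = tries.sort_by(&:started_at)
    {
      jid: jid,
      current: ordered.last,              # most recent attempt, shown in the row
      previous_attempts: ordered[0..-2]   # hidden behind "See previous attempts"
    }
  end
end

attempts = [
  Attempt.new("abc123", "failed",  1),
  Attempt.new("abc123", "failed",  2),
  Attempt.new("abc123", "running", 3),
  Attempt.new("def456", "done",    1)
]

rows = grouped_rows(attempts)
# Two rows: one per logical scrape, with the retries of "abc123" nested
# under its single row instead of appearing as three siblings.
```

The UI question (modal vs. expanding the table) is separate; the point is just that grouping by job identity is enough to tell retries apart from genuinely new scrapes.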
Regardless, the retry conversation should probably be split out into a second issue; my apologies for mixing it into this bug report.
I ran a Facebook scrape locally, and kept getting this error from Sidekiq:
Each time it threw, the job was duplicated, and thus appeared again in the Active Scrapes table (on the Jobs Status page).
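For what it's worth, that pattern matches retry-on-fail behavior: when the job raises, it gets re-enqueued, and each re-enqueue shows up as a fresh entry if the table lists attempts individually. Here's a toy simulation in plain Ruby (no Sidekiq involved; the method name and retry count are made up) of what I think was happening:

```ruby
# Toy model of retry-on-fail: each raised error re-enqueues the job,
# so a table listing every attempt shows "duplicates" until one succeeds.
def run_with_retries(max_retries:, &job)
  attempts = []
  (max_retries + 1).times do |n|
    attempts << "attempt #{n + 1}"
    begin
      job.call(n)
      break        # success: stop retrying
    rescue StandardError
      # failure: fall through and "re-enqueue" on the next iteration
    end
  end
  attempts
end

# A job that fails twice, then succeeds on the third attempt.
attempts = run_with_retries(max_retries: 5) do |n|
  raise "scrape failed" if n < 2
end
# attempts => ["attempt 1", "attempt 2", "attempt 3"]
```

Three visible attempts for one logical scrape, which is exactly the "duplication" I was seeing in the Active Scrapes table.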