Closed by futtta 6 years ago
logfile (generated manually from ccss.com dashboard): ccss-dot-com-report.log
After fixes on #4, I could check for this more accurately.
So I can check this and #6: can I change the queue schedule to 10 minutes, as we agreed before by email, and also clean up the whole settings on your site? Debug is too noisy with the 5s recurrence.
sure! feel free to do so AND to deploy the new version on blog.futtta.be. we will need to write down the timestamp to allow us (and Jonas) to follow up on the number of requests.
current status from ccss.com after new version was installed:
```
~$ cat ccss-dot-com-report_13-12-2017.log | wc -l
113
~$ cat ccss-dot-com-report_13-12-2017.log | sort | uniq | wc -l
108
```
so fewer doubles (5), but more requests nonetheless (and despite not doing `is_404`). weird, no?
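Counting totals vs. uniques only says how many doubles there are; `uniq -d` also shows which entries are doubled. A minimal sketch on a fabricated sample log (one requested URL per line is an assumption about the report format; the real file is the one above):

```shell
# Fabricated sample log, one requested URL per line (format is an assumption).
printf '%s\n' /page-a /page-b /page-b /page-c /page-a > sample.log

wc -l < sample.log                 # total requests: 5
sort sample.log | uniq | wc -l     # unique requests: 3
sort sample.log | uniq -d          # entries occurring more than once
```

The last command prints each doubled entry once, which makes it easy to eyeball whether the same pages keep coming back.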
Yeah, yeah... that question still stands. I am dealing with proper logging first on #6 because:
> 'Logs maketh man.' (Deny Dias)
updated log-file, 107 requests.
6 doubles...
There are real cases where a request for a single page could be fired more than once. They are:

1. A failed `POST` request to the `generate` endpoint. In that case, there will be no rule for the next hit on that page and a new request SHOULD be made. That failure could be caused by many reasons that are out of our control, such as timeouts on our side, network problems, exceptions at ccss.com, etc.
2. A failed `GET` request to the `results` endpoint. Causes and consequences are the same as above.
3. A successful `GET` request to the `results` endpoint that returned `JOB_QUEUED` or `JOB_ONGOING`. In that case, new requests SHOULD be made until the API returns `JOB_DONE`.
So, the very few doubles we're seeing are the incarnation of at least one of the three conditions above (it could be more than one, or even all of them in the same job). The queue control is doing its thing properly and we can't avoid those requests, as the final result (good or bad) SHOULD be fetched from the service, no matter whether the request is doubled or not.
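Those three conditions reduce to one small decision per queue run. A rough sketch of that decision, with invented names (`next_action` and the catch-all failure branch are illustrative, not the plugin's actual identifiers; the `JOB_*` statuses are the API statuses discussed above):

```shell
# Sketch of the per-job decision the queue control effectively makes each run.
# JOB_* are the ccss.com API statuses; everything else here is invented.
next_action() {
  case "$1" in
    JOB_DONE)               echo stop ;;        # final result fetched, job ends
    JOB_QUEUED|JOB_ONGOING) echo poll_again ;;  # GET results again on the next run
    *)                      echo retry ;;       # failed POST/GET: re-request on the next hit
  esac
}

next_action JOB_ONGOING   # prints: poll_again
```

Only the `stop` branch ends the traffic for a job; both other branches legitimately produce another request, which is why a handful of doubles is expected.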
As such, I consider this issue invalid, but I'll leave it open for @futtta's judgement.
closing, but based on current figures we'll probably need other ways to limit request amounts. I'll open a "backlog" issue for that soon-ish ;-)
Indeed!
My guess? We'll need to revisit the whole thing, from the queue object to the queue control, so we can say goodbye to what came from the old|new plugins logic. A refactoring is not enough. We need to rethink the internals more deeply if we want to reduce that request count.
When you create that backlog issue, first create one for a new panel: a counters dashboard. This dashboard might have the following counters:

- `POST` requests made to the `generate` endpoint, i.e. new jobs created at ccss.com;
- `GET` requests made to the `results` endpoint, i.e. results fetched from ccss.com;
- total requests (`POST` + `GET`).

Only with that sort of stats can we know for sure what to address in the overall request count matter.
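Until such a panel exists, the same counters can be approximated straight from a log, assuming (and this is an assumption about the report format) that each line records the HTTP method:

```shell
# Fabricated log where each line starts with the HTTP method (an assumption;
# the real ccss-dot-com report format may differ).
printf '%s\n' 'POST /generate' 'GET /results/1' 'GET /results/1' 'GET /results/2' > sample.log

posts=$(grep -c '^POST ' sample.log)   # jobs created at ccss.com
gets=$(grep -c '^GET ' sample.log)     # results fetched from ccss.com
echo "POST=$posts GET=$gets TOTAL=$((posts + gets))"   # prints: POST=1 GET=3 TOTAL=4
```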
This issue (the dashboard) should then be referred to in the new one as a requirement.
-> 80 requests -> 70 unique requests = 10 doubles -> the doubles are requested shortly after the original
So why is this happening & how can we avoid this?