joshuabaker opened this issue 6 months ago
hmmmm. This seems like an environmental thing, perhaps? In theory, CLI-run PHP tasks should generally not have a max execution time involved?
Definitely environmental. It’s on Fortrabbit so it’s not configurable, unfortunately. It seems to manage circa 500–600 entries and then fails, in my experience.
hmmmm. I'm not sure how I feel about this; it's an externally imposed limitation that normally is configurable. Even if we did break it into paginated queue jobs, we'd be guessing at what the page size should be to ensure that it won't time out (which will depend on a number of factors, such as image size, number of variants, etc.)
Have you tried contacting them to see if they can increase or remove this limitation?
Fair feedback. I was actually thinking of keeping the existing page size (i.e. ASSET_QUERY_PAGE_SIZE), but just across jobs.
I’ll raise it with Fortrabbit and see what they say, but I can’t imagine them removing that constraint.
I wonder if this could help:
Big job batching - https://github.com/craftcms/cms/pull/12638
Looks like it would then only be for Craft 4.4+. What version of Craft are you using?
This site is currently on 4.8.7.
fortrabbit co-founder here. Sorry for sounding defensive: our 'externally imposed limitations' are designed with good intentions, at least. We have a couple of them, and in my experience they help prevent 'incorrect setups' in most cases.
Is this limitation about deployment? 20 minutes for a deployment is a long time; too long, we think. We also have a 20-minute limit on SSH connections for similar reasons. Usually a misconfiguration is the cause, which can be resolved together with the client in support.
We will be in contact with Josh through our client support to see what we can do about this case, and of course we'll share here if new ideas for ImageOptimize come to light.
@frank-laemmer Discussed via the support ticket. For posterity, in case anyone else comes across this, the time constraints are implemented to avoid abuse of your hosting platform.
@khalwat I resolved the immediately affected website manually (i.e. ran the blocking job locally and uploaded the database).
Am I right in thinking that swapping to craft\queue\BaseBatchedJob is as simple as adjusting the extends of ResaveOptimizedImages? It looks like the property defaults are all sensible (i.e. batch chunk size, etc.).
> Is this limitation about deployment? 20 minutes for a deployment is a long time; too long, we think. We also have a 20-minute limit on SSH connections for similar reasons. Usually a misconfiguration is the cause, which can be resolved together with the client in support.
Sure, it makes sense as a default, but a way to override or change it when the client has extraordinary needs might be helpful.
> We will be in contact with Josh through our client support to see what we can do about this case, and of course we'll share here if new ideas for ImageOptimize come to light.
Great, let me know!
> Am I right in thinking that swapping to craft\queue\BaseBatchedJob is as simple as adjusting the extends of ResaveOptimizedImages? It looks like the property defaults are all sensible (i.e. batch chunk size, etc.).
Well, it will also require a bump in the minimum version of Craft that can use the code (it would need to be ^4.4.0), but beyond that I'm not sure if simply swapping the base class would do it, or if further adjustments would be needed.
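For anyone curious, here's a rough sketch of what a batched version might look like, assuming Craft 4.4+'s `BaseBatchedJob` contract (a `loadData()` method returning a `Batchable`, and `processItem()` called per element). The `volumeId` property, the element query, and the per-item logic are illustrative assumptions for this sketch, not ImageOptimize's actual implementation:

```php
<?php
// Hypothetical sketch only: a batched resave job built on Craft 4.4+'s
// craft\queue\BaseBatchedJob. Property names and the per-item logic are
// assumptions, not ImageOptimize's real code.

use Craft;
use craft\base\Batchable;
use craft\db\QueryBatcher;
use craft\elements\Asset;
use craft\queue\BaseBatchedJob;

class ResaveOptimizedImages extends BaseBatchedJob
{
    // Volume to process (assumed property for illustration)
    public ?int $volumeId = null;

    // Returns the full data set; Craft slices it into batches of
    // $this->batchSize (100 by default) and spawns a follow-up queue job
    // for each remaining batch, so no single job runs the whole set.
    protected function loadData(): Batchable
    {
        $query = Asset::find()->kind('image');
        if ($this->volumeId) {
            $query->volumeId($this->volumeId);
        }
        return new QueryBatcher($query);
    }

    // Called once per item within the current batch.
    protected function processItem(mixed $item): void
    {
        // Re-save the asset so the plugin's transforms are regenerated.
        Craft::$app->getElements()->saveElement($item);
    }

    protected function defaultDescription(): ?string
    {
        return 'Resaving optimized images';
    }
}
```

The appeal of this approach for the timeout problem above is that each batch runs as its own queue job, so a 20-minute execution cap only has to accommodate one batch rather than the entire volume, and a failed batch can be retried without redoing the others.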
**Is your feature request related to a problem? Please describe.**
For volumes with thousands of images, we often run into timeouts. The queue job then fails, which means we have to restart the process… ending up in a loop.
**Describe the solution you would like**
Ideally, instead of chunked/paged queues within a queue job, it’d be great for each batch/page to be split into its own queue job. That way it’s easy to just restart the one batch/page of 100.
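To illustrate the idea, here's a minimal sketch of the requested behavior: pushing one queue job per page up front instead of looping pages inside a single job, so a failed page of 100 can be retried on its own. The `offset`/`limit` job properties are assumptions for this sketch, not ImageOptimize's actual parameters:

```php
<?php
// Hypothetical sketch only: fan out one queue job per page so each job
// stays well under the host's execution limit and can fail/retry alone.
// The offset/limit properties on the job are assumed for illustration.

use Craft;
use craft\elements\Asset;

$pageSize = 100; // e.g. ASSET_QUERY_PAGE_SIZE
$total = Asset::find()->kind('image')->count();

for ($offset = 0; $offset < $total; $offset += $pageSize) {
    Craft::$app->getQueue()->push(new ResaveOptimizedImages([
        'offset' => $offset,   // assumed property
        'limit'  => $pageSize, // assumed property
    ]));
}
```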