Closed benatleith closed 1 year ago
Hmmm, I've not had much experience with load balancers and Craft, so I'll have to do some reading. I know it can be a little finicky.
Icon Picker stores its cache as a data cache, which I suppose is file-based (stored normally in storage/runtime/cache). Sounds like that might be the issue here?
Hi @engram-design - think I've figured out what's happening.
verbb\iconpicker\services\Cache::checkToInvalidate()
is calling filemtime() on the root SVG directory to determine whether the cache needs to be regenerated. We're deploying our app via CodeDeploy to each instance, and since our SVGs are baked into our deployment, the directory where they live has a different mtime on each box. So the cached timestamp flip-flops depending on which box the queue runs on. At least, I think that's what's going on.
edit: I think I'll just move the directory in question onto shared storage.
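The flip-flop described above can be reproduced in miniature. This is an illustrative Python sketch, not the plugin's actual PHP code: cache_is_stale() is a hypothetical stand-in for the mtime comparison that checkToInvalidate() performs, and the two temp directories stand in for the same icon folder deployed to two boxes at different times.

```python
import os
import tempfile
import time

def cache_is_stale(icon_dir: str, cached_mtime: float) -> bool:
    """Hypothetical mtime-based check: the cache is considered stale
    whenever the directory mtime differs from the stored timestamp."""
    return os.path.getmtime(icon_dir) != cached_mtime

# Simulate the same icon folder deployed to two boxes at different times.
box_a = tempfile.mkdtemp()
time.sleep(1.1)  # ensure a different mtime even on coarse filesystems
box_b = tempfile.mkdtemp()  # identical contents, later deploy time

cached = os.path.getmtime(box_a)      # cache warmed by the queue on box A
print(cache_is_stale(box_a, cached))  # False: box A agrees with the cache
print(cache_is_stale(box_b, cached))  # True: box B invalidates it again
```

Each time the queue lands on a different box, the check flips, so the two instances endlessly invalidate each other's cache.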
Ah, nice pickup. The idea was that it would be smart enough to detect when new icons were added or dropped into the configured folder. Otherwise, every time you add new icons you have to clear caches, which is a tad annoying, though only during development.
Happy to add a TTL setting that overrides this though, I think that makes sense.
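A TTL override could look roughly like the sketch below. This is a hypothetical illustration in Python (the setting name and function are assumptions, not the plugin's actual API): instead of comparing directory mtimes, the cache is only rebuilt once it is older than the configured TTL, which is deterministic across all boxes.

```python
import time

def cache_is_stale(cached_at: float, ttl_seconds: float) -> bool:
    """Hypothetical TTL check: ignore directory mtimes entirely and
    rebuild only when the cache is older than the configured TTL."""
    return (time.time() - cached_at) > ttl_seconds

cached_at = time.time()
print(cache_is_stale(cached_at, ttl_seconds=3600))         # False: fresh
print(cache_is_stale(cached_at - 7200, ttl_seconds=3600))  # True: expired
```

Because every instance evaluates the same cached timestamp against the same clock, the result no longer depends on which box the queue runs on.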
We discovered we were experiencing this exact issue this evening. We have pretty much the identical load-balanced setup. As a workaround, I've moved the assets to our shared volume, which should work well for our needs. A potential solution could be a config setting to disable the automatic invalidation, plus a console command to force it. We only ever change the icons via source code, so we'd just run the command when needed and no automatic monitoring would be required.
Should be fixed in 2.0.0
Description
We're running a load-balanced setup which is (we think) in accordance with P&T best practices: sessions and cache are in the database, and the storage and web/assets directories are on an NFS share. We have two EC2 instances in an auto-scaling group behind an ALB. When only one box is running, everything is fine. As soon as we scale up to two, we see a runaway situation in the queue:
This goes on ad infinitum until we shut down the second box. At one point there were 360 processes in the queue.
We've only got one sprite sheet, and it's part of the application code package deployed with CodeDeploy (i.e. the SVG does not reside on shared storage and so will have a different mtime on each box; could that be significant?).
Any ideas?
Steps to reproduce
Additional info
Additional context