silverstripe / silverstripe-s3

Silverstripe module to store assets in S3 rather than on the local filesystem (SS4/SS5 only)
BSD 3-Clause "New" or "Revised" License

feat: implement CacheItemsTrait for better performance handling #70

Closed: wilr closed this 5 months ago

wilr commented 5 months ago

As reported in https://github.com/silverstripe/silverstripe-s3/issues/67, https://github.com/silverstripe/silverstripe-s3/issues/48 and others, the performance of this module on Flysystem 3 is bad, very bad. This is my attempt at overhauling the performance of the module to get better results out of the box.

In the original upgrade we picked up jgivoni/flysystem-cache-adapter thinking it would do the job, but it actually caused more headaches; it turned out easier to implement something tailored to our needs.

The new CacheItems integrates natively with the existing CacheInterface within Silverstripe, so it works out of the box but can also be backed by Memcached, Redis, etc.
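For anyone wanting to point that cache at a different backend, the usual Silverstripe pattern is to resolve a named PSR-16 cache from the Injector. A minimal sketch (the `s3-metadata` cache name here is just a placeholder, not necessarily what the module registers):

```php
<?php

use Psr\SimpleCache\CacheInterface;
use SilverStripe\Core\Injector\Injector;

// Resolve a named PSR-16 cache. Which backend sits behind it (filesystem by
// default, Redis/Memcached if overridden via Injector config) is transparent
// to the calling code.
$cache = Injector::inst()->get(CacheInterface::class . '.s3-metadata');

// Standard PSR-16 usage: read, fall back, write with a TTL.
$metadata = $cache->get('file-metadata-key');
if ($metadata === null) {
    $metadata = ['size' => 12345, 'mimetype' => 'image/jpeg'];
    $cache->set('file-metadata-key', $metadata, 3600);
}
```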

Performance-wise, with a warm cache it makes 0 calls to AWS. On a cold cache it makes 3N calls for N published files (doesObjectExistV2, readObject, fetchFileMetadata).
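To make the cold/warm distinction concrete, here is a rough cache-first sketch of the idea; the function is illustrative only and uses plain AWS SDK calls (getObject/headObject) rather than the module's actual internals:

```php
<?php

use Aws\S3\S3Client;
use Psr\SimpleCache\CacheInterface;

/**
 * Illustrative only: a warm cache answers straight from PSR-16 (0 AWS calls);
 * a cold cache needs three round trips per file before the result is cached.
 */
function getFileMetadata(CacheInterface $cache, S3Client $s3, string $bucket, string $key): array
{
    $cacheKey = md5($key);

    $cached = $cache->get($cacheKey);
    if ($cached !== null) {
        return $cached; // warm cache: zero AWS calls
    }

    // Cold cache: three round trips to AWS for a published file.
    $metadata = ['exists' => $s3->doesObjectExistV2($bucket, $key)];      // 1. existence

    if ($metadata['exists']) {
        // 2. read: in the real adapter this would be streamed back to the caller
        $contents = (string) $s3->getObject(['Bucket' => $bucket, 'Key' => $key])['Body'];

        // 3. metadata
        $head = $s3->headObject(['Bucket' => $bucket, 'Key' => $key]);

        $metadata['size']     = $head['ContentLength'] ?? strlen($contents);
        $metadata['mimetype'] = $head['ContentType'] ?? null;
    }

    $cache->set($cacheKey, $metadata);

    return $metadata;
}
```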

wilr commented 5 months ago

Note for anyone testing: there are several YAML changes. The intention is that this release will be a new major version rather than a minor one.

obj63mc commented 5 months ago

Will test this in the morning, but at first glance using a new Asset Store looks like it will bring the biggest performance gains; it is essentially what I have already implemented custom on most of my sites.

Also, for sites that have a ton of files or a lot of users, this issue is causing some problems as well, mainly in the asset admin or in dialogs when viewing an individual image's details: https://github.com/silverstripe/silverstripe-framework/issues/11272

Last thing: I noticed some of the other Silverstripe repos have pull requests to stop using self:: for calling a class's static methods, e.g. https://github.com/silverstripe/silverstripe-cms/pull/2955. I saw a few instances of that in the new files, so it might be worth checking.
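For anyone unfamiliar with the distinction those PRs are about, the usual reasoning is late static binding; a trivial example (not code from this module):

```php
<?php

class BaseTask
{
    public static function getDescription(): string
    {
        return 'base task';
    }

    public function reportWithSelf(): string
    {
        // self:: binds to BaseTask at compile time, so a subclass override
        // of getDescription() is ignored here.
        return self::getDescription();
    }

    public function reportWithStatic(): string
    {
        // static:: uses late static binding and respects the subclass
        // override, which is generally why it is preferred.
        return static::getDescription();
    }
}

class NightlyTask extends BaseTask
{
    public static function getDescription(): string
    {
        return 'nightly task';
    }
}

$task = new NightlyTask();
echo $task->reportWithSelf();   // "base task"
echo $task->reportWithStatic(); // "nightly task"
```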

obj63mc commented 5 months ago

Testing currently on one of our larger sites, with over 20k users, 17k files and 22k pages. Overall it seems to be working great. Initial load in the asset admin is still a bit slow, but I don't know if there will ever be anything that can be done about that when loading metadata for 25+ files at once. Will monitor for the next day or so and let you know if I see any issues.

obj63mc commented 5 months ago

One thing I am not 100% sure about is whether there is a way to disable the flushable portion of this cache. The main reason is that I am using Redis to store the cache but hosting on Heroku; when they restart the server daily, or we deploy a code change, this cache is flushed even though it doesn't need to be.

@wilr - do you know if there is a way to disable that? I think it would be a huge help for people on hosting platforms like Heroku.
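For context, the "flushable" behaviour comes from Silverstripe's SilverStripe\Core\Flushable interface, whose static flush() hook runs on every flush (e.g. ?flush=1 or a deploy). A minimal sketch of how an opt-out flag could look, using a hypothetical class and cache name rather than the module's actual ones:

```php
<?php

use Psr\SimpleCache\CacheInterface;
use SilverStripe\Core\Config\Configurable;
use SilverStripe\Core\Flushable;
use SilverStripe\Core\Injector\Injector;

// Hypothetical sketch: a flushable cache holder with a config switch so
// platforms like Heroku (which flush on every restart/deploy) can keep the
// still-valid S3 metadata cache.
class S3MetadataCacheFlusher implements Flushable
{
    use Configurable;

    private static $flush_on_build = true;

    public static function flush()
    {
        if (!static::config()->get('flush_on_build')) {
            return; // opted out, e.g. via YAML: flush_on_build: false
        }

        Injector::inst()
            ->get(CacheInterface::class . '.s3-metadata')
            ->clear();
    }
}
```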

obj63mc commented 5 months ago

Figured out my issue with the PublicCDN adapter; it was due to other configuration changes around adding cache headers to the public filesystem. If you are good with the configuration option on the flush, then this should be good to go. Will continue to monitor throughout the day.

obj63mc commented 5 months ago

We also tested this out yesterday afternoon on another site that has about 10k image assets. All has been working fine and as expected. I would say this is good to go.

wilr commented 5 months ago

Merged now @obj63mc. I'm happy it's a good initial step forward without making any major philosophical changes.

Further work could include (I'm happy to hear people's thoughts on the trade-off between 'correct' and 'fast'):

fileSize, mimeType and publicUrl each hit AWS to be worked out when they are not cached.

Given that this metadata probably doesn't change over time, it could be saved to the db. That would cut out 3 queries per file on a cold cache.
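If anyone wants to experiment with persisting that, one way to prototype it would be a DataExtension on File that stores the values when the record is written; the class and field names here are hypothetical, not something this module ships:

```php
<?php

use SilverStripe\ORM\DataExtension;

// Hypothetical: persist S3-derived metadata on the File record so a cold
// cache doesn't need three AWS calls just to answer size/type/URL.
class S3FileMetadataExtension extends DataExtension
{
    private static $db = [
        'S3FileSize'  => 'BigInt',
        'S3MimeType'  => 'Varchar(255)',
        'S3PublicUrl' => 'Varchar(2083)',
    ];

    public function onBeforeWrite()
    {
        $file = $this->getOwner();

        // Only hit the asset backend when nothing is stored yet.
        if (!$file->S3FileSize) {
            $file->S3FileSize  = $file->getAbsoluteSize();
            $file->S3MimeType  = $file->getMimeType();
            $file->S3PublicUrl = $file->getAbsoluteURL();
        }
    }
}
```

It would still need applying to SilverStripe\Assets\File via the usual extensions YAML config plus a dev/build, and some invalidation when the underlying object changes.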