magento / magento2


L2 caching information and optimization #37421

Open onlinebizsoft opened 1 year ago

onlinebizsoft commented 1 year ago

Summary

We are a website with many store views and many products, and our Magento Redis cache is really big because of how big our system is. I'm looking at L2 caching and hoping it will improve our network transfer, network latency, and so on.

However, the documentation about L2 caching seems to lack technical information. It is not clear how it works, because it is not possible to just store the whole remote cache in the L2 cache, and if the L2 cache is too small then using it will just add extra load on top (to save and remove L2 cache entries).

Can someone explain or share the experience with using this? CC @hostep @georgebabarus @Nuranto @jonathanribas @vzabaznov @Adel-Magebinary @igorwulff @drew7721 @ihor-sviziev

Examples

The L2 cache seems to just work as an extra cache level on the local web server machine
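
For reference, enabling the stock L2 cache is a configuration change in `app/etc/env.php`. The fragment below is a minimal sketch, not a drop-in config: the Redis host, port, and local cache directory are placeholder values, and the exact option names may differ between Magento versions, so check the official docs for your release.

```php
// app/etc/env.php (fragment) -- sketch only; 'redis.internal' and the
// cache_dir are placeholders, adjust for your environment.
'cache' => [
    'frontend' => [
        'default' => [
            'backend' => '\\Magento\\Framework\\Cache\\Backend\\RemoteSynchronizedCache',
            'backend_options' => [
                // Remote (shared) backend: Redis.
                'remote_backend' => '\\Magento\\Framework\\Cache\\Backend\\Redis',
                'remote_backend_options' => [
                    'server' => 'redis.internal',
                    'port' => '6379',
                    'database' => '0',
                ],
                // Local (L2) backend: files on each web node, e.g. tmpfs.
                'local_backend' => 'Cm_Cache_Backend_File',
                'local_backend_options' => [
                    'cache_dir' => '/dev/shm/',
                ],
            ],
        ],
    ],
],
```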

Proposed solution

We should be able to configure the L2 cache to keep certain cache keys, or find another way to make it more effective; otherwise it will be almost useless for a website with a big Redis cache.
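
The proposal above can be sketched as a wrapper that only admits whitelisted key prefixes (and reasonably small values) into the local level, while everything else goes straight to the remote store. A minimal illustration in Python; the class, the prefixes, and the dict-backed stores are all hypothetical stand-ins, not Magento APIs:

```python
class SelectiveL2Cache:
    """Two-level cache that keeps only selected keys locally.

    `local` and `remote` are plain dicts standing in for the
    local (L2) and remote (Redis) backends.
    """

    def __init__(self, local_prefixes, max_local_size=1024):
        self.local = {}
        self.remote = {}
        self.local_prefixes = tuple(local_prefixes)
        self.max_local_size = max_local_size

    def _keep_locally(self, key, value):
        # Admit a key into L2 only if it matches a whitelisted prefix
        # and is not too large to replicate on every web node.
        return (key.startswith(self.local_prefixes)
                and len(value) <= self.max_local_size)

    def save(self, key, value):
        self.remote[key] = value
        if self._keep_locally(key, value):
            self.local[key] = value
        else:
            self.local.pop(key, None)  # avoid keeping a stale local copy

    def load(self, key):
        if key in self.local:
            return self.local[key]   # local hit: no network call
        return self.remote.get(key)  # everything else hits the remote store
```

With such a filter, huge entries never occupy local memory, at the cost of always paying the network round trip for them.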

Release note

No response

Triage and priority

m2-assistant[bot] commented 1 year ago

Hi @onlinebizsoft. Thank you for your report. To speed up processing of this issue, make sure that the issue is reproducible on a vanilla Magento instance following the Steps to reproduce. To deploy a vanilla Magento instance on our environment, add a comment to the issue:

ihor-sviziev commented 1 year ago

Sorry, we never used L2 cache in Magento.

onlinebizsoft commented 1 year ago

P/S: In my use case, we have some quite big cache keys because of the number of websites/store views and the number of different modules, so our cache size is very big and L2 seems not to be applicable. However, we plan to update the logic to keep only certain cache keys in L2 caching, while the rest goes to the remote cache storage. This should help reduce data transfer over the network and reduce the remote Redis engine's CPU usage from serving big keys. Just sharing our plan; if anyone has a better idea, please feel free to discuss.

At the beginning, I expected something magic like "preload keys" to work especially well with L2 caching, but it doesn't. There is nothing special behind the L2 caching logic.

Nuranto commented 1 year ago

Hi @onlinebizsoft

I'm not sure I understand why you don't want to use L2 caching fully. What would be the benefit of using it for only part of the cache? A network call is costly for small entries too.

onlinebizsoft commented 1 year ago

@Nuranto

1. We can't use it fully because our total Redis size is much bigger than the total web server memory, so it makes no sense to replicate everything in L2; that would lead to very frequent save/remove operations.
2. I know the main cost is the network call (even though we have all components within a private network). However, I think some very big keys have much more impact than the small ones, so at least I can eliminate the big keys from network transfer and monitor the results from there.

Do you have a contact? I can connect with you and we can share some experience of how it is going.

Nuranto commented 1 year ago

Maybe the issue is that you have more cache than necessary? How much memory is Redis using? Do you use remote storage? There's a known bug that produces a huge amount of unnecessary cache: https://github.com/magento/magento2/issues/35820

onlinebizsoft commented 1 year ago

@Nuranto you know, a website with 100K products can have a few GB of cache, and we have around 100 websites on the same installation, so you can imagine how big it can be. And of course we don't have any web server with that much memory; the memory should be spent on PHP processes instead of caching.

ihor-sviziev commented 1 year ago

I just wonder, maybe separating the cache for each website might help you?

onlinebizsoft commented 1 year ago

@ihor-sviziev yes, actually that was an option I was considering. However, it would make the infrastructure more complex (deployment process, ...) and lead to ineffective resource sharing, since we would always need to reserve enough resources for each group of websites. So I'm trying simpler options first.

Nuranto commented 1 year ago

We have as many products, but only 3 websites indeed; I suppose that makes the difference. Also, we have server nodes with 192 GB of RAM, which surely helps... I think there's no option left for you other than rewriting (or writing a new) L2 caching backend (see the RemoteSynchronizedCache class).

onlinebizsoft commented 1 year ago

An off-topic question: do you use persistent database connections (and/or Redis persistent connections), @Nuranto @ihor-sviziev?

ihor-sviziev commented 1 year ago

nope :(

jonathanribas commented 1 year ago

Hi @onlinebizsoft, sorry for the late reply, we did some tests yesterday enabling L2 cache on EFS (not /dev/shm).

We have very good results regarding network usage on Redis side, it has clearly dropped!

Network usage without L2 cache:

[screenshot]

Network usage with L2 cache enabled:

[screenshot]

BUT we have very poor results in terms of PHP transaction time when enabling L2 cache:

PHP transaction time without L2 cache:

[screenshot]

PHP transaction time with L2 cache:

[screenshot]

onlinebizsoft commented 1 year ago

@jonathanribas that is a bit strange; we have enabled L2 on our development environment and didn't notice such slowness. We are not ready to go live yet (our L2 implementation is a custom one that applies our own logic for which keys live in L2 only).

jonathanribas commented 1 year ago

@onlinebizsoft, I've run this same test several times, without / with / without / with L2 cache, and I got the same results. Those environments are clean and without traffic.

m2-assistant[bot] commented 11 months ago

Hi @engcom-Hotel. Thank you for working on this issue. In order to make sure that the issue has enough information and is ready for development, please read and check the following instructions:

engcom-Hotel commented 11 months ago

Hello @onlinebizsoft,

Thanks for the report and collaboration!

After going through the main description, it seems like this could be a good feature to have: configuring the L2 cache to keep certain cache keys, or finding a way to make it more effective.

Hence marking this issue as a feature request.

Thanks

JamesFX2 commented 9 months ago

@jonathanribas why did you choose to use EFS rather than /dev/shm? This feature is surely built around an in-memory store rather than disk reads of up to hundreds of identifiers?

@onlinebizsoft did you ever follow up on this? To get what you'd want would probably involve storing last access times and purging based on that.
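
Purging by last access time is essentially an LRU eviction policy. As a rough illustration of what that would look like for a bounded local cache, a Python sketch using `collections.OrderedDict` (capacity here counts entries, not bytes):

```python
from collections import OrderedDict

class LruLocalCache:
    """Bounded local cache that tracks access order and purges the
    least recently used entries first."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def load(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # record the access
        return self.entries[key]

    def save(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # purge least recently used
```

This keeps the hot subset local without tracking timestamps explicitly; the ordering of the dict is the access record.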

onlinebizsoft commented 9 months ago

> @jonathanribas why did you choose to use EFS rather than /dev/shm - this is surely built around use of in-memory store rather than disk reads of up to hundreds of identifiers?
>
> @onlinebizsoft did you ever follow up on this? To get what you'd want would probably involve storing last access times and purging based on that.

I don't think it is a good approach, because it would add too much logic and extra processing.

JamesFX2 commented 9 months ago

@onlinebizsoft it depends how you implement it. I agree that adding a write to every read isn't the best, but there are newer approaches to storing in-memory cache. I am not an expert in this field, but I've heard people talking enthusiastically about Swoole (in the context of explaining Laravel Octane).

jonathanribas commented 8 months ago

@JamesFX2, we run M2 on k8s and we don't want to change the k8s default size setting for /dev/shm, which is 64 MB.