cachewerk / relay

The next-generation caching layer for PHP powered by Redis.
https://relay.so
MIT License

Sites suddenly returning a 502 #61

Closed danidorado closed 1 year ago

danidorado commented 1 year ago

I'll post a file with the logs later. I noticed many of these entries:

[1681514675.547182 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.547213 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.552938 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.552970 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.565978 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.566009 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.568875 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.568907 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.569310 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.569326 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.571970 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.571995 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.573129 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.573147 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.631428 DBG 14345] cache.c:49 EPOCH[destroy]: 7f465e7d3a30 [epoch: 1, active: 0] [1681514675.937745 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.937778 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.939433 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.939456 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.940418 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.940435 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.941519 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.941535 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.943199 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.943218 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.945399 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.945420 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.945545 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.945556 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.945783 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.945793 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.946074 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.946088 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.947162 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.947177 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.951198 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) 
[1681514675.951221 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.952033 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.952049 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.952296 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.952308 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.953883 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.953901 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.954703 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.954720 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.954832 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.954844 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.956612 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.956631 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.957411 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.957427 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.957537 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.957543 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.959257 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.959278 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.959974 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.959991 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.960101 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.960113 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.961610 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.961630 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.962965 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.962985 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.963855 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.963872 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.963997 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.964002 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.964146 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.964157 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.965104 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.965120 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.966145 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.966162 NOT 
5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.966284 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.966295 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.966533 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.966544 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.968747 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.968769 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.969335 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.969350 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.970664 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.970684 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.971793 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.971812 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.972374 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.972390 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.974977 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.975001 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.975437 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.975452 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.976654 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.976674 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.977219 DBG 5866] relay.c:2629 POLL - POLLIN (fd: 8) [1681514675.977876 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.977893 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.978195 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.978209 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb [1681514675.982996 NOT 5903] cache.c:1965 Endpoint 'tcp://default@172.30.0.29:6379?cache=1' has maxed out on rdbs (32) [1681514675.983026 NOT 5903] commands.c:8091 ENDPOINTS: Couldn't get new rdb

danidorado commented 1 year ago

[1681514676.054666 DBG 5903] relay.c:2736 PLINK - installing handlers for 1 links [1681514676.054675 DBG 5903] cache.c:321 ENDPOINT[0] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55549dee0 keys: 16 [1681514676.054680 DBG 5903] cache.c:321 ENDPOINT[1] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5554ad0f0 keys: 220 [1681514676.054684 DBG 5903] cache.c:321 ENDPOINT[2] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5554acd40 keys: 0 [1681514676.054689 DBG 5903] cache.c:321 ENDPOINT[3] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55537c7b0 keys: 235 [1681514676.054694 DBG 5903] cache.c:321 ENDPOINT[4] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55550df80 keys: 397 [1681514676.054698 DBG 5903] cache.c:321 ENDPOINT[5] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5554d7470 keys: 111 [1681514676.054703 DBG 5903] cache.c:321 ENDPOINT[6] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5551e19e0 keys: 98 [1681514676.054718 DBG 5903] cache.c:321 ENDPOINT[7] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5553e0430 keys: 0 [1681514676.054722 DBG 5903] cache.c:321 ENDPOINT[8] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5554649b0 keys: 10 [1681514676.054726 DBG 5903] cache.c:321 ENDPOINT[9] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555452de0 keys: 78 [1681514676.054730 DBG 5903] cache.c:321 ENDPOINT[10] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5551e6700 keys: 44 [1681514676.054734 DBG 5903] cache.c:321 ENDPOINT[11] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55543a940 keys: 116 [1681514676.054739 DBG 5903] cache.c:321 ENDPOINT[12] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5553acf60 keys: 8 [1681514676.054743 DBG 5903] cache.c:321 ENDPOINT[13] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5552f8b40 keys: 39 [1681514676.054747 DBG 5903] cache.c:321 ENDPOINT[14] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555300560 keys: 4 [1681514676.054751 DBG 5903] cache.c:321 ENDPOINT[15] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555300910 keys: 12 [1681514676.054756 DBG 5903] cache.c:321 ENDPOINT[16] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555300500 keys: 277 [1681514676.054760 DBG 5903] cache.c:321 ENDPOINT[17] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55525bc70 keys: 38 [1681514676.054764 DBG 5903] cache.c:321 ENDPOINT[18] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5551f6d40 keys: 25 [1681514676.054767 DBG 5903] cache.c:321 ENDPOINT[19] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555302840 keys: 42 [1681514676.054772 DBG 5903] cache.c:321 ENDPOINT[20] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555302490 keys: 182 [1681514676.054775 DBG 5903] cache.c:321 ENDPOINT[21] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5552ffcb0 keys: 230 [1681514676.054779 DBG 5903] cache.c:321 ENDPOINT[22] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55525b8f0 keys: 168 [1681514676.054783 DBG 5903] cache.c:321 ENDPOINT[23] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555212490 keys: 93 [1681514676.054787 DBG 5903] cache.c:321 ENDPOINT[24] - 
'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555223050 keys: 88 [1681514676.054790 DBG 5903] cache.c:321 ENDPOINT[25] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff555210e20 keys: 4 [1681514676.054795 DBG 5903] cache.c:321 ENDPOINT[26] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5551fef70 keys: 62 [1681514676.054799 DBG 5903] cache.c:321 ENDPOINT[27] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5551073e0 keys: 28 [1681514676.054803 DBG 5903] cache.c:321 ENDPOINT[28] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5550f2a00 keys: 65 [1681514676.054808 DBG 5903] cache.c:321 ENDPOINT[29] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff5550d4ee0 keys: 12 [1681514676.054812 DBG 5903] cache.c:321 ENDPOINT[30] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55509e540 keys: 303 [1681514676.054816 DBG 5903] cache.c:321 ENDPOINT[31] - 'tcp://default@172.30.0.29:6379?cache=1' ep: 7ff55508c3d0, rdb: 7ff55508c500 keys: 46

danidorado commented 1 year ago

I think those are requests that are not set up as virtual hosts in nginx but somehow get routed to my server from really old domains that might have been hosted there at some point, where the DNS was never changed at the clients' registrars.

tillkruss commented 1 year ago

@michael-grunder Too many PHP workers for too few endpoint databases?

@danidorado: Can you post php --ri relay?

danidorado commented 1 year ago

[ec2-user@host ~]$ php --ri relay

relay

Relay Support => enabled
Relay Version => 0.6.2
Available cache => 1073687680
Available serializers => php, json, igbinary, msgpack
Available compression => lzf, zstd, lz4
Binary UUID => eb7c3243-d8ef-424e-897e-47211c26a1b6
Git SHA => a8fb6c4e60ac5afdcd41310ff13b1e8baa3867f5
Allocator => relay
License state => unknown
License memory cap => 0
License request id =>

relay.enabled => true
relay.key => XXXX-XXXX-XXXXXX-XXXXXXX-XXXXXXX-27BVCF
relay.maxmemory => 1073741824
relay.maxmemory_pct => 75
relay.eviction_policy => lru
relay.eviction_sample_keys => 128
relay.initial_readers => 128
relay.invalidation_poll_usec => 5
relay.pconnect_default => 1
relay.max_endpoint_dbs => 32
relay.loglevel => debug
relay.logfile => /export/backups/relay.log

danidorado commented 1 year ago

The log file in debug mode is growing really big, do you have any recommendation to narrow this down?

Would you like me to run some grep commands to identify possibly interesting records to analyze?

danidorado commented 1 year ago

Hello @michael-grunder,

I'm going to be near my computer for the next few hours, in case you'd like to gather insights to debug this.

Thanks

michael-grunder commented 1 year ago

Apologies @danidorado, I think we're in opposite time zones.

To clarify, are you getting 502 errors all the time or only intermittently?

The message about not being able to acquire a new endpoint database isn't necessarily an error; it just means the configured maximum (relay.max_endpoint_dbs, 32 in your config) has been reached when Relay tries to create a new one. Relay should continue to function, so the error must be somewhere else.

The debug log file is very verbose, so you could reduce the log level and see if any more severe warnings show up.
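
For reference, this maps to the relay.loglevel directive already visible in the php --ri relay output above. A minimal sketch of the change (the exact level name to use, e.g. warning or error, should be checked against Relay's documentation):

; relay.ini / php.ini — lower the verbosity from debug, then restart php-fpm
relay.loglevel = warning
relay.logfile = /export/backups/relay.log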

Can you post how your FPM pool(s) are configured (e.g. static, dynamic, number of servers, etc.)?

danidorado commented 1 year ago

Hi Michael, yeah, we're probably in opposite parts of the world right now. I'm currently in the Maldives (GMT+5), but no worries about that. I'll be connected for a while, so feel free to ask me for info.

Errors are popping up in an odd way; I can't identify when or why they happen at the moment, but I know they are related to Relay because as soon as I reboot php-fpm and Relay releases the memory (probably used by the master process), everything starts to work again.

Let me know at which level I should configure the logs to narrow it down.

pm = dynamic

pm.max_children = 240
pm.start_servers = 40
pm.min_spare_servers = 16
pm.max_spare_servers = 64
pm.max_requests = 2500

Thanks in advance

tillkruss commented 1 year ago

@danidorado: How many CPU cores does this machine have?

danidorado commented 1 year ago

8

danidorado commented 1 year ago

t3.2xlarge

tillkruss commented 1 year ago

Aha, since pm = dynamic, pm.max_children is ignored, so you're running anywhere from 16 to 64 FPM workers simultaneously. @michael-grunder What do you reckon about bumping relay.max_endpoint_dbs from 32 to something higher to test if that resolves it?
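
For reference, that would be a change to the relay.max_endpoint_dbs directive shown in the php --ri relay output above. A minimal sketch (the value 64 is only an illustrative guess, not an official recommendation):

; relay.ini / php.ini — allow more per-endpoint databases, then restart php-fpm
relay.max_endpoint_dbs = 64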

michael-grunder commented 1 year ago

It's worth a shot, but Relay should not ever stop working so something else must be going on.

I am going to attempt to replicate this by spinning up a similar setup.

Edit: @danidorado Is it possible to run a little PHP script, before cycling PHP-FPM, when you notice the 502 errors? Something like this would work:

echo json_encode(\Relay\Relay::stats());

That dumps tons of information about Relay's state.

danidorado commented 1 year ago

OK, so you just want me to put that file on my filesystem and, when it gets stuck, run php yourscript.php from the terminal, and that will output the stats?

In that case, I'll pipe the output to a file and send it to you.

Let me know if that will work, and if you need further info. I'll be watching some of the sites that I noticed failing, to see if it happens again.

Thanks

danidorado commented 1 year ago

At the moment I'm getting this output:

{"usage":{"total_requests":1,"active_requests":1,"max_active_requests":1,"free_epoch_records":128},"stats":{"requests":0,"misses":0,"hits":0,"errors":0,"dirty_skips":0,"empty":0,"oom":0,"ops_per_sec":0,"bytes_sent":0,"bytes_received":0,"walltime":0},"memory":{"total":1073741824,"limit":1073741824,"active":54144,"used":54144},"endpoints":[],"hashes":{"pid":25968,"runid":"0187965b-f18a-718a-89ea-ad374c1146c6"}}

michael-grunder commented 1 year ago

Apologies for not clarifying, but you'll need to run it on the fpm process. Put it somewhere that you can access via a browser, and then hit it when you see there are 502s happening.

It needs to be run via the browser, because that's where Relay is storing all of the persistent connections for the various workers.
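
For anyone following along, a minimal sketch of such a drop-in file (the filename relay-stats.php and the header are assumptions; the only Relay call used is the Relay::stats() method mentioned above):

<?php
// relay-stats.php — hypothetical drop-in: place it in a web-accessible directory
// and request it through the browser so it runs inside an FPM worker, where the
// persistent Relay connections live (the CLI shows empty endpoints instead).
header('Content-Type: application/json');

echo json_encode(\Relay\Relay::stats(), JSON_PRETTY_PRINT);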

danidorado commented 1 year ago

{"usage":{"total_requests":95605,"active_requests":10,"max_active_requests":80,"free_epoch_records":119},"stats":{"requests":10636248,"misses":2088597,"hits":8124810,"errors":0,"dirty_skips":0,"empty":0,"oom":0,"ops_per_sec":45,"bytes_sent":2085478085,"bytes_received":15418161734,"walltime":2306873643},"memory":{"total":1073741824,"limit":-1,"active":85552128,"used":78503712},"endpoints":{"tcp:\/\/default@172.30.0.29:6379?cache=1":{"connections":[{"used":1,"dirty":0,"keys":[3393,258,557,24,504,602,24],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[3323,554,159,24,646,401,15],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[3546,272,159,125,599,271,176],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2151,204,68,103,298,321,9],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2105,610,86,15,191,503,48],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2573,698,51,736,154,614,5],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1798,489,72,21,446,371,17],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2131,291,131,187,282,414,6],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[5,18,3],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[57,10,13],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1773,595,140,9,338,409,24],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1633,49,17,67,287],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1475,30,26,4,15,316,10],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2169,67,39,228,346,428],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1170,33,18,556,96,288,29],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1498,18,19,447,140,15],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[940,55,25,48,4,12],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1035,21,160,159,20,187,1],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1449,67,6,8,24,292,14],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2866,30,127,40,180,186,46],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1834,356,183,2,210,151,24],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[3863,50,110,3,67,151,32],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2381,20,171,4,460,193,345],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1615,361,20,130,235,236,8],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[928,96,24,2,357,276,110],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1166,15,142,144,320,12],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[2031,389,46,2,80,182,6],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[12596,11,28,114,223,19],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[780,17,24,1,38,223,6],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1662,28,50,197,82,159,90],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1675,18,46,402,237,51],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[3248,23,8,51,25,153,10],"flushes":1,"last_flush":1681572871}],"redis":{"redis_version":"7.0.7","used_memory":350263136,"used_memory_peak":4712372576,"total_system_memory":-1,"tracking_total_keys":151334,"maxmemory":2488396677,"maxmemory_policy":"volatile-lru","updated":1681855378521105}}},"hashes":{"pid":2626,"runid":"0187858d-b6c0-76c0-8ddb-aefd54f6e466"}}

danidorado commented 1 year ago

it might be happening

{"usage":{"total_requests":20952,"active_requests":307,"max_active_requests":308,"free_epoch_records":2},"stats":{"requests":4511109,"misses":1617600,"hits":2644905,"errors":0,"dirty_skips":0,"empty":0,"oom":0,"ops_per_sec":52,"bytes_sent":1290602896,"bytes_received":16192724581,"walltime":3322815554},"memory":{"total":1073741824,"limit":-1,"active":37397328,"used":30670032},"endpoints":{"tcp:\/\/default@172.30.0.29:6379?cache=1":{"connections":[{"used":1,"dirty":0,"keys":[288,996,1276,237],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1506,301,24],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[195,1420,3,107],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[278,397,214,195],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[756,587,11,70],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[172,676,4,281],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[16,108],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[518,566,3,54],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[76,855,150,38],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[402,636,86],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[351,785,28,217],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[171,446,154,80,5],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[208,1346,46],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[127,748,218],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[27,292,8,60,73],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[226,377,75],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[136,1111,322,177],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[30,578,156,133],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[142,234,68,101],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[234,212,171,34,59],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[1899,136,143,122],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[147,697,185,87],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[163,382,328,90,219],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[847,471,116],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[16,30],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[18,340],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[71,548,83,230,63],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[113,26,2,36],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[169,657,34,12,18],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[68,475],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[235,765,130],"flushes":0,"last_flush":0},{"used":1,"dirty":0,"keys":[414,375,176,184],"flushes":0,"last_flush":0}],"redis":{"redis_version":"7.0.7","used_memory":127733224,"used_memory_peak":4712372576,"total_system_memory":-1,"tracking_total_keys":69733,"maxmemory":2488396677,"maxmemory_policy":"volatile-lru","updated":1682539755695090}}},"hashes":{"pid":20955,"runid":"0187bdba-36cd-76cd-8072-e19b2ba5fdd2"}}

tillkruss commented 1 year ago
{
  "usage": {
    "total_requests": 20952,
    "active_requests": 307,
    "max_active_requests": 308,
    "free_epoch_records": 2
  },
  "stats": {
    "requests": 4511109,
    "misses": 1617600,
    "hits": 2644905,
    "errors": 0,
    "dirty_skips": 0,
    "empty": 0,
    "oom": 0,
    "ops_per_sec": 52,
    "bytes_sent": 1290602896,
    "bytes_received": 16192724581,
    "walltime": 3322815554
  },
  "memory": {
    "total": 1073741824,
    "limit": -1,
    "active": 37397328,
    "used": 30670032
  },
  "endpoints": {
    "tcp://default@172.30.0.29:6379?cache=1": {
      "connections": [
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            288,
            996,
            1276,
            237
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            1506,
            301,
            24
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            195,
            1420,
            3,
            107
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            278,
            397,
            214,
            195
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            756,
            587,
            11,
            70
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            172,
            676,
            4,
            281
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            16,
            108
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            518,
            566,
            3,
            54
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            76,
            855,
            150,
            38
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            402,
            636,
            86
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            351,
            785,
            28,
            217
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            171,
            446,
            154,
            80,
            5
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            208,
            1346,
            46
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            127,
            748,
            218
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            27,
            292,
            8,
            60,
            73
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            226,
            377,
            75
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            136,
            1111,
            322,
            177
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            30,
            578,
            156,
            133
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            142,
            234,
            68,
            101
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            234,
            212,
            171,
            34,
            59
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            1899,
            136,
            143,
            122
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            147,
            697,
            185,
            87
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            163,
            382,
            328,
            90,
            219
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            847,
            471,
            116
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            16,
            30
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            18,
            340
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            71,
            548,
            83,
            230,
            63
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            113,
            26,
            2,
            36
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            169,
            657,
            34,
            12,
            18
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            68,
            475
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            235,
            765,
            130
          ],
          "flushes": 0,
          "last_flush": 0
        },
        {
          "used": 1,
          "dirty": 0,
          "keys": [
            414,
            375,
            176,
            184
          ],
          "flushes": 0,
          "last_flush": 0
        }
      ],
      "redis": {
        "redis_version": "7.0.7",
        "used_memory": 127733224,
        "used_memory_peak": 4712372576,
        "total_system_memory": -1,
        "tracking_total_keys": 69733,
        "maxmemory": 2488396677,
        "maxmemory_policy": "volatile-lru",
        "updated": 1682539755695090
      }
    }
  },
  "hashes": {
    "pid": 20955,
    "runid": "0187bdba-36cd-76cd-8072-e19b2ba5fdd2"
  }
}
danidorado commented 1 year ago

Hello, I think something is going on with how the used memory is displayed.

My used memory has been a bit too low most of the time lately, and I very often see inconsistencies between the PHP page that displays real-time stats and the WordPress widget: the memory values don't match most of the time, with big differences in MB, and the WordPress widget also looks stuck when I move between different multisite installations.

tillkruss commented 1 year ago

What do the Relay metrics say in Settings > Object Cache?

danidorado commented 1 year ago

I have the metrics disabled.

Definitely something is wrong; this widget has looked like this for more than 24 hours. It can't possibly be only 4 MB, and the WordPress dashboard has been stuck at 7 MB since then as well.

Stat Used Total % Meter
Shared allocation 4,992,160 1,073,741,824 0%  
Limit 4,992,160 -1 -499216000%  
tillkruss commented 1 year ago

There isn't enough information here to go on.

danidorado commented 1 year ago

You tell me how we debug this; I'm happy to assist.

danidorado commented 1 year ago

The WP Dashboard widget has been stuck outputting these numbers since yesterday:

Status: Connected
Drop-in: Valid
Cache: 33,799 objects
Relay: 7 MB of 1 GB

Really weird that it got stuck at 7 MB for so long.

I've activated metrics in a couple of installations, do you want me to enable it on all of them?

danidorado commented 1 year ago

The analytics are not looking good:

hit ratio far below 30%: more than 1.2M misses and no more than 400k hits (roughly 400k out of 1.6M, i.e. about 25%)
active memory fluctuating between only 4–8 MB

danidorado commented 1 year ago
Screenshot 2023-05-07 at 08 13 26
danidorado commented 1 year ago
Screenshot 2023-05-07 at 11 57 40 Screenshot 2023-05-07 at 11 57 03
danidorado commented 1 year ago

Current status: still stuck on the same numbers described above.

{ "usage":{ "total_requests":30203, "active_requests":146, "max_active_requests":248, "free_epoch_records":103 }, "stats":{ "requests":2920067, "misses":1994176, "hits":665915, "errors":0, "dirty_skips":0, "empty":0, "oom":0, "ops_per_sec":289, "bytes_sent":914596442, "bytes_received":9624140365, "command_usec":1944823047, "rinit_usec":1991951, "rshutdown_usec":486485, "sigio_usec":2782 }, "memory":{ "total":1073741824, "limit":-1, "active":6867200, "used":4730992 }, "endpoints":{ "tcp:\/\/default@172.30.0.29:6379?cache=1":{ "connections":[ { "used":1, "dirty":0, "keys":[ 27 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 121, 9 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 1, 1557 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 98 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 128, 4 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 67 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 19 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 18 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 15 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 16 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 10 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[ 60 ], "flushes":0, "last_flush":0 }, { "used":1, "dirty":0, "keys":[

           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              10
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              14
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[

           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              68
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              14
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              52
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              71
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              1
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              82,
              28
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              19
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              64
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              29
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[

           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              15,
              18
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              25,
              3
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              12
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              14
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              119,
              19
           ],
           "flushes":0,
           "last_flush":0
        },
        {
           "used":1,
           "dirty":0,
           "keys":[
              13,
              51
           ],
           "flushes":0,
           "last_flush":0
        }
     ],
     "redis":{
        "redis_version":"7.0.7",
        "used_memory":304676704,
        "used_memory_peak":308789704,
        "total_system_memory":-1,
        "tracking_total_keys":85173,
        "maxmemory":2488396677,
        "maxmemory_policy":"volatile-lru",
        "updated":1683471670032785
     }
  }

}, "hashes":{ "pid":23645, "runid":"0187f15f-8065-7065-8ba5-1496c3bc112c" } }

tillkruss commented 1 year ago

What does your flush log show?

danidorado commented 1 year ago

How can I output that?

tillkruss commented 1 year ago

Settings > Object Cache > Tools

danidorado commented 1 year ago
Screenshot 2023-05-07 at 21 49 49
tillkruss commented 1 year ago

It looks like your cache is getting flushed a lot, which is probably why your cache is relatively small.

danidorado commented 1 year ago

That is a single site, it shouldn't have been flushing everything; I also have "flush_network":"site".
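
For context, in Object Cache Pro that option typically lives in the WP_REDIS_CONFIG array in wp-config.php; a minimal sketch (the placement and surrounding values are assumptions, not this site's actual configuration):

define('WP_REDIS_CONFIG', [
    // ... connection, license and other settings ...
    'flush_network' => 'site', // only flush the current site's cache on a multisite network
]);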

It's the action triggered when you save a post with VC (Visual Composer / WPBakery).

After more than one day of issues, which I've tried to describe as best I can with my knowledge, I did the flush to try to get things back to work; that's the one you see on the screenshot that is not coming from the claudia user. It's working now, but I've experienced this behaviour over the last couple of weeks or so, since I bumped to the latest version. I also posted on Discord when it first happened to me.

relay-fpm.php is showing this now: Relay Used Memory: 39.7133 MB

danidorado commented 1 year ago

That username is only able to log into a single site on the multisite, so I doubt the assumption that the cache is small because it's flushed a lot.

danidorado commented 1 year ago

Hello, I hope you take this matter seriously because, as I said, the stats are really inconsistent and I'm not sure whether this is somehow affecting performance and stability.

Screenshot 2023-05-12 at 13 41 45 Screenshot 2023-05-12 at 13 41 51
danidorado commented 1 year ago
Screenshot 2023-05-12 at 13 54 05
danidorado commented 1 year ago

I saw there is an update, but I can't find the package for my PHP version and distribution.

tillkruss commented 1 year ago

What does the latest https://github.com/cachewerk/relay/blob/main/resources/relay-fpm.php show you?

danidorado commented 1 year ago
Screenshot 2023-05-12 at 23 48 18
danidorado commented 1 year ago

Hi Till, I sent you some messages on discord

danidorado commented 1 year ago

OK, I'll open your IP when you set a date and time to check on this. As I said, I want to be present while you're logged into my server.

And I'd like to get an answer about the pricing as well.

I've also updated Relay to 0.6.4. Thanks for the info.

danidorado commented 1 year ago

Several hours after the update:

Screenshot 2023-05-17 at 00 05 25
danidorado commented 1 year ago

i understand that we all are very busy with our duties and life, but this thing is not working good since more than a month now

tillkruss commented 1 year ago

i understand that we all are very busy with our duties and life, but this thing is not working good since more than a month now

What do you mean specifically with "this thing is not working good"?

danidorado commented 1 year ago

The proportion of hits and misses seems weird; I guess that is the reason you'd like to take a look inside the server. Also, the amount of memory used is sometimes super low compared with the allocated memory.

I could be wrong, it's just my perception.

tillkruss commented 1 year ago

Can you post a screenshot of the hit ratio from within your WordPress installation, under Settings > Object Cache?

danidorado commented 1 year ago
Screenshot 2023-05-18 at 01 46 32
danidorado commented 1 year ago

Hello, I hope you guys are having a great weekend.

I'd like to get your thoughts on what could be going on here. For around 2 months I've been monitoring the Relay stats, and the stats we've had over the last couple of months don't seem normal compared to the period from April/May 2022, when I first installed the library, up until I first posted in the Discord support channel that I was experiencing inconsistent data (something I see has since been deleted in order to move support here to GitHub).

I'm trying to understand why I'm asked to post something here and then get no feedback for 3 days, not even to say those stats are normal, which I truly believe they are not, because otherwise you wouldn't have written me a message on Discord asking for SSH access to my environment, telling me "This is useful", which I guess is useful for you to somehow identify ways to improve the library or understand behaviours in particular installations.

"This is useful. Can you give me SSH access?"

I kindly asked you to set a date for when you'd like to SSH into my servers, so I can understand what you are going to check and do on the server and be able to provide the info you might be looking for in the future, but I got no answer on that after my first mention on 13/05 and another mention on 16/05; instead, I only got an answer to a different question.

Aiming to provide constructive criticism: the support I'm receiving could be better, from my point of view. I feel like I'm treated in "this guy again..." mode and, honestly, I've also felt dodged on some of my questions. So if this is going to keep happening, I'd kindly ask you to cancel my Relay subscription, because in the current situation the memory usage never goes above 64 MB, whereas I was surpassing that limit easily months ago, and I even have a dozen more sites on the server now; so I'd better stick to the free version and save a good amount of money. We haven't been able to debug this and find a solution, or at least determine whether something is actually happening; so far I've just received vague answers and requests to post things that are not leading anywhere.

I've been a paying customer of OCP since 2020 and of Relay since its early launch. I think I deserve a little bit more attention, even though I understand I'm not GoDaddy, Pagely or any of these big accounts that probably get much more of your attention.

My apologies if sometimes I'm not using the right words to describe my problems, or for any other misunderstanding. I'm not a native English speaker and I do the best I can to express myself and communicate in a polite way, so again, sorry if my English made anything come across the wrong way.

Looking forward to hearing from you

Have a great weekend, all of you.

Kindest regards

Screenshot 2023-05-20 at 14 42 49