Closed lferran closed 3 years ago
It makes sense to put it where you have it!
Good catch!
Just one thing: I think with your changes the check_state_size doesn't make sense anymore (it's not used). Maybe we can make max_cache_record_size nullable in case someone doesn't want the limit.
Yes, I agree with your idea. I'd prefer doing it in a separate PR though, as a chore (maybe someone is actually using check_state_size).
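For reference, a nullable limit could look roughly like this. This is only a sketch of the idea discussed above; the setting name max_cache_record_size comes from the thread, while the helper name and signature are hypothetical:

```python
def check_record_size(pickled: bytes, max_cache_record_size=None) -> bool:
    """Hypothetical helper: accept a record if it fits under the size limit.

    A None limit means "no limit", so every record is accepted.
    """
    if max_cache_record_size is None:
        return True
    return len(pickled) <= max_cache_record_size
```

With this shape, someone who doesn't want the limit just leaves the setting unset, and existing configurations with a numeric limit keep their current behavior.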
ready for review
(don't know why codecov step is failing :cry:)
maybe it's because the added/deleted empty lines in guillotina/db/transaction.py
5.3.66 released
Seems that we are checking this restriction during the transaction, but not when synchronizing through the pubsub at transaction close time.
Therefore, we are storing in memory and sending to redis/memcached objects that are over the size limit.
Not 100% sure if that's the best way to fix it though, as it could also be done in BasicCache.fill_cache. Once this is correctly fixed, I will port it to master.
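To illustrate the fix being described, the same size restriction applied during the transaction would also be applied when collecting records at transaction close, before pushing them to the in-memory cache and to the pubsub channel. This is a minimal standalone sketch, not guillotina's actual code; should_cache and records_to_sync are hypothetical names, and only max_cache_record_size comes from the project:

```python
import pickle

def should_cache(value, max_cache_record_size=None) -> bool:
    """Return True if the serialized record fits under the size limit.

    A None limit disables the check entirely (the nullable setting
    suggested above).
    """
    if max_cache_record_size is None:
        return True
    return len(pickle.dumps(value)) <= max_cache_record_size

def records_to_sync(records: dict, max_cache_record_size=None) -> dict:
    """Filter out oversized records before syncing them at transaction close.

    Records over the limit are neither kept in memory nor sent to
    redis/memcached through the pubsub channel.
    """
    return {
        key: value
        for key, value in records.items()
        if should_cache(value, max_cache_record_size)
    }
```

The same filtering could alternatively live in BasicCache.fill_cache, as mentioned above; the trade-off is whether the check happens once where records are gathered or inside each cache implementation.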