plone / guillotina

Python AsyncIO data API to manage billions of resources
https://guillotina.readthedocs.io/en/latest/

Fix: prevent caching objects bigger than max_cache_record_size #1085

Closed lferran closed 3 years ago

lferran commented 3 years ago

It seems that we are checking this restriction during the transaction, but not when synchronizing through the pubsub at transaction close time.

Therefore, we are storing in memory and sending to redis/memcached objects that are over the size limit.

I'm not 100% sure whether that's the best place to fix it, though, as it could also be done in BasicCache.fill_cache.
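To illustrate the idea, here is a minimal sketch (with hypothetical names, not the actual guillotina API) of applying the same size restriction at transaction close time, before the state is stored in the local cache or published to redis/memcached:

```python
# Hypothetical sketch: skip records over the configured limit when
# synchronizing the cache at transaction close. Names are illustrative only.

MAX_CACHE_RECORD_SIZE = 1024 * 1024  # assumed 1 MiB limit for illustration


def fits_in_cache(pickled_state: bytes, limit: int = MAX_CACHE_RECORD_SIZE) -> bool:
    # Apply the same restriction already enforced during the transaction.
    return len(pickled_state) <= limit


async def synchronize(cache, pubsub, key: str, pickled_state: bytes) -> None:
    # Called at transaction close: only cache and publish records that
    # fit under the size limit; oversized objects are skipped entirely.
    if not fits_in_cache(pickled_state):
        return
    await cache.set(key, pickled_state)
    await pubsub.publish("invalidations", key)
```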

Once this is correctly fixed, I will port it to master.

vangheem commented 3 years ago

It makes sense to put it where you have it!

masipcat commented 3 years ago

Good catch!

Just one thing: I think that with your changes check_state_size no longer makes sense (it isn't used anymore). Maybe we could make max_cache_record_size nullable in case someone doesn't want the limit.

lferran commented 3 years ago

> Good catch!
>
> Just one thing: I think that with your changes check_state_size no longer makes sense (it isn't used anymore). Maybe we could make max_cache_record_size nullable in case someone doesn't want the limit.

Yes! I agree with your idea. I'd prefer to do it in a separate PR as a chore, though (maybe someone is actually using check_state_size).
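For context, a nullable limit could behave roughly as follows (a sketch only; the actual check_state_size signature in guillotina may differ):

```python
from typing import Optional


def state_size_ok(pickled_state: bytes, max_cache_record_size: Optional[int]) -> bool:
    # Hypothetical helper: a None limit means "no restriction", so any
    # record may be cached; otherwise enforce the configured maximum.
    if max_cache_record_size is None:
        return True
    return len(pickled_state) <= max_cache_record_size
```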

lferran commented 3 years ago

ready for review

(I don't know why the codecov step is failing :cry:)

masipcat commented 3 years ago

> (I don't know why the codecov step is failing :cry:)

Maybe it's because of the added/deleted empty lines in guillotina/db/transaction.py.

vangheem commented 3 years ago

5.3.66 released