Our Postgres db keeps running out of RAM.
We have an infinite stream of events from Kafka; we group them with groupBy(number = 1000) and then go to the db and fetch each group in one batch.
We have a few queries which are parametrized with lists, like:
sql """ select ...
where id in (${SomeCodecs.id.list(ids.size)});
"""
We use a session pool with prepareCacheSize = 1024, and we have about 25 queries.
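The session setup is roughly like this (a sketch with placeholder connection details; the prepared-statement cache size mentioned above is passed alongside these parameters, under whatever name the skunk version in use exposes):

import cats.effect.{IO, Resource}
import natchez.Trace.Implicits.noop // skunk 0.x wants a Trace in scope; newer versions use otel4s instead
import skunk.Session

// placeholder connection details; max = 10 is illustrative
// note: prepared statements live per connection on the Postgres side, so each pooled connection accumulates its own set
val pool: Resource[IO, Resource[IO, Session[IO]]] =
  Session.pooled[IO](
    host     = "localhost",
    port     = 5432,
    user     = "app",
    database = "app",
    password = Some("secret"),
    max      = 10
  )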
I think what happens is that we end up with more prepared statements (let's say about 25 x 1000) than the cache size, and when the local cache runs out of space it just drops the statement from the local cache but never deallocates it on the server.
When the same query with the same number of list parameters comes up again, skunk just prepares it again since it's no longer in the cache, so the number of prepared statements on the Postgres server keeps growing until it either dies or we restart the app.
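One way to watch this from the application side (just a sketch; pg_prepared_statements is a standard Postgres view, but it only lists the statements owned by the current connection, so it has to run on the same pooled session that does the batch fetches):

import cats.effect.IO
import skunk._
import skunk.codec.all._
import skunk.implicits._

// counts the prepared statements the current connection holds on the server side;
// run it periodically to see the number grow well past the local cache size
val countPrepared: Query[Void, Long] =
  sql"select count(*) from pg_prepared_statements".query(int8)

def logPreparedCount(s: Session[IO]): IO[Unit] =
  s.unique(countPrepared).flatMap(n => IO.println(s"server-side prepared statements: $n"))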
There are a few things I could/will do on our side, but I thought it made sense to report this as a bug.
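One of those caller-side mitigations would be to pad every batch to a fixed size, so that ids.size is constant and each query only ever maps to one prepared statement. A rough sketch in plain Scala (FixedBatchSize and padId are illustrative, padId being a sentinel value that matches no real row):

// pad the id list to a fixed arity so `.list(FixedBatchSize)` always produces the same SQL text
val FixedBatchSize = 1000

def padIds(ids: List[Long], padId: Long): List[Long] =
  ids ++ List.fill(FixedBatchSize - ids.size)(padId)

// usage: build the query once for FixedBatchSize placeholders and always pass padIds(batch, -1L)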