Closed quantranhong1999 closed 2 weeks ago
I have tried multiple ideas but still cannot improve the Postgres blob store concurrent test run time.
I think it is because of the blocking nature of a Postgres connection: one connection can only handle one query at a time, so queued requests must wait for the current query's response before they can proceed. Multiple concurrent bytea
requests, especially when the blobs are big, therefore quickly fill up the request queue and lead to a timeout.
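To illustrate the point above, here is a minimal, self-contained sketch (not James code) that models a single Postgres connection as a single-threaded executor: concurrent reads get serialized, so total latency grows with the queue depth rather than with the cost of one query. The connection, query time, and read count are all illustrative assumptions.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SerializedConnectionDemo {
    public static void main(String[] args) {
        // Assumption: one Postgres connection processes queries strictly one
        // at a time, so we model it as a single-threaded executor.
        ExecutorService connection = Executors.newSingleThreadExecutor();
        int concurrentReads = 5;     // illustrative number of concurrent bytea reads
        long queryMillis = 100;      // illustrative cost of one large bytea read

        long start = System.nanoTime();
        CompletableFuture<?>[] reads = new CompletableFuture<?>[concurrentReads];
        for (int i = 0; i < concurrentReads; i++) {
            reads[i] = CompletableFuture.runAsync(() -> {
                try {
                    Thread.sleep(queryMillis); // stand-in for the blocking query
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, connection);
        }
        CompletableFuture.allOf(reads).join();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // Latency is roughly concurrentReads * queryMillis, because the
        // "connection" serializes the work instead of running it in parallel.
        System.out.println("elapsed ~" + elapsedMillis + " ms for "
                + concurrentReads + " queued reads");
        connection.shutdown();
    }
}
```

With a fixed per-test timeout, it only takes enough queued reads for this linear growth to cross the limit, which matches the observed test failures.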
There are complaints and benchmarks about Postgres's performance as a blob store. Using Postgres as a blob store does not seem to be best practice.
Therefore I propose not to invest more effort in this and just increase the timeout while testing: https://github.com/apache/james-project/pull/2301
Increase the jOOQ timeout + increase the test case timeout (https://github.com/apache/james-project/pull/2279/commits/3af6fe60c5090ddd69cc784f3563cace306643cd)? If yes, you can cherry-pick this commit and push it in a single PR.
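For reference, raising jOOQ's query timeout is typically done through `Settings.withQueryTimeout`, which every query built from the resulting `DSLContext` inherits. The sketch below is a configuration fragment, not the change from the linked commit: the connection URL, dialect wiring, and the 60-second value are all illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

public class JooqTimeoutConfig {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL; the real test datasource will differ.
        Connection connection =
                DriverManager.getConnection("jdbc:postgresql://localhost:5432/james");

        // Per-query timeout in seconds; 60 is an illustrative value,
        // not necessarily the one chosen in the linked commit.
        Settings settings = new Settings().withQueryTimeout(60);

        // Queries built from this DSLContext inherit the raised timeout.
        DSLContext dsl = DSL.using(connection, SQLDialect.POSTGRES, settings);
    }
}
```

The test case timeout is raised separately in the test framework; see the linked commit for the actual change.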
Done. Thank you.
Why
Some PostgresBlobStore concurrent tests failed recently...
cf https://ci-builds.apache.org/blue/organizations/jenkins/james%2FApacheJames/detail/PR-2290/2/tests/
While increasing the timeout could make the tests more stable, it would not benefit production usage.
Let's try to see if we can speed up the PostgresBlobStore.