Parliament has a many-readers/single-writer model of concurrency. In other words, if Parliament receives several requests at once, and all of them are read-only requests (such as queries), then Parliament will execute those requests in parallel. Whenever Parliament receives a request that requires writing (such as an insert, a SPARQL Update, or a create/drop graph operation), that request waits until all other executing requests have finished, and then it executes. While it is executing, all other requests wait in the queue until it is finished.
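To illustrate the pattern in the abstract (this is a hedged sketch of the many-readers/single-writer model, not Parliament's actual implementation; the class and method names below are hypothetical), a Java `ReentrantReadWriteLock` behaves the same way: read-only requests share the read lock and run in parallel, while a write request takes the exclusive write lock, waits for in-flight readers to drain, and blocks everything else until it completes.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ManyReadersSingleWriter {
	// Fair mode, so a waiting writer is not starved by a steady stream of readers.
	private final ReadWriteLock lock = new ReentrantReadWriteLock(true);

	/** Read-only requests (queries) may hold the read lock concurrently. */
	public String executeQuery(String sparql) {
		lock.readLock().lock();
		try {
			return runReadOnly(sparql);   // placeholder for the actual query
		} finally {
			lock.readLock().unlock();
		}
	}

	/**
	 * Writes (insert, SPARQL Update, create/drop graph) run one at a time,
	 * and only after all in-flight readers have finished.
	 */
	public void executeUpdate(String sparqlUpdate) {
		lock.writeLock().lock();
		try {
			runUpdate(sparqlUpdate);      // placeholder for the actual update
		} finally {
			lock.writeLock().unlock();
		}
	}

	private String runReadOnly(String sparql) { return "results"; }
	private void runUpdate(String sparqlUpdate) { }
}
```

The fair-mode flag in the constructor roughly matches the queuing behavior described above, where a pending write holds off subsequent requests until it finishes.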
This model works well in most situations, where the number of concurrent requests is not terribly large and read-only requests dominate. A dozen or two concurrent queries is not uncommon in my experience and does not cause problems. I would expect Parliament to handle 128 concurrent read-only requests correctly, but the performance may not be as high as one would like. This is mainly because all of those requests are ultimately reading from the same I/O device, so caching becomes the key issue at concurrency levels this high.
Parliament uses four pools of memory:
Achieving the right balance between these pools of memory is tricky, and I'm afraid I don't have any concrete advice, especially in the case of 128 concurrent requests. However, there are a few things I can say:
As before, please leave this issue open as a reminder to me to improve the documentation.
Documentation improvements incorporated on 11/1/2021, to appear in version 2.8.0.
I want to run queries with up to 128 parallel/concurrent workers. Can you recommend a specific configuration, please? I want to measure the QpS (queries per second) of several triple stores under parallelism (querying by more than one user at a time).
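For what it's worth, a simple way to measure QpS with a fixed number of concurrent workers is to fire queries from a thread pool against the SPARQL endpoint over HTTP and divide the number of completed queries by the elapsed wall-clock time. The sketch below is only an illustration: the endpoint URL, query text, and worker/query counts are assumptions you would adjust for your own deployment, and it uses nothing beyond the standard JDK (Java 11+) HTTP client.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class QpsBenchmark {
	// Assumed endpoint and query; adjust both for your deployment.
	private static final String ENDPOINT = "http://localhost:8089/parliament/sparql";
	private static final String QUERY = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
	private static final int WORKERS = 128;
	private static final int QUERIES_PER_WORKER = 100;

	public static void main(String[] args) throws Exception {
		HttpClient client = HttpClient.newHttpClient();
		ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
		AtomicLong completed = new AtomicLong();

		// Standard SPARQL protocol: POST the query as a form-encoded parameter.
		String body = "query=" + URLEncoder.encode(QUERY, StandardCharsets.UTF_8);
		HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT))
			.header("Content-Type", "application/x-www-form-urlencoded")
			.header("Accept", "application/sparql-results+json")
			.POST(HttpRequest.BodyPublishers.ofString(body))
			.build();

		long start = System.nanoTime();
		CountDownLatch done = new CountDownLatch(WORKERS);
		for (int i = 0; i < WORKERS; ++i) {
			pool.submit(() -> {
				try {
					for (int j = 0; j < QUERIES_PER_WORKER; ++j) {
						HttpResponse<String> resp = client.send(
							request, HttpResponse.BodyHandlers.ofString());
						if (resp.statusCode() == 200) {
							completed.incrementAndGet();
						}
					}
				} catch (Exception ex) {
					ex.printStackTrace();
				} finally {
					done.countDown();
				}
			});
		}
		done.await();
		double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
		pool.shutdown();
		System.out.printf("%d queries in %.1f s => %.1f QpS%n",
			completed.get(), seconds, completed.get() / seconds);
	}
}
```

In practice you would also run a few untimed warm-up rounds first, so that the caches discussed above are populated before measurement begins.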