To force sequential execution of changes, the jsoncs3 share manager calls m.Lock() in every function. This forces requests to execute sequentially, to the point where the number of requests becomes a bottleneck. In essence, even read requests cannot be executed concurrently.
This can be reproduced by increasing the number of ListReceivedShares requests per second while creating shares with the same group of users. The List requests lock the jsoncs3 share manager very often. The new shares have to update the group's received shares list, which keeps the Lock held for a comparatively long time. This significantly decreases the number of requests that can be handled per second and drives response times up until timeouts occur.
Sharing with the same group causes the bottleneck to manifest earlier, because the request has to write the received shares list to disk. To do that, it optimistically tries to write the file with an If-Match etag header, and we currently retry that only once.
We have a test case in cdperf that covers this scenario. You will have to create an env var like this:
Disabling the SEED_* env vars makes k6 read users from pool files. Remember: having a single group as the recipient causes writes to the same file, which we currently retry only once.
Then you can source .envrc and run the k6 sharing scenario, which will use 4 users to execute ten iterations of this sharing scenario.
Just disabling the locks causes failures: