Open kindlich opened 2 years ago
Hi @kindlich. Thank you for the detailed description.
This is definitely not the expected behaviour; it can cause inconsistencies or make some tuples inaccessible.
We will look into this issue and try to improve the handling of reservations as soon as possible.
As a quick workaround until this problem is fixed, you could, for example, wrap your command in simple retry logic like the following:

```shell
# Retry secret creation up to 3 times: print the secret id and break the loop if
# the CLI returns exit code 0; otherwise sleep for 50..100 ms before retrying.
parallel "for i in 1 2 3; do o=\$(java -jar cs.jar amphora create-secret {} 2> /dev/null); [ \$? -eq 0 ] && echo \"\$o\" && break || sleep \$(printf \"0.%03d\" \$((50+\$RANDOM%50))); done" ::: {1..2}
```
However,
Best regards, Sebastian
If you are having problems with inconsistencies where Castor is no longer able to create or share reservations, you can also try clearing all reservations from the cache and resetting the consumption and reservation markers to 0. The following script performs this task for a standard deployment:
```shell
cat <<"EOF" | bash -s
#!/bin/bash
for vcp in starbuck apollo
do
  kubectl config use-context kind-"$vcp"
  # Delete all Castor keys from the Redis cache (quote the pattern so it is not
  # expanded by the shell inside the container).
  kubectl exec $(kubectl get pods | awk '/^cs-redis/{print $1}') -- bash -c "redis-cli --scan --pattern 'castor*' | xargs redis-cli del"
  # Reset the consumption and reservation markers in the Postgres database.
  kubectl exec $(kubectl get pods | awk '/cs-postgres/{print $1}') -- psql -U cs castor -c "UPDATE tuple_chunk_meta_data SET consumed_marker = 0, reserved_marker = 0;"
done
EOF
```
This way, you may not have to redeploy the entire Carbyne Stack Virtual Cloud.
:warning: While this might be acceptable as long as Carbyne Stack is in an alpha stage and should only be used in test and development environments anyway, I would still like to point out that this leads to a highly insecure environment, as tuples can be reused. It should never be applied in a production environment.
In an upcoming version, TupleChunkMetaData will be replaced by TupleChunkFragment, which will no longer refer to an entire tuple chunk using only two markers for reservation and consumption, but will instead refer to segments (hence "fragments") of the tuple sequences within a chunk.
This concept allows for more fine-grained handling of tuples and reservations, and would also allow for extensions such as releasing tuples that have been reserved but never retrieved/used.
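To illustrate the idea, here is a minimal sketch of fragment-based bookkeeping (my own toy construction, not the actual Castor schema or code): each fragment has its own state, so reservations can be taken, rejected, or released independently instead of moving two chunk-wide markers.

```shell
#!/bin/bash
# Toy model: per-fragment states instead of two markers per chunk.
# Fragment ids and states here are hypothetical.
declare -A fragment_state
fragment_state=([frag-0]=AVAILABLE [frag-1]=AVAILABLE [frag-2]=RESERVED)

# Reserve any AVAILABLE fragment; its id is returned in RESERVED_ID.
reserve_fragment() {
  local id
  RESERVED_ID=""
  for id in "${!fragment_state[@]}"; do
    if [ "${fragment_state[$id]}" = AVAILABLE ]; then
      fragment_state[$id]=RESERVED
      RESERVED_ID=$id
      return 0
    fi
  done
  return 1  # no free fragment: caller must wait, retry, or fail
}

# Release a fragment that was reserved but never retrieved/used.
release_fragment() {
  fragment_state[$1]=AVAILABLE
}
```

With this shape, "releasing tuples that have already been reserved but never retrieved" is just flipping one fragment back to AVAILABLE, without touching the rest of the chunk.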
The flows for the individual processes could be as follows:

| Create Reservation (Primary) | Confirm Reservation (Secondary) | Consume Reservation (Both) |
|---|---|---|
Is there anything left to do here @sbckr? Otherwise, let's close this issue.
Well, there is still an issue if too many parallel requests come in.
Currently the Tuple Chunk is split into N Fragments.
If more than N requests come in at once, the master will block some HTTP calls since no DB entities are available. The HTTP connection stays open, but the request takes a bit longer. Meanwhile, the clients will time out with "No released tuple reservation" found, because the master hasn't processed their request yet.
Let's imagine we have 1 Tuple Fragment available, but 2 concurrent requests:
Though the question is, is this something where you just have to
Or should this be handled at another level?
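To make the race concrete, here is a toy model (my own construction, not Castor code): the single available fragment is a lock file, the server-side hold is a sleep, and the client timeout is flock's wait limit. With one fragment and two concurrent requests, the second request always times out:

```shell
#!/bin/bash
# Toy model: one fragment, two concurrent requests. The loser waits at most 1s
# (the "client timeout") while the winner holds the fragment for 2s, so the
# second request always times out, mirroring the behaviour described above.
lockfile=$(mktemp)

request() {
  local name=$1
  if flock -w 1 "$lockfile" -c "sleep 2; echo '$name got the fragment'"; then
    :
  else
    echo "$name timed out waiting for a fragment"
  fi
}

out=$( { request A & request B & wait; } )
echo "$out"
```

Which request wins is a race, but exactly one gets the fragment and one times out, because the 2s hold exceeds the 1s wait. The mismatch in the real system is the same shape: the server is willing to wait longer than the client is.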
This issue has been marked stale because it has been open for 90 days with no activity. It will be automatically closed in 30 days if no further activity occurs.
@sbckr: Anything left here?
Hey there,
in some tests we found that requesting tuples in parallel can run into timing problems: if the later reservation gets applied before the earlier one, then trying to apply the earlier one will result in an error.
Since the issue is about timing, the example below can succeed or fail, but after a few attempts you usually get the error. I'm using GNU Parallel to create parallel requests:
Below is a graphic of what I think is the case:
1. Two requests to reserve tuples are made in parallel.
2. The master Castor sends the 2nd tuple reservation to the slave Castors first.
3. When the master then sends the 1st reservation to the slave Castor, the slave will fail.
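A toy sketch of this failure mode (the single-marker semantics here are my assumption for illustration; Castor's actual logic may differ): if the follower tracks one reserved_marker per chunk and rejects reservations starting below it, then applying the 2nd reservation first moves the marker past the 1st reservation's range.

```shell
#!/bin/bash
# Toy model: a follower tracks a single reserved_marker per chunk and rejects
# any reservation that starts below it (hypothetical semantics).
reserved_marker=0

apply_reservation() {
  local start=$1 count=$2
  if [ "$start" -lt "$reserved_marker" ]; then
    echo "ERROR: reservation starting at $start conflicts with marker $reserved_marker"
    return 1
  fi
  reserved_marker=$((start + count))
  echo "reserved tuples $start..$((start + count - 1))"
}

apply_reservation 5 5          # 2nd reservation arrives first: succeeds, marker -> 10
apply_reservation 0 5 || true  # 1st reservation arrives late: rejected
```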
BR, kindlich