carbynestack / castor

Carbyne Stack tuple store for secure multiparty computation
https://carbynestack.io
Apache License 2.0

Concurrency issues when reserving Tuples in parallel #10

Open kindlich opened 2 years ago

kindlich commented 2 years ago

Hey there,

In some tests we found that requesting tuples in parallel can run into timing problems: if the latter reservation gets applied before the prior one, applying the prior one fails with the error

Referenced tuples have already been consumed or reserved.

Since the issue is about timing, the example below can succeed or fail, but after a few attempts you usually hit the error. I'm using GNU Parallel to create the parallel requests:

# Create 2 secrets in parallel (amphora needs input-mask tuples so it fails when reserving these)
parallel java -jar cs.jar amphora create-secret {} ::: {1..2}

Below is a graphic of what I think is happening:

  1. Two requests to reserve tuples are made in parallel.
  2. The master Castor sends the 2nd tuple reservation to the slave Castors first.
  3. When the master then sends the 1st reservation to the slave Castor, the slave fails.

[Figure: Castor_Parallel_Issue.drawio]
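
For illustration, here is a minimal, purely hypothetical Java sketch of marker-based reservation bookkeeping that reproduces the described failure. It assumes a reservation referencing tuples below the chunk's current reservation marker is rejected; the class, method, and field names are made up and are not the actual Castor code.

// Hypothetical sketch: reservations advance a single reservation marker per
// tuple chunk, and a reservation that references tuples below that marker is
// rejected. Names and logic are illustrative only, not Castor's implementation.
final class TupleChunkMarkersSketch {
  private long reservedMarker = 0; // tuples below this index count as reserved/consumed

  // Applied on the slave when the master distributes a reservation.
  synchronized void applyReservation(long startIndex, long count) {
    if (startIndex < reservedMarker) {
      // The situation from the diagram: reservation #2 arrived first and
      // already moved the marker past reservation #1's range.
      throw new IllegalStateException(
          "Referenced tuples have already been consumed or reserved.");
    }
    reservedMarker = startIndex + count;
  }

  public static void main(String[] args) {
    TupleChunkMarkersSketch chunk = new TupleChunkMarkersSketch();
    chunk.applyReservation(100, 100); // reservation #2 is applied first
    chunk.applyReservation(0, 100);   // reservation #1 now fails with the error above
  }
}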

BR, kindlich

sbckr commented 2 years ago

Hi @kindlich. Thank you for the detailed description.

This is definitely not the expected behaviour and will cause inconsistency or make some tuples inaccessible.

We will look into this issue and try to improve the handling of reservations as soon as possible.

As a quick workaround until this problem is fixed, you could, for example, wrap your command in a simple retry loop like the following:

# retry secret creation up to 3 times; print the secret id and break the loop if the CLI returns exit code 0, otherwise sleep for 50-99 ms
parallel "for i in 1 2 3; do o=\$(java -jar cs.jar amphora create-secret {} 2> /dev/null); [ \$? -eq 0 ] && echo \"\$o\" &&  break || sleep \$(printf \"0.%03d\" \$((50+\$RANDOM%50))); done" ::: {1..2}

However, please note that this is only a workaround for the symptom and does not address the underlying race condition.

Best regards, Sebastian

sbckr commented 2 years ago

If you are having problems with inconsistencies, i.e. Castor is no longer able to create or share reservations, you can also try clearing all reservations from the cache and resetting the consumption and reservation markers to 0. The following script performs this task for a standard deployment:

cat <<"EOF" | bash -s
#!/bin/bash
# Reset Castor's reservation state on both Virtual Cloud Providers of a standard (kind-based) deployment.
for vcp in starbuck apollo
do
  # Switch to the kind cluster of the respective Virtual Cloud Provider.
  kubectl config use-context kind-"$vcp"
  # Delete all cached Castor reservations from Redis.
  kubectl exec $(kubectl get pods | awk '/^cs-redis/{print $1}') -- bash -c "redis-cli --scan --pattern castor* | xargs redis-cli del"
  # Reset the consumption and reservation markers in Castor's database.
  kubectl exec $(kubectl get pods | awk '/cs-postgres/{print $1}') -- psql -U cs castor -c "UPDATE tuple_chunk_meta_data SET consumed_marker = 0, reserved_marker = 0;"
done
EOF

This way, you may not have to redeploy the entire Carbyne Stack Virtual Cloud.

:warning: While this might be acceptable as long as Carbyne Stack is in an alpha stage and should only be used in test and development environments anyway, I would still like to point out that resetting the markers leads to a highly insecure environment, since tuples can be reused. It must never be applied in a production environment.

sbckr commented 2 years ago

In an upcoming version, TupleChunkMetaData will be replaced by TupleChunkFragment, which will no longer refer to an entire tuple chunk using only two markers for reservation and consumption, but will instead refer to segments (hence fragments) of tuple sequences within a chunk.

This concept allows for more fine-grained handling of tuples and reservations, and would also allow for extensions, such as releasing tuples that have already been reserved but never retrieved/used.

The flows for the individual processes could be as follows:

  - Create Reservation (Primary)
  - Confirm Reservation (Secondary)
  - Consume Reservation (Both)
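
To make the idea concrete, here is a minimal, purely illustrative Java sketch of fragment-based bookkeeping; the class and field names below are made up for this example and are not the actual Castor implementation.

// Illustrative only: a fragment covers a sub-range of a tuple chunk and is
// reserved/consumed as a whole, instead of moving two markers on the chunk.
import java.util.UUID;

enum FragmentState { AVAILABLE, RESERVED, CONSUMED }

final class TupleChunkFragmentSketch {
  final UUID chunkId;    // tuple chunk this fragment belongs to
  final long startIndex; // first tuple index covered by this fragment
  final long endIndex;   // exclusive end index
  FragmentState state = FragmentState.AVAILABLE;
  String reservationId;  // set once the fragment gets reserved

  TupleChunkFragmentSketch(UUID chunkId, long startIndex, long endIndex) {
    this.chunkId = chunkId;
    this.startIndex = startIndex;
    this.endIndex = endIndex;
  }

  // Reserving a fragment only touches this sub-range; other fragments of the
  // same chunk can be reserved concurrently, and a reserved-but-never-used
  // fragment could later be released back to AVAILABLE.
  void reserve(String reservationId) {
    if (state != FragmentState.AVAILABLE) {
      throw new IllegalStateException("Fragment already reserved or consumed");
    }
    this.reservationId = reservationId;
    this.state = FragmentState.RESERVED;
  }
}
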
strieflin commented 2 years ago

Is there anything left to do here @sbckr? Otherwise, let's close this issue.

kindlich commented 2 years ago

Well, there is still an issue if too many parallel requests come in. Currently the tuple chunk is split into N fragments.

If more than N requests come in at once, the master will block some HTTP calls since no DB entities are available. The HTTP connection stays open, but the request takes a bit longer. Meanwhile, the clients will time out with "No released tuple reservation" found, because the master hasn't processed that request yet.

Let's imagine we have 1 tuple fragment available but 2 concurrent requests (a rough sketch of this timing follows the list):

  1. Request 1 (master) comes in, master starts working --> locks fragment
  2. Request 1 (slave) comes in, slave starts checking for released reservations
  3. Request 2 (master) comes in, no free tuple fragment --> waiting
  4. Request 2 (slave) comes in, slave starts checking for released reservations --> timeout timer started
  5. Reservation 1 (master) distributed to all clients
  6. Request 1 (master) done --> unlocks fragment
  7. Request 2 (slave) timed out, since no reservation was found within the timeout
  8. Request 2 (master) starts
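
To illustrate, here is a small, purely hypothetical Java sketch of the timing mismatch, assuming the slave polls for a released reservation until a fixed timeout while the master is still queueing the request for lack of a free fragment; all names are made up and not Castor's actual code.

import java.time.Duration;
import java.time.Instant;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the described behaviour: the slave waits a bounded
// time for the master to share a released reservation; if the master is still
// blocked waiting for a free fragment, the slave-side wait times out first.
final class ReservationWaitSketch {
  private final ConcurrentHashMap<String, String> releasedReservations = new ConcurrentHashMap<>();

  // Called once the master has distributed and released a reservation.
  void onReservationReleased(String requestId, String reservation) {
    releasedReservations.put(requestId, reservation);
  }

  // Slave-side handling of an incoming request: poll until the reservation
  // shows up or the timeout expires ("No released tuple reservation" found).
  Optional<String> awaitReleasedReservation(String requestId, Duration timeout) throws InterruptedException {
    Instant deadline = Instant.now().plus(timeout);
    while (Instant.now().isBefore(deadline)) {
      String reservation = releasedReservations.get(requestId);
      if (reservation != null) {
        return Optional.of(reservation); // request 1 (slave) ends up here
      }
      Thread.sleep(50); // simple fixed-interval polling for the sketch
    }
    return Optional.empty(); // request 2 (slave) ends up here in the scenario above
  }
}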

Though the question is: is this something you just have to handle on the client side (e.g. with higher timeouts or retries), or should this be handled at another level?

github-actions[bot] commented 1 year ago

This issue has been marked stale because it has been open for 90 days with no activity. It will be automatically closed in 30 days if no further activity occurs.

strieflin commented 1 year ago

@sbckr: Anything left here?