As discovered while implementing tuple streaming support in ephemeral (carbynestack/ephemeral#27), tuples are reserved, and thus consumed, twice under conditions that have not yet been analyzed in detail.
Reproduce Issue
Set up a new two-party setting with tuple streaming enabled, as introduced by carbynestack/ephemeral#27
Upload tuples to castor (at least GFP input masks and multiplication triples)
Define a program that uses multiple threads:
```bash
cat << 'EOF' > test_multi.mpc
# Prologue to read in the inputs
port = regint(10000)
listen(port)
socket_id = regint()
acceptclientconnection(socket_id, port)
v = sint.read_from_socket(socket_id, 2)
# The logic
a = MemValue(v[0])
b = MemValue(v[1])
resp = Array(10, sint)
@for_range_multithread(2, 1, 10)
def _(i):
    resp[i] = i * a * b
# Epilogue to return the outputs
sint.write_to_socket(socket_id, resp)
EOF
```
Expectation
The computation returns a new amphora secret (✔) and tuples are consumed for each of the threads (here 2000 gfp multiplication triples in total) in ephemeral (❌).
Observation
Ephemeral returned the expected result
Castor lists only 1000 gfp multiplication triples consumed:
- ephemeral
```bash
2022-07-28T05:41:53.037Z DEBUG io/tuple_streamer.go:230 Fetched new tuples from Castor {"gameID": "da04b169-b95b-462b-b7fe-8cadb89d6e66", "TupleType": {"Name":"MULTIPLICATION_TRIPLE_GFP","PreprocessingName":"Triples","SpdzProtocol":{"Descriptor":"SPDZ gfp","Shorthand":"p"}}, "ThreadNr": 1, "RequestID": "fc73125d-6d77-3fe1-8c75-2198a1e17c3d"}
2022-07-28T05:41:53.112Z DEBUG io/tuple_streamer.go:230 Fetched new tuples from Castor {"gameID": "da04b169-b95b-462b-b7fe-8cadb89d6e66", "TupleType": {"Name":"MULTIPLICATION_TRIPLE_GFP","PreprocessingName":"Triples","SpdzProtocol":{"Descriptor":"SPDZ gfp","Shorthand":"p"}}, "ThreadNr": 2, "RequestID": "1f17caa0-6b61-357e-8a4a-e25caa209d47"}
```
- castor
```bash
[...]
2022-07-28 05:41:52.737 DEBUG 1 --- [io-10100-exec-7] i.c.c.s.p.t.MinioTupleStore : Starting download from S3 for key 5e8c28ae-0054-4e31-a23c-8327f01d8b15 from byte 193920 to byte 289920
2022-07-28 05:41:52.738 DEBUG 1 --- [io-10100-exec-5] i.c.c.s.r.ReservationRestController : Received update for reservation #1f17caa0-6b61-357e-8a4a-e25caa209d47_multiplicationtriple_gfp to status UNLOCKED
2022-07-28 05:41:52.739 DEBUG 1 --- [io-10100-exec-5] i.c.c.s.p.c.ReservationCachingService : updating reservation 1f17caa0-6b61-357e-8a4a-e25caa209d47_multiplicationtriple_gfp
2022-07-28 05:41:52.740 DEBUG 1 --- [io-10100-exec-5] i.c.c.s.p.c.ReservationCachingService : object in cache at castor-reservation-store::1f17caa0-6b61-357e-8a4a-e25caa209d47_multiplicationtriple_gfp is Reservation(reservationId=1f17caa0-6b61-357e-8a4a-e25caa209d47_multiplicationtriple_gfp, tupleType=multiplicationtriple_gfp, reservations=[ReservationElement(tupleChunkId=5e8c28ae-0054-4e31-a23c-8327f01d8b15, reservedTuples=1000, startIndex=2020)], status=LOCKED)
2022-07-28 05:41:52.741 DEBUG 1 --- [io-10100-exec-5] i.c.c.s.p.c.ReservationCachingService : reservation updated
2022-07-28 05:41:52.768 DEBUG 1 --- [o-10100-exec-10] i.c.c.s.p.t.MinioTupleStore : Starting download from S3 for key 5e8c28ae-0054-4e31-a23c-8327f01d8b15 from byte 193920 to byte 289920
2022-07-28 05:45:19.804 DEBUG 1 --- [ool-2-thread-22] i.c.c.s.d.WaitForReservationCallable : No reservation was found for id 7b2b3571-bc1e-4de4-a603-67a23b6fa219_inputmask_gfp.
2022-07-28 05:45:19.859 DEBUG 1 --- [ool-2-thread-22] i.c.c.s.d.WaitForReservationCallable : No reservation was found for id 7b2b3571-bc1e-4de4-a603-67a23b6fa219_inputmask_gfp.
2022-07-28 05:45:19.864 DEBUG 1 --- [io-10100-exec-5] i.c.c.s.p.c.ReservationCachingService : persisting reservation Reservation(reservationId=7b2b3571-bc1e-4de4-a603-67a23b6fa219_inputmask_gfp, tupleType=inputmask_gfp, reservations=[ReservationElement(tupleChunkId=c3a4bbd8-7517-43e9-9712-67e436e57854, reservedTuples=20, startIndex=20)], status=LOCKED)
2022-07-28 05:45:19.865 DEBUG 1 --- [io-10100-exec-5] i.c.c.s.p.c.ReservationCachingService : put in database at castor-reservation-store::7b2b3571-bc1e-4de4-a603-67a23b6fa219_inputmask_gfp
[...]
```
Analyzing these logs, it can be seen that ephemeral fetches tuples from castor for two different threads using the reservation (request) IDs fc73125d-6d77-3fe1-8c75-2198a1e17c3d and 1f17caa0-6b61-357e-8a4a-e25caa209d47. Both requests are processed by castor independently but reference the exact same tuples. As a result, the same tuples are consumed twice and therefore counted only once for consumption.
carbynestack/carbynestack#40 resolves this issue: a quite old version of castor was used in which concurrency issue #10 had not yet been resolved - simple as that 😓