Closed: ch1bo closed this 4 weeks ago
Sizes and execution budgets for Hydra protocol transactions. Note that unlisted parameters currently use arbitrary values, so results are not fully deterministic or comparable to previous runs.
Metadata | |
---|---|
Generated at | 2024-08-15 16:19:16.846805438 UTC |
Max. memory units | 14000000 |
Max. CPU units | 10000000000 |
Max. tx size (B) | 16384 |
Name | Hash | Size (Bytes) |
---|---|---|
νInitial | 2fac819a1f4f14e29639d1414220d2a18b6abd6b8e444d88d0dda8ff | 3799 |
νCommit | 2043a9f1a685bcf491413a5f139ee42e335157c8c6bc8d9e4018669d | 1743 |
νHead | bd9fad235c871fb7f837c767593018a84be3083ff80f9dab5f1c55f9 | 10194 |
μHead | c8038945816586c4d38926ee63bba67821eb863794220ebbd0bf79ee* | 4607 |
Init transaction costs

Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|
1 | 5190 | 5.81 | 2.30 | 0.44 |
2 | 5387 | 7.13 | 2.82 | 0.47 |
3 | 5590 | 8.59 | 3.40 | 0.49 |
5 | 5995 | 11.32 | 4.48 | 0.54 |
10 | 6996 | 18.02 | 7.12 | 0.66 |
56 | 16244 | 81.53 | 32.25 | 1.76 |
Commit transaction costs

This uses ada-only outputs for better comparability.
UTxO | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|
1 | 556 | 10.52 | 4.15 | 0.29 |
2 | 745 | 13.86 | 5.65 | 0.34 |
3 | 933 | 17.33 | 7.20 | 0.38 |
5 | 1313 | 24.65 | 10.44 | 0.48 |
10 | 2248 | 45.22 | 19.36 | 0.75 |
20 | 4125 | 95.99 | 40.76 | 1.40 |
CollectCom transaction costs

Parties | UTxO (bytes) | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|---|
1 | 57 | 549 | 22.14 | 8.66 | 0.42 |
2 | 114 | 659 | 33.03 | 13.08 | 0.54 |
3 | 170 | 769 | 45.15 | 18.08 | 0.68 |
4 | 228 | 879 | 58.49 | 23.65 | 0.83 |
5 | 284 | 989 | 75.61 | 30.75 | 1.03 |
6 | 337 | 1100 | 98.45 | 40.06 | 1.28 |
Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|
1 | 626 | 17.71 | 7.79 | 0.38 |
2 | 759 | 19.11 | 9.07 | 0.40 |
3 | 976 | 21.67 | 10.78 | 0.45 |
5 | 1182 | 22.80 | 12.64 | 0.48 |
10 | 2098 | 33.30 | 20.31 | 0.66 |
47 | 7603 | 96.83 | 72.05 | 1.79 |
Close transaction costs

Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|
1 | 634 | 20.98 | 9.39 | 0.42 |
2 | 787 | 22.49 | 10.82 | 0.44 |
3 | 933 | 23.92 | 12.21 | 0.47 |
5 | 1244 | 27.08 | 15.28 | 0.53 |
10 | 2077 | 35.91 | 23.58 | 0.70 |
50 | 7919 | 98.54 | 83.35 | 1.90 |
Contest transaction costs

Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|
1 | 662 | 27.16 | 11.67 | 0.48 |
2 | 805 | 28.94 | 13.18 | 0.51 |
3 | 936 | 30.66 | 14.67 | 0.54 |
5 | 1329 | 35.03 | 18.39 | 0.62 |
10 | 2038 | 43.89 | 26.13 | 0.78 |
38 | 6454 | 97.81 | 72.58 | 1.75 |
Abort transaction costs

There is some variation due to the random mixture of initial and already committed outputs.
Parties | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|
1 | 5060 | 17.40 | 7.57 | 0.57 |
2 | 5176 | 28.47 | 12.47 | 0.70 |
3 | 5376 | 42.54 | 18.84 | 0.87 |
4 | 5358 | 55.52 | 24.49 | 1.01 |
5 | 5479 | 72.65 | 32.15 | 1.21 |
6 | 5664 | 90.72 | 40.25 | 1.42 |
FanOut transaction costs

Involves spending head output and burning head tokens. Uses ada-only UTxO for better comparability.
Parties | UTxO | UTxO (bytes) | Tx size | % max Mem | % max CPU | Min fee ₳ |
---|---|---|---|---|---|---|
5 | 0 | 0 | 5022 | 7.75 | 3.28 | 0.46 |
5 | 1 | 57 | 5056 | 9.08 | 4.08 | 0.48 |
5 | 5 | 284 | 5192 | 13.41 | 6.84 | 0.54 |
5 | 10 | 570 | 5362 | 19.06 | 10.39 | 0.62 |
5 | 20 | 1134 | 5696 | 30.19 | 17.43 | 0.77 |
5 | 30 | 1708 | 6042 | 41.70 | 24.63 | 0.93 |
5 | 40 | 2279 | 6384 | 53.23 | 31.84 | 1.09 |
5 | 50 | 2846 | 6721 | 64.56 | 38.97 | 1.25 |
5 | 81 | 4611 | 7772 | 99.73 | 61.10 | 1.74 |
This page is intended to collect the latest end-to-end benchmark results produced by Hydra's continuous integration (CI) system from the latest master
code.
Please note that these results are approximate as they are currently produced from limited cloud VMs and not controlled hardware. Rather than focusing on the absolute results, the emphasis should be on relative results, such as how the timings for a scenario evolve as the code changes.
Generated at 2024-08-15 16:21:36.359856432 UTC

Number of nodes | 1 |
---|---|
Number of txs | 3000 |
Avg. Confirmation Time (ms) | 4.127247796 |
P99 (ms) | 7.150475669999984 |
P95 (ms) | 5.024426099999995 |
P50 (ms) | 3.9332425 |
Number of Invalid txs | 0 |

Number of nodes | 3 |
---|---|
Number of txs | 9000 |
Avg. Confirmation Time (ms) | 23.531029503 |
P99 (ms) | 116.73508501000002 |
P95 (ms) | 32.838504549999996 |
P50 (ms) | 20.6449925 |
Number of Invalid txs | 0 |
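For readers unfamiliar with the percentile figures above, here is a minimal sketch of how such confirmation-time statistics can be computed, using the nearest-rank method; this is an illustration only, and the benchmark's own statistics code may use a different interpolation:

```haskell
import Data.List (sort)

-- Nearest-rank percentile over a non-empty list of confirmation times
-- in milliseconds. A minimal sketch, not the actual benchmark code.
percentile :: Double -> [Double] -> Double
percentile p xs = sorted !! max 0 (ceiling (p / 100 * n) - 1)
 where
  sorted = sort xs
  n = fromIntegral (length xs)

mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = do
  let times = [3.9, 4.1, 4.4, 5.0, 7.2] -- hypothetical samples
  print (mean times, percentile 50 times, percentile 95 times)
```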
469 tests +1 | 462 :white_check_mark: +1 | 7 :zzz: ±0 | 0 :x: ±0 | 150 suites +1 | 5 files ±0 | 19m 41s :stopwatch: +2m 17s
Results for commit 419fe95b. ± Comparison against base commit e0cc2c13.
:recycle: This comment has been updated with latest results.
I think I get what happened here, at least partly, and it seems fine for the solution to be `foldl` instead of `foldr` (maybe the comment should say *`foldl` is required instead of `foldr` here*; the commit shows this context but the comment alone doesn't); but I wonder if there's something ever so slightly more explicit: explicitly order `localTxs` first, so it doesn't depend on details of how that list is built up?
Thanks. I'll try to improve the comment on the code. Regarding ordering `localTxs`: a topological order would be doable, but because `localTxs` is only updated after a transaction has been applied to the local seen ledger `localUTxO`, the list is always in a valid order; the pruning then only re-validates it. Also, sorting would kill performance here.
> Also, sorting would kill performance here.

Fine. I was hoping it would be possible to have some kind of type error that would align the requirement of `a <> [b]` and `foldl`, but I'm okay if it's a bit too much to ask for at the moment; and fair enough if we don't want to always sort.
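To make the fold-direction point concrete, here is a toy sketch (the `Tx`, `applyTx`, and `pruneL` names are illustrative, not the actual Hydra types): when each pending transaction spends its predecessor's output, a left fold re-applies them in submission order and keeps them all, while processing the same set in the reversed order silently drops still-valid transactions:

```haskell
import Data.List (foldl')

-- Toy model: a transaction spends one input and produces one output.
data Tx = Tx { txIn :: Int, txOut :: Int } deriving (Show, Eq)

-- Apply a transaction to the available outputs; Nothing if its input
-- is not available (the "swallowed" validation error).
applyTx :: [Int] -> Tx -> Maybe [Int]
applyTx utxo (Tx i o)
  | i `elem` utxo = Just (o : filter (/= i) utxo)
  | otherwise     = Nothing

-- Re-apply pending transactions left-to-right, dropping any that no
-- longer apply; this mirrors a foldl accumulating with kept <> [tx].
pruneL :: [Int] -> [Tx] -> [Tx]
pruneL utxo0 = snd . foldl' step (utxo0, [])
 where
  step (utxo, kept) tx = case applyTx utxo tx of
    Just utxo' -> (utxo', kept <> [tx])
    Nothing    -> (utxo, kept)

-- A chain: each transaction spends the previous one's output.
chain :: [Tx]
chain = [Tx 0 1, Tx 1 2, Tx 2 3]

main :: IO ()
main = do
  print (pruneL [0] chain)           -- all three survive
  print (pruneL [0] (reverse chain)) -- only Tx 0 1 survives
```

Swapping the fold direction over the same list has exactly the effect of reversing it, which is why `foldl` vs `foldr` matters here.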
Summary: Before, the node would re-apply local transactions in the wrong order, which would throw away local state even though the transactions would still apply.
This issue was discovered when we issued very long "transaction chains" in the https://github.com/cardano-scaling/hydra-doom project:
Scenario: A web client rapidly creates many transactions which "re-spend" the previous transaction's output right away, i.e. given a transaction with id `17c6bd63500a219b5ea55eb6600a91ec5c335e392b6de45e52895533971af7b3`, the next transaction constructed and submitted spends from input `17c6bd63500a219b5ea55eb6600a91ec5c335e392b6de45e52895533971af7b3#0`.

First failing tx: `639ea19954e8f57bf140545d515e228cc16d983253409c4887587fcba5415a5c`
Client sends
Node receives
Node processes ReqTxs

Node logic sequence:
- ReqTx 7cc9dede4649686c65e660639ea37498a64fec975ef624d504276f8c89c17d57
- ReqTx 17c6bd63500a219b5ea55eb6600a91ec5c335e392b6de45e52895533971af7b3
- ReqSn 8317 [7cc9dede4649686c65e660639ea37498a64fec975ef624d504276f8c89c17d57]
- ReqTx 54abc2a6339ca21650580d7064adf0a67501f38c1f9534609244a7e965e7f309
- NewTx 639ea19954e8f57bf140545d515e228cc16d983253409c4887587fcba5415a5c
- ReqSn 8318 ["17c6bd63500a219b5ea55eb6600a91ec5c335e392b6de45e52895533971af7b3","54abc2a6339ca21650580d7064adf0a67501f38c1f9534609244a7e965e7f309"]
  - `newLocalUTxO` of state change `SnapshotRequested` does NOT include `3223df5097b97a628f014a99faa8eb52d6081db1f89b50d8177756fa68c87fdb#0`!
- ReqTx 639ea19954e8f57bf140545d515e228cc16d983253409c4887587fcba5415a5c
- AckSn 8318
- ReqTx c75c070850e0e05af96b16f52017285e138f020297f7a2cf78f55ced9c5cad41
Hypothesis:

After seeing `NewTx` of `639ea19954e8f57bf140545d515e228cc16d983253409c4887587fcba5415a5c` and before processing it in a `ReqTx`, we have `localTxs` (reconstructed):

- 17c6bd63500a219b5ea55eb6600a91ec5c335e392b6de45e52895533971af7b3
- 54abc2a6339ca21650580d7064adf0a67501f38c1f9534609244a7e965e7f309
- 8297f936753fe330b661d7cd37907f292c75fe4319e0661a0408eb6954b9eafe
- 3223df5097b97a628f014a99faa8eb52d6081db1f89b50d8177756fa68c87fdb
and `newLocalUTxO`:

The node had previously decided to do a snapshot of `["17c6bd63500a219b5ea55eb6600a91ec5c335e392b6de45e52895533971af7b3", "54abc2a6339ca21650580d7064adf0a67501f38c1f9534609244a7e965e7f309"]`, and when processing that `ReqSn` we now end up with a snapshot `utxo`:

Then, pruning local txs results in:
which is notably missing `3223df5097b97a628f014a99faa8eb52d6081db1f89b50d8177756fa68c87fdb#0` and `3223df5097b97a628f014a99faa8eb52d6081db1f89b50d8177756fa68c87fdb` respectively!

This can happen if the pending `localTxs` are applied in the wrong sequence to the local UTxO state, especially as we are swallowing any validation errors in the `pruneTransactions` function in `Hydra.HeadLogic`.
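As a thought experiment on the reviewer's suggestion to not depend on how the pending list was built up, here is a self-contained toy sketch (illustrative types only, not the actual `Hydra.HeadLogic` code) of order-independent pruning: repeatedly apply any pending transaction that currently validates until none does. Its quadratic worst case hints at why a sorting/retrying approach was considered too slow in the discussion:

```haskell
-- Toy ledger model (illustrative only): a transaction spends one
-- input and creates one output.
data Tx = Tx { txIn :: Int, txOut :: Int } deriving (Show, Eq)

applyTx :: [Int] -> Tx -> Maybe [Int]
applyTx utxo (Tx i o)
  | i `elem` utxo = Just (o : filter (/= i) utxo)
  | otherwise     = Nothing

-- Order-independent pruning: pick any applicable pending transaction,
-- apply it, and repeat until no pending transaction applies. The
-- result no longer depends on the order of the input list, but each
-- round scans all remaining transactions (quadratic worst case).
pruneAnyOrder :: [Int] -> [Tx] -> [Tx]
pruneAnyOrder = go
 where
  go utxo pending = case pick utxo pending of
    Nothing                -> []
    Just (tx, utxo', rest) -> tx : go utxo' rest
  pick utxo pending =
    case [(tx, utxo') | tx <- pending, Just utxo' <- [applyTx utxo tx]] of
      []              -> Nothing
      (tx, utxo') : _ -> Just (tx, utxo', filter (/= tx) pending)

main :: IO ()
main =
  -- Even with the dependency chain shuffled, all transactions survive.
  print (pruneAnyOrder [0] [Tx 2 3, Tx 0 1, Tx 1 2])
```

This is only a sketch of the trade-off: it buys order independence at a cost that the thread explicitly rejected ("sorting would kill performance here"), which supports keeping the `foldl`-over-submission-order approach.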