Closed: pmconrad closed this issue 6 years ago
Note: the 3rd option (full solution) requires all client libraries (bitsharesjs, pybitshares, graphenej, etc.) and all clients that depend on those libraries to be upgraded so that they remain compatible both before and after the scheduled hard fork time. This requires extreme coordination effort; IMHO it is very hard to make it perfect.
Client libraries only require updates where sorting order is relevant. It's hard to judge where this is a "must", and probably not a library issue but an application issue.
We might get away with fewer transaction signatures if authorities were sorted by descending weight.
E.g. if an account has `authority(2, key1, 1, key2, 2)` (threshold 2, key1 with weight 1, key2 with weight 2), the current algorithm for irrelevant signature detection will accept a transaction signed by both keys if key1 comes before key2 in the list.
> the current algorithm for irrelevant signature detection will accept a transaction signed by both keys if key1 comes before key2 in the list.
Are you saying the irrelevant signature detection algorithm should be changed to reject a transaction signed by both key1 and key2? Or just that the validation of transaction signatures should check for key2, see that the threshold has been met, and skip the key1 check?
The algorithm walks through the (key, weight) pairs and sums up the weights of those for which it has a signature. If the threshold is met, the iteration stops. If there are signatures left over, it rejects the transaction. Pairs are currently sorted by key address.
If the (key,weight) pairs were sorted by descending weight, the threshold might be met earlier. In the above example, the algorithm would terminate after seeing key2 and complain about the irrelevant key1 signature.
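A minimal, self-contained sketch of that check, with `key_t` standing in for `public_key_type` and plain standard containers instead of graphene's types (illustrative only, not the actual implementation):

```cpp
#include <cstdint>
#include <set>
#include <string>
#include <utility>
#include <vector>

using key_t    = std::string;   // stand-in for public_key_type
using weight_t = uint32_t;

// Walk the (key, weight) pairs, sum the weights for which a signature is
// present, stop once the threshold is reached, and treat any signature
// that was never needed as irrelevant.
bool signatures_sufficient_and_relevant(
        const std::vector<std::pair<key_t, weight_t>>& key_auths,
        weight_t threshold,
        std::set<key_t> provided_sigs)
{
    weight_t total = 0;
    for (const auto& [key, weight] : key_auths)
    {
        auto it = provided_sigs.find(key);
        if (it == provided_sigs.end())
            continue;
        provided_sigs.erase(it);   // this signature has been "used"
        total += weight;
        if (total >= threshold)
            break;                 // threshold met, stop early
    }
    // Reject if the threshold was not met or unused (irrelevant)
    // signatures remain.
    return total >= threshold && provided_sigs.empty();
}
```

With the pairs sorted by key as today, the `authority(2, key1, 1, key2, 2)` example consumes both signatures before the threshold check stops the loop, so both count as relevant; sorted by descending weight, the loop stops after key2 and the key1 signature is reported as irrelevant.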
Thank you for the clarification. What I now have in my head is:
Current code does not reject a transaction that carries more signatures than necessary, because the authority's (key, weight) pairs are not sorted in descending order by weight.
Is the concern only one of performance? I am trying to understand why extra signatures need to be prevented. Thank you for your patience.
Yes, it's just a minor optimization because fewer signatures have to be validated. Not a critical issue.
Timings taken up to block 31,000,000 with various core versions seem to indicate that this is a non-issue.
| Operation (h:mm:ss) | 2.0.180823 | develop as of 2018-10-24 | PR #1401 |
|---|---|---|---|
| Replay | 2:00:21 | 1:58:15 | 1:59:23 |
| Resync | 3:35:51 | 3:46:15 | 3:44:55 |
| Resync (full validation) | 7:15:06 | 7:21:59 | 7:18:52 |
| Revalidate | n/a | n/a | 2:51:08 |
| Revalidate (full validation) | n/a | n/a | 6:13:40 |
(The resync runs were done at different times of day; fluctuations due to network conditions are to be expected.)
Presumably, in-order back-insertion into flat_maps and flat_sets is quite efficient. This is also what typically happens: on-chain data is already sorted correctly. The only exception might be the keys_to_add/remove fields in `proposal_update_operation`, which presumably don't contain multiple keys in the typical case.
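For illustration, a small sketch with Boost's `flat_set` (fc's flat containers are built on Boost's, though details may differ): when elements arrive already sorted, a hinted insert at the end avoids both the binary search and the element shifting, and an already sorted, duplicate-free range can be adopted in one linear pass.

```cpp
#include <boost/container/flat_set.hpp>
#include <cstdint>
#include <vector>

int main()
{
    // Data that is already sorted, as is typically the case for
    // on-chain sets of keys.
    std::vector<uint64_t> sorted_keys{1, 2, 3, 5, 8, 13};

    boost::container::flat_set<uint64_t> s;
    s.reserve(sorted_keys.size());

    // Hinted insertion at the end: each new element is greater than the
    // last one, so no elements need to be shifted and the insert is
    // effectively a push_back.
    for (uint64_t k : sorted_keys)
        s.insert(s.end(), k);

    // Alternatively, adopt an already sorted unique range in one go.
    boost::container::flat_set<uint64_t> s2(
        boost::container::ordered_unique_range,
        sorted_keys.begin(), sorted_keys.end());
}
```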
Closing as WONTFIX
**Bug Description**

*Remark:* I've categorized this as a bug although it technically isn't. But since it has significant impact on performance, and is caused by an obscure side effect (see #1151), I have created this issue as a bug report. Feel free to discuss.
**Root cause**

`public_key_type` has no `operator<` defined. It is still possible to compare public keys, through `operator<(address&, address&)`, which even happens automatically due to the `address` constructor taking a single `public_key_type` argument.

**Effect**

Constructing an `address` from a `public_key_type` is expensive, since it involves two hashes. For a single comparison this might even be acceptable; however, `public_key_type` is sometimes used as a sorting key in maps and sets, which require O(log(n)) such comparisons for each insert or lookup, and O(n*log(n)) for constructing a set of size n.
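To make the mechanism concrete, here is a minimal, compilable sketch of the conversion path described above (all names are illustrative stand-ins, not the fc/graphene types):

```cpp
#include <array>
#include <string>

// Stand-in for public_key_type; the real type lives in graphene.
struct public_key_type_sketch
{
    std::array<unsigned char, 33> key_data{};   // compressed key bytes
};

// Placeholders for the two hash steps the real key-to-address conversion
// performs (the actual cryptographic hashes are omitted).
static std::string hash_step_1(const std::array<unsigned char, 33>& d)
{ return std::string(d.begin(), d.end()); }
static std::string hash_step_2(const std::string& s)
{ return s; }

struct address_sketch
{
    std::string digest;

    // Non-explicit constructor: this is the "obscure side effect" -- it
    // lets two public keys be compared automatically, converting both
    // sides to addresses at the cost of two hashes each.
    address_sketch(const public_key_type_sketch& k)
        : digest(hash_step_2(hash_step_1(k.key_data))) {}
};

inline bool operator<(const address_sketch& a, const address_sketch& b)
{
    return a.digest < b.digest;
}

int main()
{
    public_key_type_sketch k1, k2;
    // Compiles because both operands are implicitly converted to
    // address_sketch; in a set or map this happens O(log(n)) times per
    // insert or lookup.
    return k1 < k2 ? 0 : 1;
}
```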
**Impacts**

Performance is impacted wherever public keys are compared. Part of this has been resolved in #1359. Code that involves authorities is still impacted, but that is mostly relevant to fully validating nodes, i.e. the witnesses.
Protocol impact comes from `authority::key_auths`, `proposal_update_operation::key_approvals_to_add` and `proposal_update_operation::key_approvals_to_remove`. Note that the sorting order must be preserved for on-chain operations, because otherwise signatures are invalidated.

API impact: some API operations accept and/or return sets of public keys. Changing the sorting order may impact clients. The protocol changes also affect the API with respect to blocks and transactions.
cli_wallet uses APIs, and also keeps maps of keys in its `wallet.json` file.

**Additional Context (optional)**

I see three options:
- Have `public_key_type` cache its address representation and use this cached representation for comparison. External impact: none; internal impact: minor; effect: mitigation, not a full solution.
- Define `operator<` for two `public_key_type`s. Protocol-related sets must be replaced with vectors and maps with vectors of pairs, uniqueness of keys must be checked in `validate()`, and until the hardfork the old sorting order must be checked in the evaluator. External impact: protocol + API change; internal impact: medium; effect: full solution after hardfork date.
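To illustrate the difference between the mitigation and the full solution, a hedged sketch (types, members and the derivation are placeholders, not the actual graphene code):

```cpp
#include <array>
#include <cstring>
#include <optional>
#include <string>

struct public_key_type_sketch
{
    std::array<unsigned char, 33> key_data{};   // compressed key bytes

    // "Cache the address" option: compute the address representation once
    // and reuse it for every later comparison. The sorting order stays the
    // same, so nothing changes on-chain, but each key still pays for the
    // hashes once.
    mutable std::optional<std::string> cached_address;

    const std::string& address() const
    {
        if (!cached_address)
            cached_address = derive_address();   // placeholder derivation
        return *cached_address;
    }

private:
    std::string derive_address() const
    {
        // Stand-in for the real two-step hash derivation.
        return std::string(key_data.begin(), key_data.end());
    }
};

// Mitigation: same ordering as today, but no repeated hashing.
inline bool less_by_cached_address(const public_key_type_sketch& a,
                                   const public_key_type_sketch& b)
{
    return a.address() < b.address();
}

// Full solution: compare the raw key bytes directly. This is just a
// memcmp, but it changes the sorting order, which is why the vector-based
// protocol containers, the validate() checks and a hardfork are needed.
inline bool operator<(const public_key_type_sketch& a,
                      const public_key_type_sketch& b)
{
    return std::memcmp(a.key_data.data(), b.key_data.data(),
                       a.key_data.size()) < 0;
}
```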