instagibbs opened 5 years ago
Thanks @instagibbs!
Somehow, I still get stuck on the subset computation. Do you mind helping my understanding:
- `count` has length 2^n, where 2^n is the size of the powerset over n signers (indexed as `count[avail]`)
- `ones` is the size (the number of 1 bits of `i`) of the subset represented by `i`
- How is `avail &= avail - 1` related to a reduction of `ones` by 1?

Cheers.
If the size of the set is larger than k, how is `avail &= avail - 1` related to a reduction of `ones` by 1?
Your understanding seems correct. It's a bit trick: https://www.geeksforgeeks.org/turn-off-the-rightmost-set-bit/
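The trick is easy to verify in isolation; a minimal standalone sketch (the name `popcount` is mine, not from the PR):

```python
def popcount(x):
    """Count set bits by repeatedly turning off the rightmost one."""
    ones = 0
    while x:
        x &= x - 1  # clears exactly one set bit, so the count drops by 1 per step
        ones += 1
    return ones

# x & (x - 1) removes exactly the rightmost set bit
assert 0b10110 & (0b10110 - 1) == 0b10100
assert popcount(0b10110) == 3
```

So each application of `avail &= avail - 1` shrinks the subset by exactly one member, which is why the loop can use it to reduce `ones` by 1 at a time.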
Ah very nice, thanks @instagibbs
So as you loop through all sets in the powerset, sets larger than k are trimmed down to size k, beginning from the right side of the bit array.
Will mull it over a little more ... :)
Right, you are picking a set of weights that minimizes the "overlap" between the online signers at higher weights (biasing toward the front keys).
Concept ACK. Thanks for the contribution @instagibbs!
I think it's a great idea to have external case-study contributions. I don't know where the best place for these to live is (as chapters 3.x/in a separate directory/in a different branch/something else).
My focus right now is on getting the core chapters as polished as possible, so I won't be able to review this for a while. I've had a quick look and have some high-level feedback:
oops didn't mean to close
it'd be great to have an introductory paragraph explaining the use-case
Yeah, that's a good idea.
I've just been gnawing on this when I don't feel like doing real work, so I'll try to clean it up with new APIs and better motivation when I can (a federated hot wallet where HSMs can communicate with each other, then offline backup where you cannot bring data back and forth).
Looks pretty cool :)
Although if my calc is right, your leaf generation algorithm is somewhere between n^2 and n^2*log(n) (EDIT: 2^n and 2^n*C(n,k)*log(n)).
I think you can do it in the following time: let C(n,k) be the number of ways of choosing k from n (n!/(k! * (n-k)!)); then it's O(k)*C(n,k).
But here it's 3/5 so who cares :P
It's a one shot thing per k-of-n so should be fine up to whatever limit you can stomach 😂
i.e., we can precompute trees for every combination of k and n up to whatever.
Well, try to input n=15, k=11 :) (it took ~40 seconds on my machine), and with n=17 I gave up after 2.5 minutes hehe.
While implementing an algorithm I saw that Python already has one exactly for this, which is roughly O(k)*C(n,k):
```python
from itertools import combinations
n = range(17)
k = 11
a = combinations(n, k)
print(list(a))
```
Here 11-of-17 takes less than a second on my machine.
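For a quick sanity check on the sizes involved (standard library only; this snippet is mine, not from the PR):

```python
from itertools import combinations
from math import comb

n, k = 17, 11
# C(17, 11) = 12376 subsets; iterating lazily sidesteps building the full list
assert comb(n, k) == 12376
assert sum(1 for _ in combinations(range(n), k)) == comb(n, k)
```

Most of the cost in the snippet above comes from `list(a)` materializing (and printing) every subset, not from the enumeration itself.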
@elichai I might be misunderstanding, but that's not a solution. That simply computes the total set of leaves, it doesn't assign weights.
To elaborate: in this model, we want to take advantage of the times when all n are up (the taproot output pubkey), when n-1 are up, when n-2 are up, and so on, all the way down to k.
In the average case, all n are up. Sometimes 1 is down, infrequently 2 are down, and so on.
In the n-1 case, we can minimize the expected spending cost by preferring leaves that have the most unique subset of signers.
edit: Clearly, I have to work on an elevator pitch for this. It's a little frustrating to explain. Maybe I can explicitly talk about what the tree and average spend would look like with naive equal weighting? Updated the OP with a bit more explanation/history.
Just tested randomly assigning weights for n=17. While `combinations` is extremely fast, the vast majority of the time appears to be the MuSig computations themselves.
OK, now I get that I misunderstood what you did :) I'm no longer sure if/what suits this. Feel free to ignore my complaints (they were mostly jokes, as it's just a python example).
Concept ACK
I was able to successfully run the notebooks without error.
Thanks @instagibbs !
May be interesting to include, or just keep open as a PR for visibility
The biggest chunk to digest is the weight-assigning for the k-of-k MuSig leaves.
Rationale:
Suppose you assigned all the k-of-k leaves equal weighting, leading to a near-perfect tree. Each spend would cost the same, with H-height merkle branches.
The key insight here is that in the use-case considered, the expectation is near-perfect uptime for each node, so more than k nodes will usually be up and signing; we should exploit that redundancy by putting certain nodes' leaves closer to the root than others. One initial method I attempted was to explicitly create (k+1)-of-(k+1) MuSig, (k+2)-of-(k+2) MuSig, etc., and then weight these cases with more probability toward the higher-uptime case. The new method is to greedily pick the "least overlapping" k-subsets by assigning weights through brute-force counting of the "left-preferred" key subsets.
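A minimal sketch of that brute-force counting idea, as I understand it from the discussion above (the function name and the bit-to-signer mapping are my assumptions, not the PR's actual code):

```python
def leaf_weights(n, k):
    """For every one of the 2^n availability patterns, find which k-subset
    of signers would be preferred; the resulting counts become leaf weights."""
    count = {}
    for avail in range(2 ** n):           # every subset of online signers
        if bin(avail).count("1") < k:
            continue                       # fewer than k online: no leaf applies
        subset = avail
        while bin(subset).count("1") > k:
            subset &= subset - 1           # trim rightmost set bit until size k
        count[subset] = count.get(subset, 0) + 1
    return count

# 2-of-3: the all-up pattern 0b111 trims to 0b110, so that leaf is counted twice
assert leaf_weights(3, 2) == {0b011: 1, 0b101: 1, 0b110: 2}
```

Leaves with higher counts would then be placed closer to the taproot root, e.g. via Huffman-style weighting.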