brunoerg opened 6 months ago
This is a really cool project, cannot wait to see it take off!
Just a note about the branch-and-bound provided by the bdk_coin_select crate... it is unique in the sense that it is actual branch and bound! However, the waste metric is not provided out-of-the-box (although anyone can implement any metric for BnB). This is because the waste metric is deemed too wasteful in certain scenarios. We only provide a single metric out-of-the-box, which we call the lowest-fee metric: it attempts to reduce the fee of the selection while taking into account the cost of spending the change output later (using the long-term feerate).
Helpful links:
- LowestFee metric: https://docs.rs/bdk_coin_select/latest/bdk_coin_select/metrics/struct.LowestFee.html

cc @LLFourn
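As a rough illustration of the trade-off such a lowest-fee objective balances, here is a hedged Python sketch; the function name, weights, and scoring formula are hypothetical simplifications for intuition, not bdk_coin_select's actual API or exact scoring:

```python
def lowest_fee_score(selected_weight_wu, target_value_sats, selected_value_sats,
                     feerate_sat_per_wu, longterm_feerate_sat_per_wu,
                     change_output_weight_wu, change_spend_weight_wu):
    """Hypothetical sketch of a lowest-fee style objective: the fee paid by
    this selection now, plus the long-term cost of any change it creates.
    All names and the formula are illustrative simplifications."""
    fee_now = selected_weight_wu * feerate_sat_per_wu
    creates_change = selected_value_sats > target_value_sats + fee_now
    change_cost = 0.0
    if creates_change:
        # Pay for the change output at today's feerate, and for spending it
        # later at the long-term feerate.
        change_cost = (change_output_weight_wu * feerate_sat_per_wu
                       + change_spend_weight_wu * longterm_feerate_sat_per_wu)
    return fee_now + change_cost
```

Under this sketch, a changeless selection that exactly covers target plus fee scores strictly lower than an otherwise identical selection that leaves change behind.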
It'll be cool to have some sort of differential testing (I'd love to hear ideas on how we should compare the outputs). My point is: don't expect bdk_coin_select's BnB result to be the same as other implementations'.
cc @nymius
Hello @brunoerg
I'm working on a coin-selection simulator. The idea is to have a common framework to run tests consistently over different Bitcoin coin-selection implementations. I mainly followed this simulator, but dropped the node requirements of the Bitcoin Core test framework and tried to generalize it so that more coin-selection projects can implement the needed methods and take advantage of the simulator without much work.
Currently the simulator has interfaces to bitcoindevkit/coin-select, bitshala/rust-coin-select, and an independent Python implementation of the Bitcoin Core algorithms. I implemented them as a proof of concept, so the code can be flaky.
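The adapter layer described above could look something like this hedged sketch (the interface and class names are hypothetical, not the simulator's actual code):

```python
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Utxo:
    value_sats: int   # amount held by the UTXO
    weight_wu: int    # weight of spending it as an input


class CoinSelection(Protocol):
    """Hypothetical common interface each implementation adapts to."""
    def select(self, utxos: list, target_sats: int,
               feerate_sat_per_wu: float) -> Optional[list]:
        ...


class LargestFirst:
    """Toy adapter: pick the largest UTXOs until the target plus the fee
    for the selected inputs is covered."""
    def select(self, utxos, target_sats, feerate_sat_per_wu):
        picked = []
        for u in sorted(utxos, key=lambda u: u.value_sats, reverse=True):
            picked.append(u)
            fee = sum(x.weight_wu for x in picked) * feerate_sat_per_wu
            if sum(x.value_sats for x in picked) >= target_sats + fee:
                return picked
        return None  # insufficient funds
```

Each real implementation (bdk_coin_select, rust-coin-select, the Python port) would sit behind its own adapter exposing the same `select` shape, so the simulator can drive them interchangeably.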
Now I'm trying to isolate the core of Bitcoin Core's coin-selection code, aiming to test it without having to deal with all the other aspects of a wallet.
I mention all this because I've never applied fuzzing myself, although I think I have an idea of how it works. I guess "differential testing" refers to a comparative approach, but I can't quite grasp the meaning of "target" in this context. My limited understanding makes me feel our projects are related, but I'd like to be corrected if not.
Do you already have some considerations about how this should be done?
@evanlinjin I propose this as a first candidate for an output-comparison proxy.
> I mention all this because I've never applied fuzzing myself, besides thinking that I have an idea of how it works. I guess "differential testing" refers to a comparative approach, but cannot get the meaning of "target" in this context. My poor understanding makes me feel our projects are correlated, but would like to be corrected if not.
Differential fuzzing is used to identify discrepancies in the behavior of different implementations of the same specification or protocol. It works by providing the same inputs to multiple implementations and observing their outputs. A "target" is basically the code that receives the bytes from the fuzzer, feeds them to the implementations under test, and acts as an oracle that compares the outputs.
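To make the "target" idea concrete, here is a hedged Python sketch of such a harness. The decoding scheme and oracle checks are hypothetical; a real target would drive the actual implementations and compare output properties (validity, fee bounds) rather than exact UTXO sets:

```python
def decode(data: bytes):
    """Turn the fuzzer's raw bytes into a coin-selection instance:
    every 2 bytes become one UTXO value; weight is fixed for simplicity."""
    utxos = []
    for i in range(0, len(data) - 1, 2):
        value = int.from_bytes(data[i:i + 2], "big")
        if value:
            utxos.append((value, 272))  # (value_sats, weight_wu)
    return utxos


def is_valid(selected, target_sats, feerate):
    """Oracle property: the selection covers the target plus the fee
    implied by its own input weight."""
    fee = sum(w for _, w in selected) * feerate
    return sum(v for v, _ in selected) >= target_sats + fee


def differential_target(data: bytes, select_a, select_b,
                        target_sats=50_000, feerate=0.25):
    """Hypothetical differential target: feed the same decoded instance to
    both implementations and check properties of each output, not exact
    equality (BnB results are not expected to be identical)."""
    utxos = decode(data)
    sel_a = select_a(utxos, target_sats, feerate)
    sel_b = select_b(utxos, target_sats, feerate)
    if sel_a is not None:
        assert is_valid(sel_a, target_sats, feerate)
    if sel_b is not None:
        assert is_valid(sel_b, target_sats, feerate)
    return sel_a, sel_b
```

In a real harness, `select_a` and `select_b` would wrap BDK's and Core's selection; stronger oracle checks (e.g. bounding the fee difference between the two selections) are where the comparison-proxy discussion above comes in.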
As @evanlinjin said, we do not expect BnB from Core and BDK to produce exactly the same results, so we need a way to compare their outputs; I'll take a look at your comparison code.
A target to perform differential fuzzing between BDK and Bitcoin Core, especially for BnB, would be nice!