dr-orlovsky opened 3 years ago
To be clear, we don't actually guarantee at the API level that you can't create out-of-range `Amount`s. You can do this right now by e.g. adding together two large `Amount`s.
Also as Sanket says we should be careful to check whether we actually want to make out-of-range amounts be unparseable ... does this match Bitcoin Core?
Also what is the sensible range for `SignedAmount`? If you subtract two `Amount`s should you get a `SignedAmount`?
Seems like Bitcoin Core is perfectly willing to `decoderawtransaction` transactions with outputs in excess of 21MM.
I'm not sure if this is desirable. It's a consensus-level limit, not a data-type limit. It just doesn't belong in this layer, and would be an overhead and an artificial constraint. Someone might want to use `Amount` to calculate some aggregate over the history of the blockchain, etc., and just need the ability to store values above the consensus limit. From the perspective of a user of this library, I don't see how protecting my code from going above the consensus limit protects me from anything meaningful.
I think it would be quite nice to limit the amount range - when you write Bitcoin code that deals with amounts you very, very regularly have to check the range to avoid overflow, and doing it at the data type level with `Result`s for arithmetic operations makes it much harder to accidentally have an overflow which can cause panic or incorrect behavior. Of course actually doing this may be annoying, but in theory I like it.
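A minimal sketch of what such `Result`-based arithmetic could look like - `BoundedAmount` and its methods are hypothetical illustrations here, not the actual rust-bitcoin API:

```rust
// Hypothetical sketch only; `BoundedAmount` is illustrative, not rust-bitcoin API.
const MAX_MONEY: u64 = 21_000_000 * 100_000_000; // 21M BTC in satoshis

#[derive(Copy, Clone, Debug, PartialEq)]
struct BoundedAmount(u64);

impl BoundedAmount {
    fn from_sat(sat: u64) -> Result<Self, &'static str> {
        if sat <= MAX_MONEY { Ok(BoundedAmount(sat)) } else { Err("amount out of range") }
    }

    // Both operands are <= MAX_MONEY, so the u64 addition itself cannot
    // overflow; only the range check can fail.
    fn checked_add(self, rhs: Self) -> Result<Self, &'static str> {
        Self::from_sat(self.0 + rhs.0)
    }
}

fn main() {
    let a = BoundedAmount::from_sat(1_000).unwrap();
    let b = BoundedAmount::from_sat(2_000).unwrap();
    assert_eq!(a.checked_add(b), Ok(BoundedAmount(3_000)));
    assert!(BoundedAmount::from_sat(MAX_MONEY + 1).is_err());
    assert!(BoundedAmount::from_sat(MAX_MONEY).unwrap()
        .checked_add(BoundedAmount::from_sat(1).unwrap())
        .is_err());
    println!("ok");
}
```

Note how the 21M cap makes the plain `+` inside `checked_add` provably safe, which is part of the argument made below.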
Looks like we just need a dedicated `Amount21m` type to simplify the life of wallet devs - and leave `Amount` as it is
I'm honestly curious - are there any examples showing where the consensus limit of 21M is actually useful? My (possibly naive and wrong) intuition tells me that such a check is always redundant and without merit. Even in consensus code one would check that the sum of outputs is larger than the sum of inputs, or that the coinbase is smaller or equal to the value dependent on the block height. In any scenario I can come up with, one would need some more important, fundamental checks (like inputs vs outputs in a tx, or LN funding transaction output value vs closing channel output value), which make the 21M limit completely redundant.
For all practical purposes the 21M BTC limit does not even actually exist anywhere in the system. It's just a derivative of the coinbase reward schedule that people enjoy talking about as a marketing point, no?
The reason the check exists in Bitcoin Core at all is due to the money-printing overflow bug in early versions of Bitcoin. Similar overflow bugs can exist in many, many places, including in the comparison checks you mention. Checking for overflow during arithmetic is something our amount type needs to do, and further restricting the range to 21M is just a little bit safer way of doing that.
(Sent by email on Aug 2, 2021, in reply to Dawid Ciężarkiewicz's comment quoted above.)
Similar overflow bugs can exist in many, many places, including in the comparison checks you mention.
Of course they can, but I can't see how 21M is special here. If a bug like this happens and instead of 100M BTC one is able to produce 10M BTC (still under 21M), or even 1M, or even +500 sats more than the right amount, is this any less of a problem? If a LN implementation allows for creating an invalid channel refund tx by just an off-by-one error, is this not catastrophic?
Checking for overflow during arithmetic is something our amount type needs to do, and further restricting the range to 21M is just a little bit safer way of doing that.
This "just a little bit safer" seems purely "intuitive" in the bad sense of the word. The overflow protection - yes, that one makes sense, and reliably squashes a whole class of problems. But either the overflow protection works, or it doesn't. There's no point in second-guessing it.
Adding an artificial 21M range check everywhere seems like voodoo engineering to me, just needlessly wasting performance and limiting the use of the general purpose type for things like aggregates. And the fact that it might "feel" like it is making anything "safer" is just a false sense of security, which is more of a problem than the imaginary benefits of it.
Sure, it's not magic, but 21M also happens to be the lowest possible value you can pick that won't cause issues. It's not at all "voodoo engineering", it's being as conservative as possible, which is totally something any self-respecting Bitcoin software should do. There are a few types of things where this could be useful, most especially where we have a value and then user code converts them to satoshis before adding a fixed number of values together. 21M being as conservative as we can be ensures that it's potentially a lot harder to hit that overflow in user code, which is an absolutely massive win on our part.
Remember also that Bitcoin's 21M limit fits exactly in a 64 bit float's mantissa with only one extra bit - meaning you can add two 21M-capped values together as a float and not risk precision loss, but you cannot add two arbitrary integers as such.
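A quick sanity check of that mantissa claim (21M BTC in satoshis needs 51 bits, inside f64's 53-bit significand):

```rust
fn main() {
    const MAX_MONEY: u64 = 21_000_000 * 100_000_000; // 2_100_000_000_000_000 sats

    // 21M BTC in sats fits well within f64's 53-bit mantissa,
    // so even the sum of two capped values round-trips exactly.
    assert!(MAX_MONEY < (1u64 << 53));
    let a = MAX_MONEY as f64;
    assert_eq!((a + a) as u64, 2 * MAX_MONEY);

    // An arbitrary large u64, by contrast, already loses precision
    // on the way into f64.
    let big = u64::MAX - 1;
    assert_ne!(big as f64 as u64, big);
    println!("ok");
}
```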
One more weak argument: having 21M limit hard-coded in non-Core software adds more disincentive to inflationary hard fork. :)
it's being as conservative as possible, which is totally something any self-respecting Bitcoin software should do.
I'm sorry, but that's exactly how cargo-cult/voodoo engineering works. Instead of rational case-specific arguments, it's about "what a good Bitcoiner should do", "self-respect" and in-group image, using vague words like "conservative". None of this is good engineering. I'll leave it at that.
There are a few types of things where this could be useful, most especially where we have a value and then user code converts them to satoshis before adding a fixed number of values together.
That's still very vague, jumping to the self-assuring conclusion that "it's a massive win". What exactly is the scenario here, and how did it actually win anything in the end-to-end scenario? Was money loss / consensus failure prevented? Why was the user code summing stuff up, and why was detecting going over 21M meaningful at all, in a way that a check against 19M or 22M wouldn't be? I guess I'm asking for at least one well explained and thought-out example of a realistic enough application that was saved by that range check where it would fail without it.
I know that I am a PITA here because I put the burden of proof on other people, and maybe all in all I'm in the wrong here and will have to admit it soon, if there is a good argument and example to be made in favor of this, but so far I haven't seen anything more than "it feels safer so it must be a good idea".
The drawbacks of this range check are:
- it limits usability of this general purpose type
- it has a runtime performance cost
- it provides a false sense of safety
Not an awful lot, but enough IMO to ask for a decent justification.
Remember also that Bitcoin's 21M limit fits exactly in a 64 bit float's mantissa with only one extra bit
Off topic, but I pretty much always roll my eyes when I interact with Bitcoin Core and have to deal with damn floats in accounting software.
One more weak argument: having 21M limit hard-coded in non-Core software adds more disincentive to inflationary hard fork.
@Kixunil I know that you're kidding, but I can't help but notice that it's not really true in any meaningful sense, and it is mixing ideological considerations with mundane implementation aspects, which is not good for either of them.
but so far I haven't seen anything more than "it feels safer so it must be a good idea".
I'm frankly pretty gobsmacked that this is at all controversial. I'm not arguing 21M is somehow a perfect answer, but just checking for overflows is definitely not as good here. As I said, if a user takes `Amount`s and `as_sat()`s them, then adds them together, it's easy to hit an overflow if we just allow any 64-bit value. That is almost certainly a bug, likely with security implications, in user code. Sure, it would also be resolved if we limited the value to 2^64 / (reasonable divisor), but why overthink things when we have a perfectly reasonable limit hard-coded in the definition of Bitcoin?
This isn't theoretical; Bitcoin has had critical vulnerabilities as a result of overflow issues, and if we start with values bounded by 21M, you have to add up a lot of 21M values before you hit an overflow, instead of having to add two. E.g. in rust-lightning we add two values together all the time; if those values are bounded by 21M, there's no overflow risk.
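For concreteness, the headroom this buys can be checked with plain arithmetic (the result matches the ~8784 figure mentioned later in the thread):

```rust
fn main() {
    const MAX_MONEY: u64 = 2_100_000_000_000_000; // 21M BTC in satoshis

    // number of max-valued (21M BTC) amounts that fit in a u64
    let headroom = u64::MAX / MAX_MONEY;
    assert_eq!(headroom, 8784);

    // 8784 of them sum without overflow; one more overflows
    assert!(8784u64.checked_mul(MAX_MONEY).is_some());
    assert!(8785u64.checked_mul(MAX_MONEY).is_none());
    println!("{}", headroom);
}
```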
Frankly, I consider it a Rust oversight that overflows aren't checked by default (given the check is literally free in nearly all practical applications today, unless you're literally limited by program code size, which is pretty rare), but it is what it is.
it limits usability of this general purpose type
Does it materially? More broadly, this isn't a general-purpose type, this is a bitcoin amount.
it has a runtime performance cost
If you can identify this in any benchmark anywhere, I'll shut up and move on, but this ignores lots of practical microarchitectural features in modern CPUs. :facepalm:
it provides a false sense of safety
I'm not sure why you'd say that? I don't know why a user would see a 21M check and say "welp, means amounts can never be out of range" instead of just "well, the amounts won't overflow", which is what this provides.
I pretty much always roll my eyes when I interact with Bitcoin Core and have to deal with damn floats in an accounting software.
Same here but too many people use floats in practice :man_shrugging:
it's not really true in any meaningful sense
One either has to rewrite all other software or face unreliability. I think this case was actually experienced in `electrs` when someone tried to use it with a shitcoin (don't remember the details). Yeah, it was mostly a joke but with some element of truth in it.
I consider it a rust oversight that overflows aren't checked by default
IDK, `try!(try!(a + b) + c)` would be pretty annoying at that time, even with `?` it's not nice. Panics are somewhat interesting but AFAIK you can turn `debug_assertions` on in release builds.
IDK, try!(try!(a + b) + c) would be pretty annoying at that time, even with ? it's not nice. Panics are somewhat interesting but AFAIK you can turn debug_assertions on in release builds.
Yes, I meant panics, similar to the way they work in debug mode. AFAIU, they're undefined (though in practice they wrap), and it's super obnoxious that they panic in debug by default but don't panic in release by default (I don't care if you can turn it off or not), at least when there's zero runtime cost (which on today's CPUs is true - an untaken branch based on a flag already set in a register is free, unless you're constrained by the extra byte or two of program code).
AFAIU, they're undefined (though in practice wrap)
It's defined to wrap if `debug_assertions` is off. I think I see how it can be annoying although I've seen worse. (Drop behavior in Swift changing based on mode comes to mind.)
I'm not arguing 21M is somehow a perfect answer, but just checking for overflows is definitely not as good here.
I'm fine with the fact that overflow checking alone might not be enough, but the 21M limit feels like a band-aid. And the fact that it intuitively feels like a no-brainer is a telltale sign that the problem was not explored enough and there might be better ideas.
As I said, if a user takes Amounts and as_sat()s them, then adds them together, its easy to hit an overflow if we just allow any 64-bit value.
But it only takes (please correct my math)... 8784 additions to hit the overflow with the 21M limit. If the user forgot about the overflow when adding stuff, the chances are high that they are not adding just two numbers, but running it in a loop, and the bounds of that loop might as well be above that limit. So this approach still has holes. And the fact that `Amount` now makes some guarantees about the range might as well backfire by making users too careless.
If `as_sat` is a potential source of critical bugs, why not address that? Why have something that returns an unprotected `u64` so casually, which invites security issues? Maybe `as_sats()` should return a newtype that forbids math or something? That would be the idiomatic Rust way - we have this neat, convenient and powerful type system, why not use it to guarantee that the problem can't happen no matter what?
try!(try!(a + b) + c) would be pretty annoying at that time,
I agree. But what if any addition on `Amount` returned some `AmountOrOverflow` that works more like a `NaN` value that would be checked exactly once at the end of all the math?
So this works:
let sum: Amount = amount.add(other_amount).add(another_amount).check()?;
let sum: Amount = (amount + other_amount + another_amount).check()?;
// you can add as much stuff as you wish, it will still force you to check once and only once
let sum: Amount = outputs.iter().fold(Amount(0), |a, b| a + b).check()?;
I think this is totally doable, ergonomic, and works 100% of the time, no questions asked! No magic numbers and crossed fingers involved, like we're still savages using stone-age programming languages. :D
And for users that really really really need `u64`, you can add an `amount.get_sats_u64_i_need_integer_overflow_issues_in_my_life()` escape hatch, that is a huge eyesore and reminds them that they are playing with fire.
Or maybe there are other designs that fix the problem reliably, with a good tradeoff between user-proofing and ergonomics.
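One possible shape of that NaN-like design, as a rough sketch - the `MaybeAmount` name and details here are made up for illustration; @dpc's actual PoC is in the linked repo:

```rust
// Rough sketch of the NaN-like accumulator idea. Additions never panic;
// overflow is latched and only surfaces at the final `check()`.
#[derive(Copy, Clone, Debug, PartialEq)]
enum MaybeAmount {
    Valid(u64),
    Overflow,
}

impl std::ops::Add for MaybeAmount {
    type Output = MaybeAmount;
    fn add(self, rhs: MaybeAmount) -> MaybeAmount {
        match (self, rhs) {
            (MaybeAmount::Valid(a), MaybeAmount::Valid(b)) => match a.checked_add(b) {
                Some(sum) => MaybeAmount::Valid(sum),
                None => MaybeAmount::Overflow,
            },
            _ => MaybeAmount::Overflow, // overflow is sticky
        }
    }
}

impl MaybeAmount {
    fn check(self) -> Result<u64, &'static str> {
        match self {
            MaybeAmount::Valid(v) => Ok(v),
            MaybeAmount::Overflow => Err("amount overflow"),
        }
    }
}

fn main() {
    // add as much as you wish; check once at the end
    let outputs = [1_000u64, 2_000, 3_000];
    let sum = outputs
        .iter()
        .fold(MaybeAmount::Valid(0), |acc, &v| acc + MaybeAmount::Valid(v));
    assert_eq!(sum.check(), Ok(6_000));

    let bad = MaybeAmount::Valid(u64::MAX) + MaybeAmount::Valid(1);
    assert!(bad.check().is_err());
    println!("ok");
}
```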
TBH, I find `NaN`-like behavior interesting even though it's not exactly ordinary Rust.
this approach still has holes
We're not trying to build a perfect solution that solves all problems, that's up to our users. But pick any arbitrary number and there's a number of additions to hit overflow. E.g. lightning has up to 400-some-odd outputs per transaction, so that could reasonably be your divisor and you'd pick a number based on that.
If you feel really strongly against 21M, sure, let's pick NNN million, I don't care?
If as_sat is potential source of critical bugs, why not address that? Why have something that returns unprotected u64 so casually, which invites security issues. Maybe as_sats() should return newtype that forbids math or something? That would be the idiomatic Rust way - we have this neat, convenient and powerful type system, why not use it to guarantee that the problem can't happen no matter what?
Because we live in the real world, and users need numbers from time to time. Yes, we should also make `Amount` not overflow by default. Dropping unchecked addition and replacing it with `checked_add` and friends is also a good idea, but...
I'm still somewhat astounded that this is even a debate. Yeah, it's not perfect, but so what. Why is this a bad idea? You haven't in any way communicated why this is seriously limiting in terms of usability. There isn't a "burden of proof" one way or the other here. It's software: if someone writes a patch, it gets weighed against not having that patch, and if it's better than not, it can land.
Yeah, it's not perfect, but so what.
Just my opinion, but I can't help but see it as a crude and helpless workaround that might have been justified in a lesser programming language incapable of a more robust approach. It's neither necessary nor sufficient, and it makes a minor but undesirable sacrifice, limiting the usefulness of an otherwise universal interoperability type in the cornerstone crate of the composable Rust Bitcoin ecosystem, making it unsuitable for representing things like e.g. the total value routed through a LN node over years, the value of transactions using segwit over an arbitrary period of time, or the total public debt of the El Salvador Interplanetary Empire in year 2166, right before the peak of the next debt supercycle.
The limit of 21M could (should?) be enforced for `value` in `impl consensus::Decodable for TxOut` and `impl consensus::Encodable for TxOut`. In this context it makes a lot of sense. Larger values are meaningfully illegal/invalid, and these operations are already fallible. Seems like a lot of potential overflow attacks would have to go through these two anyway, so it would largely accomplish the same thing, without any downsides, no?
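A sketch of what such a decode-time check could look like - `decode_txout_value` is an illustrative helper, not the actual `consensus::Decodable` implementation:

```rust
const MAX_MONEY: u64 = 21_000_000 * 100_000_000;

// Illustrative MoneyRange-style check at the deserialization boundary;
// decoding is already fallible, so rejecting out-of-range values adds no
// new error paths for callers.
fn decode_txout_value(raw: u64) -> Result<u64, &'static str> {
    if raw <= MAX_MONEY {
        Ok(raw)
    } else {
        Err("TxOut value exceeds MAX_MONEY")
    }
}

fn main() {
    assert_eq!(decode_txout_value(50 * 100_000_000), Ok(5_000_000_000));
    assert!(decode_txout_value(MAX_MONEY + 1).is_err());
    println!("ok");
}
```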
Any reason why `TxOut::value` is not `Amount`? As we argue about adding some checks to `Amount`, right now a user can just take `value: u64` straight out of `TxOut` without ever going through any `Amount`, start adding them as raw `u64`, and have overflow issues, no matter if `Amount` provides any protection or not.
If you accept the idea that `Amount` should be a universal type across the ecosystem, then it does seem meaningful to have `Amount<T = u64>`, allowing users to go to smaller or larger underlying types, without losing all the type-system machinery built to ensure their butts are always covered. Though maybe that's too much already and it makes things more confusing, not sure.
A combination of:
- `value: Amount` on `TxOut`
- overflow-checked `Amount`
- `Amount` used universally across the ecosystem
would guarantee that no Rust Bitcoin user will ever screw up with value integer overflow, IMO.
Wow such a long discussion...
My thoughts are:
- `Sats`, not `Amount`
- a `u64` type as a way of computing amounts for things that may overflow 21M - like aggregated data in blockchain. That's why we will need `to_u64` on `Sats`
- `Sats` should be used all across Tx, PSBT and similar structures, and promoted to the use case when the amount is related to the amount of satoshis for some procedure (and not aggregated data on transactions)
It seems this does not contradict anyone's position, or am I missing something?
I'm not convinced `Sats` and `Amount` should be two different types, unless you meant renaming. Casting `u64` to `Amount` is already quite annoying and I was hoping we could get rid of it if `Amount` was used in `TxOut`. But maybe more importantly, it's totally unclear from the name what the difference in semantics between `Sats` and `Amount` is.
If we want to define an aggregate amount type, it should be clear from the name that it is such, e.g. `AggregateAmount` or `AggregateSats` - maybe there's something shorter. `SatStats` is quite short but maybe confusing? Anyway, is there an actual application in the wild that computes statistics? I don't remember seeing any. I'm not happy with adding random "may be useful for someone" stuff while there's a bunch of stuff that's not present that some people actually need.
So I think we could just add the limit to `Amount` and a comment that it's unsuitable for statistics, with a suggestion to ping us if someone is writing an application that needs such a type.
I'm not convinced Sats and Amount should be two different types, unless you meant renaming.
I meant renaming. For "amount" overflowing 21M we already have u64 and u128 types
I would prefer living in a parallel universe with `Amount` being called `Sats`, but not sure if such wide breakage is worth it. We could phase it in with `#[deprecated = "Use Sats"] type Amount = Sats` and keep it that way for a reasonably long time.
For "amount" overflowing 21M we already have u64 and u128 types
I believe semantics-enforcing types are great but as I said not worth the trouble if nobody is using it.
Well, u64/u128 will be used for aggregated data only, which will not need to be printed out/parsed from "X BTC/sat" by default - and we may provide a simple `u64::rem_btc(self) -> (u32, u32)` method for those needing to convert a u64 sats aggregate into a tuple of full bitcoins and the satoshi remainder.
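The suggested helper might look roughly like this - the name and the `(u32, u32)` signature are taken from the comment above, everything else is an illustrative sketch (note the whole-bitcoin part would truncate for aggregates above `u32::MAX` bitcoins):

```rust
const SATS_PER_BTC: u64 = 100_000_000;

// Sketch of the proposed `rem_btc`: split a raw sats aggregate into
// (whole bitcoins, satoshi remainder). Hypothetical, not rust-bitcoin API.
fn rem_btc(sats: u64) -> (u32, u32) {
    ((sats / SATS_PER_BTC) as u32, (sats % SATS_PER_BTC) as u32)
}

fn main() {
    assert_eq!(rem_btc(250_000_000), (2, 50_000_000)); // 2.5 BTC
    assert_eq!(rem_btc(99_999_999), (0, 99_999_999));  // just under 1 BTC
    println!("ok");
}
```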
Casting `u64` to `Amount` is already quite annoying and I was hoping we could get rid of it if Amount was used in TxOut
The biggest problem with raw `u64` in `TxOut` is that it exposes users to carelessly introducing `u64`-overflow security vulnerabilities, by just jumping straight to math with `u64`, where Rust will remove the overflow check in release builds by default. The idiomatic Rust way is to guide the user of any API to do the right thing. Overflow protection should be explicitly opt-out, not opt-in.
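For reference, these are the std integer methods that make overflow handling explicit regardless of build profile (plain `+` panics in debug and wraps in release by default):

```rust
fn main() {
    let a = u64::MAX;

    // explicit, profile-independent alternatives to plain `a + 1`
    assert_eq!(a.checked_add(1), None);          // overflow becomes None
    assert_eq!(a.wrapping_add(1), 0);            // explicit two's-complement wrap
    assert_eq!(a.saturating_add(1), u64::MAX);   // clamp at the maximum
    assert_eq!(a.overflowing_add(1), (0, true)); // wrapped value plus overflow flag
    println!("ok");
}
```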
It seems this does not contradict to anyone position, or I am missing something?
I am still meh on the whole thing, but I don't think I have anything more to say, and I feel I exhausted my/your time already. It is not all that important anyway, except for the above point.
I've found some free time and wrote a PoC of the type-system overflow-checking enforcement / "monadic checked arithmetic": https://github.com/dpc/overflow-proof . I wanted to verify whether it is possible/practical and how ergonomic it would be.
I've found an interesting use case for values that are guaranteed to be < `MAX_MONEY`. Casting to `i64` is always safe for them. This is obviously useful for infallible conversion to `SignedAmount`, but a less obvious use case is storing in Postgres. Postgres only supports `i64`, so keeping the value in range guarantees storing will not fail due to the value being out of range.
Another case similar to the above: casting to msat is infallible if there's the `MAX_MONEY` limit. I find this really nice.
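Both observations check out with plain arithmetic (21M BTC in sats, times 1000 for msat, against the u64 and i64 ranges):

```rust
fn main() {
    const MAX_MONEY_SAT: u64 = 21_000_000 * 100_000_000;

    // sat -> msat is a *1000; the capped maximum still fits in u64
    assert_eq!(MAX_MONEY_SAT.checked_mul(1_000), Some(2_100_000_000_000_000_000));

    // and the capped maximum fits in i64 (e.g. Postgres BIGINT),
    // so the conversion is infallible for in-range values
    assert!(i64::try_from(MAX_MONEY_SAT).is_ok());
    assert!(i64::try_from(MAX_MONEY_SAT * 1_000).is_ok()); // even msat fits i64
    println!("ok");
}
```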
Yet another argument for the limit: if something like `rustc_layout_scalar_valid_range_end` gets stabilized we could add it and get layout optimizations (e.g. `Option<Amount>` would have the same size as `Amount`). Sure, it'll take years, but the limit has to be decided before 1.0.
If this type's primary use is for a UTXO's `value` then I guess the range check makes sense, but I would recommend renaming such a type (`OutputValue`? `TxOutValue`?) and/or making sure to put the exact meaning in the first line of the docstring. I guess what bothered me most is that `Amount` and other names suggest some general purpose use, while this type is intrinsically linked with its use in `TxOut`, and the user needs to understand its bit size and range limit. And I think type-system enforced overflow (over-range) checks would be valuable (or just have users convert to and from `u64` for doing math, and check in the `try_from`).
While I agree with the sentiment I'm not sure if "UTXO value" is a clear way of documenting this. E.g. BIP21 should also use this restricted value because it will be used to create UTXO but the relationship may not be so obvious to newbies. I don't have the mental capacity to come up with something significantly better right now, so feel free to suggest, will think about it later.
I'd like to move this forward. It seems we agree that having the check there is OK if it's properly documented. Renaming doesn't seem appealing, so I propose to keep the name but change the first line of the doc string (shown on the docs front page) to:
Unsigned bitcoin amount up to 21 million bitcoins
for `Amount` and:
Signed bitcoin amount between -21 million bitcoins and 21 million bitcoins inclusive
for `SignedAmount`.
With detailed explanation in subsequent paragraphs:
This type guarantees that the value doesn't exceed (-)21 million bitcoins and thus operators always panic on overflow (regardless of `debug_assertions` & co). However, `unsafe` code still must not rely on this property. Consider using `checked_*` or `saturating_*` methods if panics on overflows are not acceptable.
The type should be used in transaction outputs or other contexts where amounts over the limit indicate an error. It may not be suitable for aggregate data - in block explorers, accounting... Use primitive types or your own newtype for those cases.
~Do you want me to work this into #1162? I don't see 1162 going into this release so I have time to resolve the max_value/MAX thing and add your suggestion above.~
Just as a note, a `SignedAmount` with a cap of 21 million does not make a lot of sense, because what is this type used for? It's not a UTXO value. What was `SignedAmount` introduced for? If we introduce an invariant of 21 million to `Amount`, can we get rid of `SignedAmount` altogether?
Is it possible that the current `Amount` type is conflating two things:
- the `value` field in `TxOut`
- a general purpose amount
Perhaps if we had two types:
- `TxAmount` - an amount that can represent a UTXO value or total transaction value (hard cap of 21 million, only `checked_` ops)
- `Amount` - a general purpose integer type useful for any bitcoin application (signed, parsing, pretty printing, all the usual ops: checked, wrapping, etc., just like I just added for `u5`)

Is it possible that the current `Amount` type is conflating two things
Exactly-o.
a `SignedAmount` with a cap of 21 million does not make a lot of sense
I think you're right.
Perhaps if we had two types
Rather this:
use core::marker::PhantomData;

// lint to avoid confusion if someone accidentally creates `Amount` with useless purpose
pub struct Amount<P: Purpose = TxValue>(u64, PhantomData<P>);

pub trait Purpose: sealed::Purpose {}

pub enum TxValue {}
pub enum General {}

impl Purpose for TxValue {}
impl Purpose for General {}

impl sealed::Purpose for TxValue {
    const MAX_VAL: u64 = 21_000_000__000_000_00;
}
impl sealed::Purpose for General {
    const MAX_VAL: u64 = u64::MAX;
}

mod sealed {
    pub trait Purpose {
        const MAX_VAL: u64;
    }
}

impl<P: Purpose> Amount<P> {
    pub const MAX: Amount<P> = Amount(P::MAX_VAL, PhantomData);
    // all checks are based on MAX
}

// Keep most APIs the same, the difference is just max value.
// Amount<TxValue> will have try_from_sat instead of from_sat which will be on Amount<General>
// Also From<Amount<TxValue>> for Amount<General> and TryFrom<Amount<General>> for Amount<TxValue>
Interesting idea, cheers. I'll have a play with it.
IMO the job of checking the 21M limit logically belongs in one place: (de)serialization to/from the consensus layer. Seems like this new `Amount` already wraps the raw `u64` from the very moment of deserialization, so it can prevent any possibility of missing wrap-around handling, so I wonder what the 21M limit even gives us anymore? AFAIR, in the past, making it harder to get an unchecked wraparound in a release-mode build was the biggest argument for it.
I also wonder if having `SignedAmount` is needed at all. Is there a single bit of code in rust-bitcoin that actually needs it? Is it common enough in the ecosystem that it demands having it in `rust-bitcoin`? There is no such thing as negative amounts in the consensus. To me `rust-bitcoin` should have the "TXO value type" wrapping the `u64` and making sure users don't naively screw up overflow handling, and that's kind of it.
How about we just remove `SignedAmount`? That breaks this problem down a little bit.
@dpc sanity checking various other inputs (e.g. BIP21) and proving the specific variable was checked. In my experience it's better to hit errors sooner rather than later.
@dpc sanity checking various other inputs (e.g. BIP21) and proving the specific variable was checked.
BIP21? The payment URL? What exactly would we be checking? That someone didn't ask in the URL to send more than all BTC in existence ever?
I know it's a sacred number to Bitcoiners, but that 21M is just not a very useful check. The only interesting thing is overflow protection, and 21M was only tangentially related.
Actually yes, because if there is a large number, something is seriously screwed up. This brings the check from inside the code to the boundary where inputs are taken, so error handling is much nicer. Actually, implementing a wallet with BIP21 support means that the exact number given in the URI will eventually be placed in a tx output, so having the same restriction makes perfect sense.
Actually yes, because if there is a large number something is seriously screwed up.
If it was 19M BTC, or even 100K BTC it wouldn't?
Actually implementing a wallet with BIP21 support means that the exact number given in the URI will be eventually placed in tx output so having the same restriction makes perfect sense.
What actually makes sense is comparing the requested payment value with a total balance of inputs available, which must be done anyway in any sane wallet implementation, rendering this whole endeavor completely futile and redundant.
If it was 19M BTC, or even 100K BTC it wouldn't?
Maybe, but we can't tell for sure.
comparing the requested payment value with a total balance of inputs available
Yeah, but then you either have an `unwrap` in the code that people need to re-check every time the code changes, or funny error handling. Bubbling up types is great.
Yeah, but then you either have an `unwrap` in the code that people need to re-check every time the code changes or funny error handling. Bubbling up types is great.
The code of any sane wallet will have to check the value from the invoice against the available balance and handle the condition. It is going to need to be able to bubble this error up, all the way to the user. That is the only check that was ever needed, infinitely more strict and precise. Having a superfluous arbitrary limit does not make anything simpler.
I'm kind of giving up on it. To me it seems that this meaningless 21M number is too hard to let go of, and it's too tempting to be smart about knowing it and thus putting it somewhere. Seems like most people believe it just has to be good to have it.
So we end up contemplating two types and even considering some sealed generic shenanigans, for something that could have been just a single `u64` wrapped in an overflow-checking newtype. Not the end of the world, I guess.
FWIW, after reading this thread multiple times and working on various things in `Amount`, `SignedAmount`, and other integer types, I tend to agree with @dpc. I don't really feel like I have much to add to the conversation, but I wanted to register my gut feeling that `Amount` should just wrap a `u64` and have nothing to do with 21 million.
After reviewing https://github.com/rust-bitcoin/rust-bitcoin/pull/1338 I realized that monadic handling is probably really better.
In that PR, overflow checking was initially forgotten and even if it did panic, it'd lead to a DoS vector - someone sending a bad PSBT could cause a service to crash. Monadic handling fixes this. I'd prefer a different design than the PoC crate by @dpc though:
// Maybe this could be just `Result<Amount, OverflowError>` but I'm not sure if coherence will let us
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd, Hash)] // no Eq, no Ord
pub enum UncheckedAmount {
    Valid(Amount),
    Overflow(OverflowError),
}

impl UncheckedAmount {
    pub fn to_result(self) -> Result<Amount, OverflowError> {
        match self {
            UncheckedAmount::Valid(amount) => Ok(amount),
            UncheckedAmount::Overflow(error) => Err(error),
        }
    }

    // unwrap and expect analogous to those on Result
}

/// Information about overflow
///
/// With `extra-debug-info` feature it carries the location of the code which caused overflow.
/// This will make error paths slower but much easier to debug.
// make sure `Debug` writes location in a usable format - if not, write it manually.
#[derive(Copy, Clone, Debug, Hash)]
#[non_exhaustive] // this is critical to avoid problems with the feature being on or off.
pub struct OverflowError {
    #[cfg(feature = "extra-debug-info")]
    location: &'static core::panic::Location<'static>,
}

// impl PartialEq, PartialOrd by always returning false/None
// impl Display and std::error::Error; conditionally display panic location (maybe with `#` though?)
The conditional inclusion of overflow location resolves one of my main dislikes about monadic handling. (The other one being non-standard Rust, which I think is outweighed by the added security here.)
Nice @Kixunil, I like this a lot. I agree that trying to reason "by hand" about overflow in all cases is tiring and difficult to get right even when you notice all of the locations of potential overflow.
`Amount` type should definitely be improved with `MoneyRange`-like functionality, since we guarantee at the API level that `Amount` can't exceed the consensus value (having the internal value made private).
From here: https://github.com/rust-bitcoin/rust-bitcoin/pull/599#pullrequestreview-649739700