Closed: peddie closed this issue 5 years ago
I am going to be offline for the weekend (which begins shortly here in UTC+10), but here is the program I used to generate the table. Hope it can help!
https://gist.github.com/peddie/63c2dc942c2535d7e1e9afe8d9bf4c69
You nailed it! Awesome.
The numbers look good to me, but of course, that doesn't mean they are correct. Can you use your MPFR-fu to validate these? One idea can be to check the nextafter and make sure the value you obtain results in undefined behavior after the conversion :-)
OK, that was a bad joke. Maybe check that nextafter in both directions when converted at least don't match minBound/maxBound for that type? (Of course, undefined means it might match, but that could be a good start.)
It might also be possible to directly use the corresponding x86 instruction, and make sure that the nextafter values raise the corresponding floating-point flag (underflow/overflow), but that requires some research into how to code/check for those things in C unless you're already familiar with that vernacular. (Use inline assembly to enable exceptions, trap them, set the rounding mode, issue the instruction, check the flag... That's roughly the way it would go, but I haven't done that sort of coding in a long time and would need to do a bunch of googling first.)
@peddie
I think there's light at the end of the tunnel. I reworked the entire conversion mechanism. To do so, we really have to state precisely what these functions do. I wrote this down in the source code:
https://github.com/LeventErkok/sbv/blob/master/Data/SBV/Core/Floating.hs#L155-L180
I've also added testing functions sFloatInRange and sDoubleInRange to the IEEEFloatConvertible class, which end-users can employ to do their own custom processing. (This would be handy if we have a bug somewhere, or if they want to do something special with out-of-bounds values.)
I trust the implementation now matches what we state there. I have good confidence that this is what Haskell does, and what SBV models, both in constant folding and in translation to SMTLib. I'd also like to think that the C translation that falls out of this implements the same, but I've been wrong about that before. So, it'd be great if you can validate that to gain some confidence.
Specifically, I think it'd be great if you can:
Make sure the boundaries table you produced (great job!) is actually good. (i.e., there were no z3 optimization gotchas or anything.) The table is here: https://github.com/LeventErkok/sbv/blob/master/Data/SBV/Utils/FloatConversionBounds.hs
I'm hoping you'll be able to use your MPFR skills to ensure the bounds are indeed correct.
Make sure the generated C code actually does what we claim as the semantics of these operations. (I know that they all compile cleanly as I have a test case for that; but I don't have any easy way to make sure they all produce the exact same value as Haskell would.)
Once we gain some confidence into these two items, I think we can close this ticket.
The numbers look good to me, but of course, that doesn't mean they are correct. Can you use your MPFR-fu to validate these?
Yes. I'm still having trouble with a few conversions to uint16_t for some reason which I think is internal to MPFR, but I think the rest of the constants are close at hand.
One idea can be to check the nextafter and make sure the value you obtain results in undefined behavior after the conversion :-)
This may have been a joke, but you may be aware that GCC and clang both provide runtime instrumentation via -fsanitize=undefined that is supposed to warn exactly when undefined behavior occurs. I thought this might solve our problem, but after some testing, I don't think it's reliable enough; clang's doesn't warn in many situations where FE_INEXACT is raised by a direct cast, and sometimes it does warn when FE_INEXACT is not raised (e.g. (uint8_t) 256.1; see below for more about this). GCC's never warns at all!
It might also be possible to directly use the corresponding x86 instruction, and make sure that the nextafter values raise the corresponding floating-point flag (underflow/overflow), but that requires some research into how to code/check for those things in C unless you're already familiar with that vernacular.
This is a great idea. Unfortunately, neither of my C compilers seems to provide a RoundNearestTiesToAway mode, since the IEEE standard states that it's optional for a binary implementation, so I haven't done anything with this mode yet (I have not yet resorted to inline assembly, though maybe I should). The method I'm using for the other modes is a little sketchy: I take the limiting value of the floating type, assign it to a variable of the appropriate integral type, check that no FP exception has been set, call nextafter() in the appropriate direction to get the first out-of-bounds floating value, then repeat the assignment-with-cast and check that FE_INEXACT or FE_INVALID has been set. Why FE_INEXACT? I haven't found any documentation on the behavior of type casting with regard to floating-point exceptions (my man page for math_error(7) doesn't say a word about what FE_INEXACT means!). I find in the IEEE standard:
When a NaN or infinite operand cannot be represented in the destination format and this cannot otherwise be indicated, the invalid operation exception shall be signaled. When a numeric operand would convert to an integer outside the range of the destination format, the invalid operation exception shall be signaled if this situation cannot otherwise be indicated.
But FE_INVALID is only set in a handful of cases. So there must be some other way to indicate it. From [GCC's docs on FP exceptions](https://www.gnu.org/software/libc/manual/html_node/Rounding.html#Rounding):
This exception is signalled if a rounded result is not exact (such as when calculating the square root of two) or a result overflows without an overflow trap.
I don't seem to have been able to enable trapping on FE_OVERFLOW with the GNU-specific feenableexcept(), so I can't really confirm what's going on here. But how poetic, to have to rely on undocumented behavior to work around undefined behavior!
The results of my test are dependent on optimization setting and compiler, but with GCC I was able to confirm the expected rounding behavior in all but the following cases:
double -> uint64_t upper FE_TONEAREST
double -> uint64_t upper FE_UPWARD
double -> uint64_t upper FE_DOWNWARD
double -> uint64_t upper FE_TOWARDZERO
double -> int64_t upper FE_TONEAREST
double -> int64_t lower FE_TONEAREST
double -> int64_t upper FE_UPWARD
double -> int64_t lower FE_UPWARD
double -> int64_t upper FE_DOWNWARD
double -> int64_t lower FE_DOWNWARD
double -> int64_t upper FE_TOWARDZERO
double -> int64_t lower FE_TOWARDZERO
In these cases, no FP exception of any kind is set as a result of the second rounding operation. clang claims failures on many more conversions, with FE_INVALID not set on many (but not all) upper-bound checks and no exception set at all on all other failures (these shown below):
float -> uint8_t upper FE_UPWARD
float -> uint8_t lower FE_DOWNWARD
float -> uint16_t upper FE_UPWARD
float -> uint16_t lower FE_DOWNWARD
float -> uint32_t upper FE_TONEAREST
float -> uint32_t upper FE_UPWARD
float -> uint32_t upper FE_DOWNWARD
float -> uint32_t lower FE_DOWNWARD
float -> uint32_t upper FE_TOWARDZERO
float -> uint64_t lower FE_DOWNWARD
float -> int8_t upper FE_UPWARD
float -> int8_t lower FE_DOWNWARD
float -> int16_t upper FE_UPWARD
float -> int16_t lower FE_DOWNWARD
float -> int32_t lower FE_TONEAREST
float -> int32_t lower FE_UPWARD
float -> int32_t upper FE_DOWNWARD
float -> int32_t lower FE_DOWNWARD
float -> int32_t upper FE_TOWARDZERO
float -> int32_t lower FE_TOWARDZERO
float -> int64_t lower FE_TONEAREST
float -> int64_t lower FE_UPWARD
float -> int64_t upper FE_DOWNWARD
float -> int64_t lower FE_DOWNWARD
float -> int64_t upper FE_TOWARDZERO
float -> int64_t lower FE_TOWARDZERO
Note that none of the failures overlap between the two compilers. These results are for -O2 -mfpmath=sse -fno-builtin in all cases.
[Here is the code for this program](https://gist.github.com/peddie/f6d4285b7b7a2cdbfea055ed45da075e). I apologize for the structure; it was hastily generated from the original SBV table with an emacs macro, so it's a bit tough to inspect the constants. But I know I'm getting "failures" above from constants that are really easy to check visually.
I was hoping that working entirely in C could give us some useful results, as it's the only language I know which even provides access to these modes and exceptions. If you can shed any light on what I should expect to happen in this situation or anything wrong with my approach, I'd be happy to revisit this method. Barring that, it looks like this can't really tell us anything reliable, so I will press on with MPFR and assembly as time permits.
(Use inline assembly to enable exceptions, trap them, set rounding mode, issue the instruction, check the flag.. That's roughly the way it would go, but I haven't done that sort of coding in a long time and would need to do a bunch of googling first.)
Same here, unfortunately, so I'll revisit MPFR first.
Great analysis! Looking forward to seeing what you find out.
Just FYI: I'm having second thoughts about how the SBV translation is actually done. I want to think a bit further about this; I believe I can simplify it. So, hold off on validating the SBV->C translation. But the bounds information would be nice to have, because I think it's good to have that regardless of how we do the translation.
I should have something solid in the next couple of hours.
@peddie
I had to rework the conversions because it wasn't really reflecting what Haskell is doing. Here's the new spec:
https://github.com/LeventErkok/sbv/blob/master/Data/SBV/Core/Floating.hs#L155-L203
Unfortunately, I had to leave one case unspecified: If the float doesn't fit into the target type, and it also doesn't fit into the Int64 range (regardless of the signedness of the target), then the result is undefined. The issue is that there's no way to get the Z3 or C translation to work correctly in this case: The C case follows from undefinedness. The SMTLib case follows because the conversion is unspecified, and the correct way to do this would be to convert the float to a Real and then to an Int, and then to a bit-vector, and I know for a fact that float-to-real conversion (while defined in SMTLib) isn't really supported by z3 or mathsat. It's just too difficult for solvers to deal with this case.
Other than this one case, I think the semantics now capture Haskell's behavior, which is SBV's goal. I also trust this new encoding will produce "correct" C code; I'm hoping you'll validate that.
The nice thing I like about the new design is that it gives the user a choice: The two new predicates sFloatInRange and sDoubleInRange (together with fpIsNaN and fpIsInfinite) give you the control to check and do the conversions as you wish in user code. I understand this is important to you.
I'm still hoping you'll validate the bounds and the generated C code, though I'll leave it up to you to decide how much effort is worth it. I'm satisfied with the current state, unless you point out a bug that goes against the spec as described now.
Take a look and let me know what you think!
I was able to work around the problem I was having with MPFR and generate a table of all the constants we're looking for. I'm assert()ing when generating this table that (a) the MPFR value for a boundary fits() into the desired integral type and (b) the MPFR conversion of the nextafter[f]() value in the desired floating type does not fits(), so if my logic is correct, this should be as good as MPFR knows how to do. Here is the code if you're interested.
Some of these values don't match Z3's results, and in the cases I can easily inspect, I think MPFR is wrong and Z3 is correct. Take for example the second value in the first and second lines (float -> uint8_t): RoundNearestTiesToEven and RoundNearestTiesToAway should agree about whether a tie should be rounded downwards to an odd number (255), as Z3 claims, but MPFR has different values in these two cases.
I'm not sure there's more to be gained from MPFR in the near term. I was hoping everything would line up, but the discrepancies I've looked at seem likely to be MPFR doing nearly the right thing but not quite. If I can figure out how to interact with the MPFR project I will ask about some of the discrepancies which don't make sense to me. Meanwhile, I think I'll get your latest version of SBV and examine its behavior for problems with code generation.
False 8 RoundNearestTiesToEven ( -0x1p-1 ,0x1.fefffep+7 ), ( -0x1p-1 ,0x1.fefffffffffffp+7 )
False 8 RoundNearestTiesToAway ( -0x0p+0 ,0x1.fep+7 ), ( -0x0p+0 ,0x1.fep+7 )
False 8 RoundTowardPositive ( -0x1.fffffep-1 ,0x1.fep+7 ), ( -0x1.fffffffffffffp-1 ,0x1.fep+7 )
False 8 RoundTowardNegative ( -0x0p+0 ,0x1.fffffep+7 ), ( -0x0p+0 ,0x1.fffffffffffffp+7 )
False 8 RoundTowardZero ( -0x1.fffffep-1 ,0x1.fffffep+7 ), ( -0x1.fffffffffffffp-1 ,0x1.fffffffffffffp+7 )
False 16 RoundNearestTiesToEven ( -0x1p-1 ,0x1.fffefep+15 ), ( -0x1p-1 ,0x1.fffefffffffffp+15 )
False 16 RoundNearestTiesToAway ( -0x0p+0 ,0x1.fffep+15 ), ( -0x0p+0 ,0x1.fffep+15 )
False 16 RoundTowardPositive ( -0x1.fffffep-1 ,0x1.fffep+15 ), ( -0x1.fffffffffffffp-1 ,0x1.fffep+15 )
False 16 RoundTowardNegative ( -0x0p+0 ,0x1.fffffep+15 ), ( -0x0p+0 ,0x1.fffffffffffffp+15 )
False 16 RoundTowardZero ( -0x1.fffffep-1 ,0x1.fffffep+15 ), ( -0x1.fffffffffffffp-1 ,0x1.fffffffffffffp+15 )
False 32 RoundNearestTiesToEven ( -0x1p-1 ,0x1.fffffep+31 ), ( -0x1p-1 ,0x1.fffffffefffffp+31 )
False 32 RoundNearestTiesToAway ( -0x0p+0 ,0x1.fffffep+31 ), ( -0x0p+0 ,0x1.fffffffep+31 )
False 32 RoundTowardPositive ( -0x1.fffffep-1 ,0x1.fffffep+31 ), ( -0x1.fffffffffffffp-1 ,0x1.fffffffep+31 )
False 32 RoundTowardNegative ( -0x0p+0 ,0x1.fffffep+31 ), ( -0x0p+0 ,0x1.fffffffffffffp+31 )
False 32 RoundTowardZero ( -0x1.fffffep-1 ,0x1.fffffep+31 ), ( -0x1.fffffffffffffp-1 ,0x1.fffffffffffffp+31 )
False 64 RoundNearestTiesToEven ( -0x1p-1 ,0x1.fffffep+63 ), ( -0x1p-1 ,0x1.fffffffffffffp+63 )
False 64 RoundNearestTiesToAway ( -0x0p+0 ,0x1.fffffep+63 ), ( -0x0p+0 ,0x1.fffffffffffffp+63 )
False 64 RoundTowardPositive ( -0x1.fffffep-1 ,0x1.fffffep+63 ), ( -0x1.fffffffffffffp-1 ,0x1.fffffffffffffp+63 )
False 64 RoundTowardNegative ( -0x0p+0 ,0x1.fffffep+63 ), ( -0x0p+0 ,0x1.fffffffffffffp+63 )
False 64 RoundTowardZero ( -0x1.fffffep-1 ,0x1.fffffep+63 ), ( -0x1.fffffffffffffp-1 ,0x1.fffffffffffffp+63 )
True 8 RoundNearestTiesToEven ( -0x1.01p+7 ,0x1.fdfffep+6 ), ( -0x1.01p+7 ,0x1.fdfffffffffffp+6 )
True 8 RoundNearestTiesToAway ( -0x1p+7 ,0x1.fcp+6 ), ( -0x1p+7 ,0x1.fcp+6 )
True 8 RoundTowardPositive ( -0x1.01fffep+7 ,0x1.fcp+6 ), ( -0x1.01fffffffffffp+7 ,0x1.fcp+6 )
True 8 RoundTowardNegative ( -0x1p+7 ,0x1.fffffep+6 ), ( -0x1p+7 ,0x1.fffffffffffffp+6 )
True 8 RoundTowardZero ( -0x1.01fffep+7 ,0x1.fffffep+6 ), ( -0x1.01fffffffffffp+7 ,0x1.fffffffffffffp+6 )
True 16 RoundNearestTiesToEven ( -0x1.0001p+15 ,0x1.fffdfep+14 ), ( -0x1.0001p+15 ,0x1.fffdfffffffffp+14 )
True 16 RoundNearestTiesToAway ( -0x1p+15 ,0x1.fffcp+14 ), ( -0x1p+15 ,0x1.fffcp+14 )
True 16 RoundTowardPositive ( -0x1.0001fep+15 ,0x1.fffcp+14 ), ( -0x1.0001fffffffffp+15 ,0x1.fffcp+14 )
True 16 RoundTowardNegative ( -0x1p+15 ,0x1.fffffep+14 ), ( -0x1p+15 ,0x1.fffffffffffffp+14 )
True 16 RoundTowardZero ( -0x1.0001fep+15 ,0x1.fffffep+14 ), ( -0x1.0001fffffffffp+15 ,0x1.fffffffffffffp+14 )
True 32 RoundNearestTiesToEven ( -0x1p+31 ,0x1.fffffep+30 ), ( -0x1.00000001p+31 ,0x1.fffffffdfffffp+30 )
True 32 RoundNearestTiesToAway ( -0x1p+31 ,0x1.fffffep+30 ), ( -0x1p+31 ,0x1.fffffffcp+30 )
True 32 RoundTowardPositive ( -0x1p+31 ,0x1.fffffep+30 ), ( -0x1.00000001fffffp+31 ,0x1.fffffffcp+30 )
True 32 RoundTowardNegative ( -0x1p+31 ,0x1.fffffep+30 ), ( -0x1p+31 ,0x1.fffffffffffffp+30 )
True 32 RoundTowardZero ( -0x1p+31 ,0x1.fffffep+30 ), ( -0x1.00000001fffffp+31 ,0x1.fffffffffffffp+30 )
True 64 RoundNearestTiesToEven ( -0x1p+63 ,0x1.fffffep+62 ), ( -0x1p+63 ,0x1.fffffffffffffp+62 )
True 64 RoundNearestTiesToAway ( -0x1p+63 ,0x1.fffffep+62 ), ( -0x1p+63 ,0x1.fffffffffffffp+62 )
True 64 RoundTowardPositive ( -0x1p+63 ,0x1.fffffep+62 ), ( -0x1p+63 ,0x1.fffffffffffffp+62 )
True 64 RoundTowardNegative ( -0x1p+63 ,0x1.fffffep+62 ), ( -0x1p+63 ,0x1.fffffffffffffp+62 )
True 64 RoundTowardZero ( -0x1p+63 ,0x1.fffffep+62 ), ( -0x1p+63 ,0x1.fffffffffffffp+62 )
Thanks Matt. I think there's enough evidence here that the Z3 values are good. And if you do confirm there are MPFR bugs, I think the z3 folks would love to hear that their tool was instrumental in that; if you pursue that, please keep me and the z3 folks in the loop.
Regarding SBV: Yes; it'll be great to get some confidence that the C translation does the right thing. I trust you noticed the caveat about the out-of-bounds Int64 range, right? I had to make that underspecified as I couldn't get any proofs going with z3. (Note that Word64 is fine; the unspecified case is when the value is out of range for the target and out of range for Int64. If you've got either, the translation should work fine.)
The changes you've made look great to me. I think I'll have a much easier time working with the generated code now, as the scope of undefined behavior is greatly reduced. I will let you know in the next few days whether I find any problems on the C side.
Unfortunately, I had to leave the one case unspecified: If the float doesn't fit into the target type, and if it doesn't also fit into Int64 range (regardless of the signedness of the target), then the result is undefined. The issue is that there's no way to get Z3 or C translation to work correctly in this case: The C-case follows from undefinedness. SMTLib case follows because the conversion is unspecified, and the correct way to do this would be to convert the float to a Real and then to an Int, and then to a bit-vector, and I know for a fact that float-to-real conversion (while defined in SMTLib) isn't really supported by z3 or mathsat. It's just too difficult for solvers to deal with this case.
This makes sense; not sure what else you can do (at least on the C side) when faced with this issue.
The nice thing that I like about the new design is that it gives the user a choice: The two new predicates sFloatInRange and sDoubleInRange (together with fpIsNaN, fpIsInfinite) give you the control to check and do the conversions as you wish in the user code. I understand this is important to you.
This is indeed pretty convenient for my use case (and it matches mpfr_fits_*_p(), which turned out to be handy).
@peddie
Hi Matt. I'll put out an SBV release this weekend, and I'm afraid the next release after that won't be for quite a while. It'd be good to close this ticket before the release goes out. For the purposes of this, perhaps we can just assume the constants as computed now are good, and just make sure that the C code that gets generated is good? If you can at least run some cases and convince yourself of the C output, it'd be great. (They should all generate very similar code anyhow, aside from the particular bounds used.)
I'm assuming the constants are OK for now; I simply haven't had time to move our internal system to the latest SBV and run it through the gauntlet yet due to other responsibilities. It shouldn't take too long once I can get to it, and I will do my best to get this done before your weekend begins. Thanks for the heads-up.
Great!
Since SBV only generates C code when the rounding mode is RoundNearestTiesToEven, the number of cases you need to test is rather small actually. And it always generates the "same" kind of code. I think the following combos would do:
Float -> SWord8, SInt8, SWord64, SInt64
Double -> SWord8, SInt8, SWord64, SInt64
So, essentially conversions to signed/unsigned 8 and 64 would cover it all, I think. Of course, if you automate, then you can go for all 16 combinations by adding SInt/Word16 and SInt/Word32.
I'm starting on this work now, and I'm finding sFloatInRange and sDoubleInRange not quite sufficient for my use case. There are two reasons for this:
- Each identifies whether the input floating value is within the bounds from the constant table, but this does not match the logic in genericFPConverter's resultFromIntermediate; some values for which sFloatInRange returns False will be rounded (safely, with defined behavior) if you actually call fromSFloat on them.
- Neither lets me know in which direction I'm out of the range of the result type.
In my ideal world, I'd be able to get all the information at play here through the public interface:
- Is the value within the table bounds (sFloatInRange)?
- Can the value be converted without undefined behavior (sFloatSafelyRoundable or something?)
- In which direction is the value out of range (data OutOfBounds = BelowMinBound | AboveMaxBound and sFloatOutOfRange :: SFloat -> Maybe OutOfBounds, sFloatNotSafelyRoundable :: SFloat -> Maybe OutOfBounds)?
But I understand that as described, this may be a bit more complex than most users want. The function I have implemented on my end is equivalent to sFloatNotSafelyRoundable, so I can find out whether I'm in danger of undefined behavior and know what to do to avoid it all at once.
Good points all around!
I've added new predicates:
sFloatOverflow, sFloatUnderflow, sDoubleOverflow, sDoubleUnderflow
These should tell you if you can underflow/overflow in each case. sFloatInRange and sDoubleInRange still exist, but are now defined in terms of the above in the obvious sense.
(Note that it's not possible to give Maybe results or things like data OutOfBounds = BelowMin | AboveMax in a symbolic context. These functions will work properly when the rounding mode and the inputs are possibly symbolic.)
Regarding the conversion via intermediate: That one is on purpose, to match the Haskell semantics. I'm not saying what Haskell is doing is reasonable, but SBV has a goal of matching what Haskell does as closely as possible. (This isn't always possible, as we found out when the float is out-of-range for Int64, but I don't know how to overcome that.)
Will these additions be sufficient for your purposes? Also, it would be awesome to have some confirmation that the C output preserves the claimed semantics.
Also, just exported conversionBounds from Data.SBV.Internals, in case you need the bounds for some future purpose.
Thanks very much for those changes. Especially with conversionBounds in hand, I shouldn't have any further issues.
The C code looks pretty good so far (I've run a few hundred random cases). I will do (heaps) more tests tomorrow and report back.
Wonderful! Looking forward to hearing about those test results, hopefully, it'll all go well.
I do need to make an SBV release by this Saturday sometime, so it would be great to at least know that the C translation is not "obviously broken."
I wasn't able to do quite as many tests today as I'd planned, for various reasons, but I still haven't found any issues with the generated C by testing (and I do know that plenty of cases involving rounding behavior occurred). As with any randomized testing system, I can never be totally sure. I think there's "nothing obviously wrong."
I began preparing my MPFR bug report and discovered that in fact some of our Z3-derived constants are incorrect. I'll give one case as an example: going from Float to Int32 using RoundNearestTiesToEven. Here is a table of the relevant values (great job on crackNum, by the way; it's really great!):
Value | Hex | Decimal | Binary
-------------------------------------------------------------------------------
Z3 | 0x1p+31 | 2147483648.00 | 0 10011110 00000000000000000000000
INT32_MAX | | 2147483647 |
MPFR | 0x1.fffffep+30 | 2147483520.00 | 0 10011101 11111111111111111111111
If we try to round 0x1p+31 using rintf(), we'll of course get back the same decimal value as above. It's clearly greater than INT32_MAX. I'm not sure any rounding (in the IEEE 754 sense of having to choose which 32-bit Float to use to represent a more precise result) is actually invoked here; this jibes with the Z3 and MPFR tables having identical values for all rounding modes. If I cast rintf(0x1p+31) to int32_t and run the program with the undefined behavior sanitizer enabled, I do get a warning (and a negative result from the cast). If I ask GHC about this situation, it agrees (as does SBV, in spite of the table):
*Data.SBV> let x = 0x1.fffffep+30 :: SFloat
*Data.SBV> let xi = fromSFloat sRNE $ fpRoundToIntegral sRNE x :: SInt32
*Data.SBV> x
2.1474835e9 :: SFloat
*Data.SBV> xi
2147483520 :: SInt32
*Data.SBV> sFromIntegral xi :: SFloat
2.1474835e9 :: SFloat
*Data.SBV> let y = 0x1p31 :: SFloat
*Data.SBV> let yi = fromSFloat sRNE $ fpRoundToIntegral sRNE y :: SInt32
*Data.SBV> y
2.1474836e9 :: SFloat
*Data.SBV> yi
-2147483648 :: SInt32
*Data.SBV> sFromIntegral yi :: SFloat
-2.1474836e9 :: SFloat
Are we missing another constraint from our optimization problem? Maybe we need to add a constraint about round-tripping or something? The floating-point value after fpRoundToIntegral should be required to match the floating-point value converted from the rounded integer value, perhaps? I'll try it.
I'm terribly sorry to give this news at the eleventh hour. I should be able to give this my full attention tomorrow (my Saturday).
Hi Matt! Great analysis!
I agree that it's bizarre to map to a value that's in bounds, but not what you'd expect. So, perhaps MPFR has a point on this.
I'm not sure how to get z3 to compute the MPFR bounds. Will need to do some thinking, but please charge ahead. You've a much better understanding of this than I do.
I have identified a (very minor) issue with the C code:
> fmap (flip showHFloat "") $ unliteral (sFromIntegral (-(1 :: SWord64)) :: SDouble)
Just "0x1.fffffffffffffp63"
Then compile it:
> compileToC Nothing "test" $ do { input <- cgInput "input" :: SBVCodeGen SWord64; let { out = sFromIntegral (-input) :: SDouble }; cgReturn out }
...
SDouble test(const SWord64 input)
{
const SWord64 s0 = input;
const SWord64 s1 = (- s0);
const SDouble s2 = (SDouble) s1;
return s2;
}
When I run this program:
test(1) : 0x1p+64 (1.8446744073709552e+19)
So it's a one-bit mismatch.
This one doesn't even use toSFloat/toSDouble. Sigh. But it's actually a pure Haskell problem, no?
Prelude Data.Word Numeric> flip showHFloat "" $ (fromIntegral (-(1 :: Word64)) :: Double)
"0x1.fffffffffffffp63"
Versus:
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
uint64_t inp = 1;
uint64_t ninp = - inp;
double out = (double) ninp;
printf("%a\n", out);
}
which prints:
0x1p+64
Are these "corresponding" programs in Haskell/C in your mind? If not, what should be the correct translation? (Note that there's no SBV involved in this one at all!)
Yeah, I agree that this is just a difference in how C and Haskell do their rounding. I think it will only matter in this edge case, but I suppose since the value 0x1p64 is inside the bounds of the "defined rounding behavior" table, I had hoped that the generated C might match Haskell/SBV in this case. I will work around it myself.
What would be the correct translation that avoids this?
I think it's likely to be more complex than is worth it in the general case. Haskell and C agree on what -1 :: Word64 ought to be; it's just to do with conversion to double. I'm trying to dig into how this happens (I believe in GMP, which doesn't sound promising).
Some additional food for thought: in a number of these mismatches I've come across, the mismatch stems from SBV doing constant folding in Haskell and generating a reduced C program; if the full program is generated for some input variable (rather than an inlined constant) and applied to the original constant value in C, I get a different result. My understanding is that it would be quite difficult to avoid this behavior in SBV, since it's so efficient about recovering sharing and the like immediately (rather than in a later pass before a compiler backend or something). But it's a bit of a complicating factor even if you're OK with the program as run in Haskell or in Z3 not matching the program as run in C.
That's absolutely correct. Constant folding aggressively is essential for getting any sort of performance out of SBV. And these mismatches always come about when Haskell/SMTLib/C don't agree. The trick then becomes to carefully avoid those. It's the Haskell/SMTLib mismatches I'm most worried about, since the whole goal of SBV is to reason about Haskell; I'd like C to have the same meaning, but it has received less attention. And floating-point by itself is a beast to start with.
I'm having some trouble looking into the table calculation. It seems like Z3 is contradicting itself. Could this have to do with the new blast-based Metric for floating-point types?
> optResult <- optimizeWith z3 Lexicographic $ do { x <- free "x" :: Symbolic SDouble; maximize "x" x; let { rounded = observe "rounded" $ fpRoundToIntegral sRNE x; converted = observe "converted" $ fromSDouble sRNE rounded :: SInt64; roundTripped = observe "roundTripped" $ sFromIntegral converted :: SDouble; }; constrain $ roundTripped .== rounded }
> let xout = case optResult of { LexicographicResult m -> maybe (error "disaster!") id $ getModelValue "x" m; _ -> error "oh no!" } :: Double
> showHFloat xout ""
"0x1p63"
> prove $ do { x <- free "x" :: Symbolic SDouble; constrain $ x .== literal xout; let { rounded = observe "rounded" $ fpRoundToIntegral sRNE x; converted = observe "converted" $ fromSDouble sRNE rounded :: SInt64; roundTripped = observe "roundTripped" $ sFromIntegral converted :: SDouble; }; pure $ roundTripped .== rounded }
Falsifiable. Counter-example:
converted = -9223372036854775808 :: Int64
roundTripped = -9.223372036854776e18 :: Double
rounded = 9.223372036854776e18 :: Double
x = 9.223372036854776e18 :: Double
> satResult <- sat $ do { x <- free "x" :: Symbolic SDouble; constrain $ x .== literal xout; let { rounded = observe "rounded" $ fpRoundToIntegral sRNE x; converted = observe "converted" $ fromSDouble sRNE rounded :: SInt64; roundTripped = observe "roundTripped" $ sFromIntegral converted :: SDouble; }; pure $ roundTripped .== rounded }
> flip showHFloat "" <$> (getModelValue "x" satResult :: Maybe Double)
Just "0x1p63"
The theorem is the same as the optimization problem, with the maximize "x" x replaced by constrain $ x .== literal xout and the constrain $ roundTripped .== rounded replaced by pure $ roundTripped .== rounded. Have I done something underspecified?
Well, look at the last 4 lines of the conversions table. It says everything in the range -1p63 to 1p63 is safe. Even for float. That's just wrong, isn't it?
Here're the lines for reference: https://github.com/LeventErkok/sbv/blob/master/Data/SBV/Utils/FloatConversionBounds.hs#L63-L66
I don't think this has to do with the Metric expansion; something else is going on. I don't trust these numbers anymore. Hmm, back to square one.
One thing I'm noticing is that we can't really use SBV to compute these numbers! Because the construction of the problem relies on these numbers! Kind of a chicken-and-egg problem. Does that make sense?
We need to directly use z3 or figure out some other way.
There's also this: https://gforge.inria.fr/tracker/index.php?func=detail&aid=6212&group_id=136&atid=619
I'm now really hesitant to include these bounds-checking predicates, and the conversion table altogether, in SBV.
Instead, I'm inclined to go back to the original version of these definitions, but put a comment saying:
This is less functionality, but it is honest and clear. (And implementable in the obvious way!)
All the "inRange"/"overflow"/"underflow" predicates would go away. Note that those can later be defined outside of SBV, if we can eventually figure out a "correct" bounds table somehow.
Unless I hear otherwise from you, I'll implement this and put out release 8.1. What do you think?
I think if you're in a hurry to get the release out, definitely go ahead with the 8.0 behavior and thorough docs. It seems like we know there should be a better way, but we clearly don't know exactly how to do it yet, so it doesn't make sense to throw it out there.
I can ask on the MPFR list whether there are still concerns related to that ticket. I keep testing edge values hoping to catch them out on something I know for sure is wrong, but so far it's been right in every case. I notice now, though, that the MPFR table also features `-0x1p63` for `Float -> Int64` conversions, which seems pretty unlikely. I'll look at that next.
I've found no other issues with the C generation so far either (not that it matters if you are planning to roll back).
I'd trust MPFR for float/double. (Though they might have issues for other precisions, but that's not really our concern.)
I'd like the predicates to be present in SBV, but I agree that it won't happen until we can get those bound numbers computed somehow. I'll put a release out by rolling back the definitions to 8.0 essentially, and perhaps predicates can make it to 8.2; though I'm not sure when that'll be a possibility.
The reason I'm rushing is that I'm switching jobs and starting a new one on Monday, and unfortunately my new employer isn't quite so lenient about open-source work. So, it isn't clear to me when there'll be an 8.2.
But if you ever construct that table and create the correct overflow/underflow predicates, I'd strongly encourage you to put that on to hackage as a separate package built on top of SBV; and users can be very precise about conversions that way. And perhaps one day that table can be integrated into SBV proper.
@peddie
Hi Matt.. I've completely gone back on the semantics, and made it completely unspecified if the input is out-of-bounds for the target. I've also changed it so that even `-oo`, `oo`, and `NaN` are unspecified now. It really doesn't make sense to map them to `0`. (I think my original motivation was that that's what Haskell did, but in hindsight, it's just as meaningless.)
Furthermore, SBV performs no constant folding when given a float/double: they all go symbolic. This was a necessary change to make sure that people don't trip up over different meanings depending on whether constant folding kicks in, since constant folding is really not something users have any control over. I think this'll simplify your life as well to a certain extent, since you won't have to worry about SBV constant-folding things depending on how you code your test programs.
The nice effect of this is that the generated C is absolutely dead-simple now: simply do an `rint` or `rintf`, followed by a type-cast. The SBV code doing all this stuff is simpler as well.
Please give it a shot and let me know what you think. However, as far as 8.1 is concerned, I think this's how it'll go out; unless you point out some obvious bug. (Hopefully, you'll see this in time to respond!)
I'll open a new ticket regarding missing predicates. I want them, but not sure if we can fit it into the current release.
SBV 8.1 is now out on hackage; closing the door on anything new for a while. But let's continue the discussion on #462; I'd like to at least understand why z3 is producing what it's producing. I have an inkling that we should be able to "fix" it by avoiding the undefined behavior somehow, but so far it eluded me.
I'm using the latest commit `2066aba0074a11f7409007b0417f33e40efb753b`. If I write a program that converts from a floating-point type to an integer type:
and apply it directly to a negative constant whose rounded value is outside the range of the integral type in GHCi, I see (for example):
The generated C code disagrees with the Haskell interpreter if the constant is avoided:
results in
which when called thus
yields
However, if we let constant-folding see what we're about, we will get a different answer:
results in
If I ask Z3 to check the interpreter's evaluation, I seem to get disagreement:
But I can also constrain it to be equal to `255`, and Z3 is just as happy. What's going on here?