Closed — ThomasHaas closed this 7 months ago
Hi Thomas,
up to now, the evaluation of floating points (FPs) has been of low priority.
There can be issues with converting SMT-based FPs to machine-based FPs, because SMT allows arbitrary bit sizes for exponent and mantissa, whereas machine-based FPs follow the IEEE 754 single- or double-precision formats, i.e., 32 bit (1+8+23 bit) or 64 bit (1+11+52 bit). Note that not all SMT solvers support arbitrary bit sizes, and some might fail on complex input.
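As an aside, the fixed IEEE 754 layouts mentioned above can be inspected in plain Java via the raw bit patterns. This is only an illustration of the 1+8+23 and 1+11+52 splits, not JavaSMT code:

```java
public class IeeeLayout {
    public static void main(String[] args) {
        // float: 1 sign bit + 8 exponent bits + 23 mantissa bits
        int fBits = Float.floatToRawIntBits(1.5f);      // 0x3FC00000
        int fSign = fBits >>> 31;
        int fExp  = (fBits >>> 23) & 0xFF;              // biased exponent
        int fMant = fBits & 0x7FFFFF;
        System.out.println(fSign + " " + fExp + " " + fMant); // 0 127 4194304

        // double: 1 sign bit + 11 exponent bits + 52 mantissa bits
        long dBits = Double.doubleToRawLongBits(1.5);   // 0x3FF8000000000000
        long dExp  = (dBits >>> 52) & 0x7FF;
        long dMant = dBits & 0xFFFFFFFFFFFFFL;
        System.out.println(dExp + " " + dMant);         // 1023 2251799813685248
    }
}
```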
Do you have a specific use case that you would like to see implemented and supported in JavaSMT?
I understand the conversion issues, but I would guess that `Rational` should be able to represent all possible values of any FP format, no?
I guess one cannot do computations on the Java side directly but always has to go through the SMT solver (unfortunately, I found no Java library that can represent arbitrary floating-point types).
The use case is that we would like to verify GPU code (SPIR-V), which contains various float types, including Float16, which Java cannot handle natively.
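For context on the Float16 point: newer JDKs (Java 20+) offer `Float.float16ToFloat`, but on older JDKs the 1+5+10 bit binary16 layout has to be decoded by hand. A minimal sketch (handling normals, subnormals, and the special values):

```java
public class HalfPrecision {
    /** Decodes an IEEE 754 binary16 bit pattern into a float. */
    static float halfToFloat(int bits) {
        int sign = (bits >>> 15) & 0x1;
        int exp  = (bits >>> 10) & 0x1F;
        int mant = bits & 0x3FF;
        float value;
        if (exp == 0x1F) {            // all-ones exponent: Inf or NaN
            value = (mant == 0) ? Float.POSITIVE_INFINITY : Float.NaN;
        } else if (exp == 0) {        // zero or subnormal: mant * 2^-24
            value = mant * 0x1p-24f;
        } else {                      // normal: 1.mant * 2^(exp - 15)
            value = (0x400 | mant) * 0x1p-10f * (float) Math.pow(2, exp - 15);
        }
        return sign == 1 ? -value : value;
    }

    public static void main(String[] args) {
        System.out.println(halfToFloat(0x3C00)); // 1.0
        System.out.println(halfToFloat(0xC000)); // -2.0
        System.out.println(halfToFloat(0x7C00)); // Infinity
    }
}
```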
There are special values, like INF, NAN, and NEG_ZERO, which cannot be represented as Rationals. The safest representation for FPs would be Strings. I am not yet sure about the best way of implementing this.
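To illustrate why those special values resist a rational encoding, plain Java already shows the pitfalls: negative zero compares equal to zero with `==` yet is a distinct value, and NaN is not even equal to itself, while string representations round-trip all of them unambiguously:

```java
public class SpecialValues {
    public static void main(String[] args) {
        double negZero = -0.0;
        System.out.println(negZero == 0.0);               // true: == ignores the sign of zero
        System.out.println(Double.compare(negZero, 0.0)); // -1: compare() distinguishes -0.0 from 0.0
        System.out.println(Double.NaN == Double.NaN);     // false: NaN is not equal to itself
        // Strings round-trip the special values without loss.
        System.out.println(Double.toString(Double.POSITIVE_INFINITY)); // Infinity
        System.out.println(Double.parseDouble("-0.0"));   // -0.0
    }
}
```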
I see. I didn't even think about those special values. That makes things complicated.
There is now improved support for floating-point numbers in the model. I will close this issue. Feel free to reopen, if any further requirements come up.
The improved support is sufficient for now. Thank you very much.
`Model.evaluate` on `FloatingPointFormula` does not exist. There is a generic `Model.evaluate(Formula)` method that returns an `Object`. For Z3, this seems to be a `Rational` when passing a `FloatingPointFormula` (by first converting to reals). Is this an oversight, or are there underlying issues that make it hard to provide an evaluate method that always returns `Rational`?