ultimate-pa / ultimate

The Ultimate program analysis framework.
https://ultimate-pa.org/

Probably wrong result on esa_01_newlib.c #168

Open Heizmann opened 7 years ago

Heizmann commented 7 years ago

Alberto Griggio improved the fp support in MathSAT. We now have the SMT power to analyze the trunk/examples/programs/real-life/esa_01_newlib.c file. However, the Automizer toolchain finds a division by zero that I cannot reproduce if I compile the program with gcc and run it.

Attachments: Ultimate output, the preferences that I used, and the violation witness.

greitsch commented 7 years ago

Something is very wrong here.

I've compiled a very small program (see below) that reflects the error trace you provided. However, I don't get a division by 0 error in Ultimate there. Neither do I get a warning or an error with GCC.

I think I've used the same settings as you, but I may have been mistaken. I used the Debug-GUI.

I've noticed that the analysis of the original program takes a lot longer than the small program, even though the control flow should be almost the same. I've only removed most of the unused definitions and includes. The computations after Ultimate's "Conjunction of SSA is sat" seem to be the bottleneck here - I've waited about 10 min or more for just one iteration. So is it possible that there is some bug in the SMT solver which leads to those results?

In the original program, the function had a parameter x which wasn't set anywhere. That parameter was set to x = 0.0017616152249817557 in the counterexample, I suppose by the SMT solver.

The violation witness says that z == 0.0000031032882008875215 after it was assigned with z = x * x. However, when I compute x * x manually, i.e. with infinite precision using Wolfram Alpha, from the value computed by the SMT solver, 0.0017616152249817557, I get z == 0.00000310328820088752175170139785448249. So maybe there is something going on with the precision of doubles vs. the precision of reals in the SMT solver which leads to the division by 0?

Sample-Program:

static const double
one= 1.00000000000000000000e+00,
pi = 3.14159265358979311600e+00,
pio2_hi = 1.57079632679489655800e+00,
pio2_lo = 6.12323399573676603587e-17,
pS0 = 1.66666666666666657415e-01,
pS1 = -3.25565818622400915405e-01,
pS2 = 2.01212532134862925881e-01,
pS3 = -4.00555345006794114027e-02,
pS4 = 7.91534994289814532176e-04,
pS5 = 3.47933107596021167570e-05,
qS1 = -2.40339491173441421878e+00,
qS2 = 2.02094576023350569471e+00,
qS3 = -6.88283971605453293030e-01,
qS4 = 7.70381505559019352791e-02;

int main()
{
    double z,p,q,r;
    double x = 0.0017616152249817557;
    double result;
    z = x*x;
    p = z*(pS0+z*(pS1+z*(pS2+z*(pS3+z*(pS4+z*pS5)))));
    q = one+z*(qS1+z*(qS2+z*(qS3+z*qS4)));
    r = p/q;
    result = pio2_hi - (x - (pio2_lo-x*r));
    return 0;
}
Heizmann commented 7 years ago

I think I've used the same settings as you, but I may have been mistaken. I used the Debug-GUI.

But you got the same strange result that I got. It is good to see that the Ultimate results are reproducible with Ultimate.

The computations after Ultimate's "Conjunction of SSA is sat" seem to be the bottleneck here

In Ultimate we check every feasible trace twice: the first check is with branch encoders, the second is without. This trace should not contain any branch encoders, so both checks should take the same amount of time, and in my experience this is the case. (An obvious optimization would be to disable the second check when there are no branch encoders.)

So maybe there is something going on with the precision of doubles vs. precision of reals in the SMT solver which leads to the division by 0?

Could you check this on a smaller example and compare the result with z3 (and probably GCC)? Then we can file a bug report for MathSAT if necessary.

greitsch commented 7 years ago

Could you check this on a smaller example and compare the result with z3 (and probably GCC)? Then we can file a bug report for MathSAT if necessary.

The problem is that I don't have a smaller example that produces the same error. I tried with the small program above, which technically should be the same as the large program's violation witness projection. But this program is safe. I'll try to crosscheck the computed values tomorrow, maybe some more magic is going on in the original program that wasn't obvious to me in the first place.

Heizmann commented 7 years ago

Maybe the following program?

double x = 0.0017616152249817557;
double z = x*x;
double c = MULTIPLICATION_RESULT_FROM_GCC;
if (z != c) {
     //@ assert \false;
}
greitsch commented 7 years ago

I believe I might have a clue on what's happening and why Ultimate reports a possible division by 0.

I think the problem is in how Ultimate handles constant declarations in the source code. The code in the example does not contain a main procedure. Instead, the procedure in which the error happens has some arbitrary name. Therefore, the entry point for Ultimate is undefined. A number of static constants are also defined in the global scope, and the definition of the procedure follows right after the declaration of these global constants.

I believe Ultimate skips the definition of the global constants in its CFG, leaving the defined constants at arbitrary (havoc-ed) values. You can see this by looking at the witness you provided. In the witness, the value for one==-0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008973913714174393, which clearly is not equal to 1.

In that sense, I think Ultimate is correct when it reports that a division by 0 might happen. Because if we assume arbitrary values for the constants, this clearly is the case.

We should think about how we want to handle such programs that run in library mode. Maybe we could add a flag that controls whether globally defined constants are still initialized in library mode, to ensure that we can verify library functions correctly as well.

greitsch commented 7 years ago

I was able to reproduce, in C, the error reported by Ultimate on the small program that represents the original error trace. I switched from library mode to verifying the program's main function by renaming the function in the original program to int main().

With that change, all values of the globally defined constants are preserved, and Ultimate still finds a division by 0 in the modified, easier program (we get a timeout on the whole program; I guess the program itself is too difficult). The error trace is here. When using the computed values inside the following C program, I get the expected output Division by 0..

#include <stdio.h>

typedef union
{
  double value;
  struct
  {
    __uint32_t lsw;
    __uint32_t msw;
  } parts;
} ieee_double_shape_type;

static const double
one= 1.00000000000000000000e+00,
pi = 3.14159265358979311600e+00,
pio2_hi = 1.57079632679489655800e+00,
pio2_lo = 6.12323399573676603587e-17,
pS0 = 1.66666666666666657415e-01,
pS1 = -3.25565818622400915405e-01,
pS2 = 2.01212532134862925881e-01,
pS3 = -4.00555345006794114027e-02,
pS4 = 7.91534994289814532176e-04,
pS5 = 3.47933107596021167570e-05,
qS1 = -2.40339491173441421878e+00,
qS2 = 2.02094576023350569471e+00,
qS3 = -6.88283971605453293030e-01,
qS4 = 7.70381505559019352791e-02;

int main()
{
    double z,p,q,r,w,s,c,df,x;
    double one_plus_x;

    // Fill values with values from SMT solver
    x = 1.6552276283186724;

    one_plus_x = (one + x);
    z = one_plus_x * 0.5;
    p = z*(pS0+z*(pS1+z*(pS2+z*(pS3+z*(pS4+z*pS5)))));
    q = one+z*(qS1+z*(qS2+z*(qS3+z*qS4)));
    if (q == 0)
        printf("Division by 0.\n");
    r = p/q;
    return 0;
}

Output:

Division by 0.
danieldietsch commented 7 years ago

So basically, we do not handle static const correctly.

greitsch commented 7 years ago

Based on these findings, I've rerun the analysis on the original input file. I've replaced the library method in the example with a main method to obtain one distinct entry point for Ultimate. The file is this one: esa_01_newlib.c.txt

The settings are the same that @Heizmann provided here.

Unfortunately, an exception occurs in iteration 2:

java.lang.UnsupportedOperationException: unsupported: several stores
    at de.uni_freiburg.informatik.ultimate.modelcheckerutils.smt.ElimStore3.getArrayStore(ElimStore3.java:94)

The full output is here: ultimate-output.txt

Note that the verification takes about 12 hours until the error happens. I don't know how to reduce the runtime, unfortunately.

Heizmann commented 7 years ago

Thanks. Could you commit the example to our repository? The exception comes from a known missing feature in our quantifier elimination for arrays; I would need a weekend to implement it. We should first try to translate the C program into a Boogie program that does not use arrays.

greitsch commented 7 years ago

Commit b20ca0c8 contains the example.

danieldietsch commented 6 years ago

@greitsch will rerun the example