Open sakehl opened 4 months ago
If you want to generate arbitrarily large files that exhibit this behavior, this is the Python code used to generate them:
```python
base_s = """
context_everywhere n > 0;
context_everywhere x0 != null ** x0.length == n ** (\\forall* int i=0..n; Perm(x0[i], write));
{annotations}
int main(int n, int[n] x0{args}){{
  loop_invariant 0 <= i && i <= n;
  //loop_invariant (\\forall int j=0..i; x0[j] == 0{sumsinv});
  for(int i=0; i<n; i++){{
    x0[i] = 0{sums};
  }}
}}
"""

annotation_s = "context {name} != None && |{name}.get| == n;"
arg_s = ", option<seq<int>> {name}"
sumpart_s = "+ {name}.get[i]"
sumpartinv_s = "+ {name}.get[j]"

def generate_s(n: int) -> str:
    anns = '\n '.join(annotation_s.format(name=f"x{i}") for i in range(1, n + 1))
    args = ''.join(arg_s.format(name=f"x{i}") for i in range(1, n + 1))
    sumsinv = ''.join(sumpartinv_s.format(name=f"x{i}") for i in range(1, n + 1))
    sumparts = ''.join(sumpart_s.format(name=f"x{i}") for i in range(1, n + 1))
    return base_s.format(annotations=anns, args=args, sumsinv=sumsinv, sums=sumparts)

filename = "data/quant_s{i}.pvl"
for i in range(1, 20):
    with open(filename.format(i=i), 'w') as f:
        f.write(generate_s(i))
```
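As a side note, the doubled braces in `base_s` are `str.format` escapes: `{{` and `}}` produce literal braces in the generated PVL, while `{name}`-style placeholders are substituted. A minimal sketch of the same pattern (with a cut-down hypothetical template, not the full generator above):

```python
# "{{" / "}}" become literal braces; {args} and {sums} are placeholders.
template = "int main(int n{args}){{\n  x0[i] = 0{sums};\n}}"
args = ''.join(f", option<seq<int>> x{i}" for i in range(1, 3))
sums = ''.join(f"+ x{i}.get[i]" for i in range(1, 3))
program = template.format(args=args, sums=sums)
print(program)
```

This is why the PVL braces in `base_s` are all doubled even though the output file contains single braces.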
So I generated a program with 100 pluses.
Running the IntelliJ profiler (what an amazing app btw), it's pretty clear which function is slow:
I think it is called many times because each plus checks the type of its predecessor again. And since the type information is not saved in the meantime, it keeps getting recomputed again and again.
Ok, bingo. When I change `NumericBinExprImpl` to:

```scala
lazy val saved_t: Type[G] = getNumericType

override def t: Type[G] = saved_t
```

instead of

```scala
override def t: Type[G] = getNumericType
```
This essentially solves the problem. Now the question is: is it safe to use lazy vals here, or do we need to recompute every time? And how can we do this consistently?
It is safe to make the `t` definition a lazy val; it's done in many places that do a non-trivial computation. I arbitrarily do it any time it's not a constant, basically :)
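For readers more familiar with Python than Scala, `functools.cached_property` gives the same trade-off as a `lazy val`: the value is computed once on first access and cached, which is safe exactly when the computation is deterministic and the node is immutable after construction. A minimal sketch with hypothetical names:

```python
from functools import cached_property

class NumericBinExpr:
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.computations = 0  # instrumentation for this example only

    @cached_property
    def t(self):
        # Computed on first access, then served from the instance cache,
        # mirroring Scala's lazy val semantics for a single thread.
        self.computations += 1
        return "int"

e = NumericBinExpr(None, None)
assert e.t == e.t  # second access hits the cache
print(e.computations)  # 1
```

One caveat carried over from the Scala discussion: caching is only correct if `t` cannot change after the node is built; a mutable AST would need invalidation instead.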
So running with 100 pluses now finishes within 2 minutes. Here the verification time itself is still trivial, but now the 'simplify' pass is pretty slow. Not sure what is going on there.
No clue though:
Something to do with this:
But I'll leave it at this for now.
I wanted to share the following programs, which I investigated for run times. These are the verification times:
| file | time (s) | result |
|---|---|---|
| data/quant_s1.pvl | 8.66 | success |
| data/quant_s2.pvl | 7.36 | success |
| data/quant_s3.pvl | 7.34 | success |
| data/quant_s4.pvl | 7.51 | success |
| data/quant_s5.pvl | 7.66 | success |
| data/quant_s6.pvl | 7.70 | success |
| data/quant_s7.pvl | 7.86 | success |
| data/quant_s8.pvl | 7.98 | success |
| data/quant_s9.pvl | 8.20 | success |
| data/quant_s10.pvl | 8.15 | success |
| data/quant_s11.pvl | 8.23 | success |
| data/quant_s12.pvl | 9.21 | success |
| data/quant_s13.pvl | 9.75 | success |
| data/quant_s14.pvl | 10.30 | success |
| data/quant_s15.pvl | 11.91 | success |
| data/quant_s16.pvl | 15.33 | success |
| data/quant_s17.pvl | 18.94 | success |
| data/quant_s18.pvl | 27.50 | success |
| data/quant_s19.pvl | 45.91 | success |

(times rounded to two decimals)
Investigating the slowdown, it seems that nearly all the time is spent within VerCors itself, doing all the passes. The actual verification takes maybe a second or two at most.
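The superlinear growth is easy to see from the successive ratios of those timings (values rounded to two decimals from the list above):

```python
# Verification times for quant_s1.pvl .. quant_s19.pvl, in seconds.
times = [8.66, 7.36, 7.34, 7.51, 7.66, 7.70, 7.86, 7.98, 8.20, 8.15,
         8.23, 9.21, 9.75, 10.30, 11.91, 15.33, 18.94, 27.50, 45.91]
ratios = [b / a for a, b in zip(times, times[1:])]
# From roughly s15 onward, each extra option argument multiplies the
# runtime by ~1.25-1.7, i.e. growth looks exponential, not linear.
print([round(r, 2) for r in ratios[-4:]])  # [1.29, 1.24, 1.45, 1.67]
```

Up to about s11 the time is dominated by a flat per-run overhead; the blow-up only becomes visible in the tail.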
quant_s.zip
This is a good example program: