JacquesCarette opened 4 years ago
I wonder if it would make sense to optionally abort unification when it becomes too costly. That would have made it much easier to identify this problem. Can we cheaply measure the cost of unification in some reasonable way?
I guess you mean having some pragma that you can put at the top of a file, or a command-line option, to time-limit unification problems? That would certainly help with identifying problems.
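To make the proposal concrete, such a pragma might look something like the sketch below. The flag name and its semantics are entirely hypothetical; no such option exists in Agda today:

```agda
-- Hypothetical, made-up flag: cap the time spent on any single
-- unification problem, leaving the meta unsolved (yellow highlight)
-- instead of churning indefinitely.
{-# OPTIONS --unification-timeout=2000 #-}  -- milliseconds; invented for illustration
```

The point is that a timeout would turn a silent slowdown into a visible, located failure that the user can then fix by hand.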
I have recently sped up the typechecking time of some of the worst offenders in agda-categories. One of the biggest culprits is, unsurprisingly, figuring out some implicits. The hard part is figuring out which ones are very costly. I realize that providing timings for solving each implicit in a file is likely too much to ask for. [I'm totally fine with wading through a lot of output; it's the 'precise timing' that's asking too much.] I would be happy with some 'decent proxies', though, relating to "problem size" or some such. Anything that might help me zero in on the problem spots.
In all the cases I have encountered, the "problem" was that the culprit was a large record (or two or three) that already existed in scope, but which the unifier reconstructed completely from scratch. I'm not saying that Agda was wrong to be doing that particular reconstruction; I'm saying that if I had known that was what I was 'asking' for, I would have gladly helped along by providing the correct answer to check.
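For illustration, the manual fix amounts to naming the costly implicit so Agda only has to check it rather than reconstruct it. All names here are hypothetical, not taken from agda-categories:

```agda
-- Hypothetical: `M` is an implicit argument whose solution is a large
-- record value that the unifier would otherwise rebuild from scratch.

-- slow : SomeProperty F
-- slow = proof            -- unifier reconstructs the record for M

fast : SomeProperty F
fast = proof {M = alreadyDefinedRecord}  -- supply it; Agda merely checks
```

If Agda could report *which* metas were expensive to solve, this kind of targeted annotation would be easy to apply.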