Sventimir opened this issue 3 years ago.
As a workaround for this issue (and similar ones, which we should fix!), and as a matter of general programming style, I recommend using equational reasoning instead of long chains of rewrites.
In this specific example, I would use the (different) proof:
```idris
import Syntax.PreorderReasoning
import Data.Nat

lemma : S (2 * (S (2 * q))) = 4 * q + 3
lemma = Calc $
  |~ 1 + (2 * (1 + (2 * q)))
  ~~ 1 + (2 + 2 * (2 * q))  ...(cong (1+) $ multDistributesOverPlusRight 2 1 _)
  ~~ 3 + 2 * (2 * q)        ...(Refl)
  ~~ 3 + 4 * q              ...(cong (3+) $ multAssociative 2 2 _)
  ~~ 4 * q + 3              ...(plusCommutative _ _)
```
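(For readers new to `Syntax.PreorderReasoning`: `|~` introduces the starting term, each `~~` states the next intermediate term, the `...( )` part supplies the justification for that single step, and `Calc` assembles the whole derivation into the equality proof.)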
I discourage long chains of rewrites because each `rewrite` implicitly mutates the implicit type-checking goal. This kind of imperative programming makes it hard to read and understand what's going on. It also encourages a style of lots and lots of small, mindless rewrites instead of a clearer high-level argument.
Equational reasoning, by contrast, exposes the implicit goal as the intermediate terms, and lets us choose clearer arguments. Spelling the implicit goals out can also let you omit arguments to the justification of each step.
There's still lots more work left to do on equational reasoning, for example, better support for using congruence, and algebraic-simplification libraries.
The following example compiles in an instant (about 0.4s, to be exact):
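The original snippet is not reproduced here; purely for illustration, a minimal hypothetical sketch of the style in question (a chain of `rewrite`s over standard `Data.Nat` lemmas, ending in `Refl`) might look like this. The name `swapEnds` and the particular statement are mine, not the code from the issue:

```idris
import Data.Nat

-- Hypothetical illustration only (not the original code from this issue):
-- a proof written as a chain of rewrites that finishes with Refl.
swapEnds : (a, b, c : Nat) -> (a + b) + c = c + (b + a)
swapEnds a b c =
  rewrite plusCommutative a b in        -- goal becomes (b + a) + c = c + (b + a)
  rewrite plusCommutative (b + a) c in  -- goal becomes  c + (b + a) = c + (b + a)
  Refl
```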
However, replace the final `Refl` with a hole and the compilation time increases dramatically. In my case it ran for 15 minutes, completely occupying one of the cores and consuming almost 60G of RAM before I decided to kill it; it also took a significant amount of time to die.
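For concreteness, "replacing the final `Refl` with a hole" means only the following change, shown here on the hypothetical sketch above (which is of course far too small to reproduce the blow-up):

```idris
import Data.Nat

-- Same chain as in the sketch above, but ending in a hole instead of Refl.
swapEndsHole : (a, b, c : Nat) -> (a + b) + c = c + (b + a)
swapEndsHole a b c =
  rewrite plusCommutative a b in
  rewrite plusCommutative (b + a) c in
  ?finalGoal  -- the elaborator must now report the fully rewritten goal here
```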
The problem can be mitigated somewhat by extracting part of the rewrite chain into a separate lemma, e.g.:
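Again, the original code is not reproduced here; a hypothetical sketch of the mitigation (names of my own choosing) is to move some of the steps into their own lemma, so that no single definition contains a long run of `rewrite`s:

```idris
import Data.Nat

-- Hypothetical sketch of the mitigation (not the original code):
-- part of the chain is factored out into its own lemma...
swapInner : (a, b, c : Nat) -> (a + b) + c = (b + a) + c
swapInner a b c = rewrite plusCommutative a b in Refl

-- ...so the main proof needs fewer consecutive rewrites.
swapEnds' : (a, b, c : Nat) -> (a + b) + c = c + (b + a)
swapEnds' a b c =
  rewrite swapInner a b c in
  rewrite plusCommutative (b + a) c in
  Refl
```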
In this example, replacing the final `Refl` with a hole also increases compilation time, but only to about 2s. Of course, having a hole instead of a `Refl` allows us to delete some rewrites and see how that affects the behaviour. In my experience, around 7 consecutive `rewrite`s is the boundary beyond which compilation time and memory consumption grow rapidly with each additional `rewrite`.

This is the behaviour I observe; what the correct behaviour is, I don't know. I wish compilation time and memory consumption grew linearly with the number of steps in a proof, but perhaps that's unreasonable to expect. In any case, the sudden and rapid growth of compilation time is suspicious.