dranov opened this issue 1 month ago (status: Open)
Hi @dranov! #130 re-enables caching during translation, but unfortunately there's no noticeable performance improvement. While the loop might have some impact, I believe the primary cause of the slowdown is the complexity of our pattern matching: the intricate patterns generate numerous `ite` conditions that need to be checked sequentially for each pattern. I think the best way to improve performance is to use a discrimination tree (`Lean.Meta.DiscrTree`) for pattern matching. This structure uses a `Lean.Meta.DiscrTree.Trie` internally, which should significantly boost performance.
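To make the idea concrete, here is a rough sketch of how the patterns could be indexed, assuming the translation patterns are available as `Expr`s paired with some identifier. This is only an illustration, not the actual lean-smt code; the exact `DiscrTree` signatures (e.g. configuration parameters to `insert`/`getMatch`) vary across Lean versions, so treat the API calls as approximate:

```lean
import Lean
open Lean Meta

-- Hypothetical sketch: index translation patterns in a discrimination tree
-- keyed on the pattern's head symbols, so that matching a goal expression
-- only visits patterns whose keys are compatible, instead of checking
-- every pattern's conditions sequentially.
-- `Nat` stands in for whatever value identifies a translation rule.
def buildPatternIndex (pats : Array (Expr × Nat)) : MetaM (DiscrTree Nat) := do
  let mut tree := DiscrTree.empty
  for (pat, id) in pats do
    tree ← tree.insert pat id
  return tree

-- Retrieve only the candidate rules whose keys match `e`;
-- each candidate's side conditions would still be checked afterwards.
def matchingPatterns (tree : DiscrTree Nat) (e : Expr) : MetaM (Array Nat) :=
  tree.getMatch e
```

The point of the design is that `getMatch` prunes by the trie of indexing keys, so the per-goal cost scales with the number of *compatible* patterns rather than the total number of patterns.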
I am running `lean-smt` on goals that are relatively large (hundreds of times in a single file), and the translation to SMT (i.e. not the actual invocation of the solver) is a major bottleneck. For instance, translating one such goal takes 2.5 seconds on my computer. This seems wildly excessive. For comparison, `auto` takes about 100ms, and that includes calling the SMT solver.

I've looked into what might be causing this, and two things came up:
I'm not sure to what extent either of these would improve performance, or how to proceed with fixing them, but given some hints I can give it a try.