Open teorth opened 4 hours ago
claim
Should this tool be a metaprogram, or is it fine for it to shallowly parse the files (i.e. search for
`(G: Type*) [Magma G] (G: Equation(\d+) G) : Equation(\d+) G`
and
`∃ (G: Type) (_: Magma G), Equation(\d+) G ∧ ¬ Equation(\d+) G`,
and add special support for `Equation1_true`)?
In the longer term I think a metaprogram would be the most robust solution (that would also be adaptable for other projects), but a "quick and dirty" shallow approach may be faster in the short term. So it could go either way. One compromise is to try to write a quick and dirty program, but in a way that many of the components of the code could be refactored into a more robust long-term solution later.
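A quick-and-dirty shallow parser along the lines sketched in the issue could be a short Python script. This is a hedged illustration, not the project's actual implementation: the function name `extract` is made up here, the regexes are taken directly from the two shapes quoted above, and real declarations that span multiple lines or use different spelling would be missed (which is exactly why a metaprogram would be more robust).

```python
import re

# Patterns copied from the shapes quoted in the issue; whitespace is
# allowed to vary, but multi-line declarations will NOT be matched.
IMPLICATION_RE = re.compile(
    r"\(G:\s*Type\*\)\s*\[Magma G\]\s*\(G:\s*Equation(\d+)\s*G\)\s*:\s*Equation(\d+)\s*G"
)
ANTI_IMPLICATION_RE = re.compile(
    r"∃\s*\(G:\s*Type\)\s*\(_:\s*Magma G\),\s*Equation(\d+)\s*G\s*∧\s*¬\s*Equation(\d+)\s*G"
)

def extract(source: str):
    """Return (implications, anti_implications) as lists of (i, j) pairs.

    An implication pair (i, j) means "Equation i implies Equation j";
    an anti-implication pair means a magma satisfying Equation i but
    not Equation j was exhibited.
    """
    imps = [(int(a), int(b)) for a, b in IMPLICATION_RE.findall(source)]
    antis = [(int(a), int(b)) for a, b in ANTI_IMPLICATION_RE.findall(source)]
    return imps, antis
```

Components like the pair-extraction step could later be refactored to take their input from a metaprogram instead of regexes, in line with the compromise suggested above.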
If #22 could be modified so that `Implication` records whether it's conjectured, I could directly process the `Output` structure it produces.
Isn't this already implemented in `scripts/process_implications.py`?
Given one or more Lean files as input, extract all the implications and `proof_wanted` statements from that file. Then, for any two equations `i`, `j` that appear anywhere in these implications, report the status of the implication `i => j` as one of the following:
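The status list is cut off in the quote above, so the following is only an illustrative sketch of the reporting step. The function names `implication_status` and `report` and the exact status strings are assumptions; the three input sets correspond to proved implications, `proof_wanted` (conjectured) implications, and anti-implications, as discussed in this thread.

```python
from itertools import product

def implication_status(i, j, proved, conjectured, refuted):
    """Classify the implication i => j given sets of (i, j) pairs.

    `proved`, `conjectured`, and `refuted` are assumed to come from
    scanning theorem statements, proof_wanted statements, and
    anti-implication statements respectively.
    """
    if (i, j) in proved:
        return "proved"
    if (i, j) in refuted:
        return "refuted"
    if (i, j) in conjectured:
        return "conjectured"
    return "unknown"

def report(proved, conjectured, refuted):
    """Status of i => j for every ordered pair of equations seen anywhere."""
    eqs = sorted({n for s in (proved, conjectured, refuted)
                  for pair in s for n in pair})
    return {(i, j): implication_status(i, j, proved, conjectured, refuted)
            for i, j in product(eqs, repeat=2) if i != j}
```

Note that this only reports what is explicitly stated in the files; closing the implication graph under transitivity would be a separate step.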