abentkamp opened this issue 5 years ago
Hmm, if we solve #64 (maybe with a custom way of handling `a <= [a]`?) then we can make AVATAR an inference, I guess.
I don't think this should impact the heuristics much?
I don't know how large the impact on the heuristics is, but I saw that it confused @petarvukmirovic, who wrote the assertion `assert ((C.proof_depth c) <= C.proof_depth new_c);`.
Moreover, I am currently adding new kinds of steps, e.g. a CNF step and a term-conversion step. Maybe I shouldn't do that?
I think more kinds of steps might be good, if you handle the TSTP output and llproof checker properly (which I assume is the whole point anyway). In that case AVATAR could get its own kind of proof step that refines "inference"?
I don't know if Avatar really needs its own kind of step, because if `ϕ <= [ψ]` is interpreted as `ϕ ∨ ¬ψ`, then it is not even a real Esa step.
What do you mean by "refines inference"? Right now it looks like this:
```ocaml
type kind =
  | Intro of source * role
  | Inference of rule * tag list
  | Simplification of rule * tag list
  | Cnf of skolem list
  | Conv                      (** Conversion of different term datatypes *)
  | Esa of rule * tag list
  | Trivial                   (** trivial, or trivial within theories *)
  | Define of ID.t * source   (** definition *)
  | By_def of ID.t            (** following from the def of ID *)
```
Would you prefer this?

```ocaml
| Inference of sort * rule * tag list
```

where `sort` can be CNF, Conv, Esa, etc.?
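Spelled out, the proposal might look like this. (Everything here is a hypothetical sketch of the refactoring, not existing code; `rule` and `tag` are placeholder aliases, and `counts_for_depth` is an invented example consumer.)

```ocaml
(* Placeholder aliases standing in for the real rule/tag types. *)
type rule = string
type tag = string

(* The proposed [sort] distinguishes the flavours of inference
   that are currently separate constructors of [kind]. *)
type sort =
  | Regular        (* ordinary inference *)
  | Simp           (* simplification *)
  | CNF
  | Conv
  | Esa

type kind =
  | Inference of sort * rule * tag list
  | Trivial

(* Hypothetical example consumer: heuristics could then decide
   uniformly which sorts of inference count towards proof depth. *)
let counts_for_depth = function
  | Inference ((Regular | Simp), _, _) -> true
  | Inference (_, _, _) | Trivial -> false
```

One advantage of this shape is that code like `proof_depth` only has to match on a single `Inference` constructor and inspect the `sort`, instead of enumerating constructors.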
I mean, it's an inference, but the llprover must handle it differently (namely, by not using `opaque` for the assumptions; otherwise it's not obvious why it's a tautology). It could actually be "trivial" for these clauses, and a regular inference for the boolean clause that embeds the actual splitting rule :-)
`Clause.proof_depth` / `Proof.Step.inferences_perfomed` considers only `Inference` or `Simplification` steps, but not `Esa` steps (e.g. Avatar). This might confuse the heuristics using it.
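A minimal sketch of that counting behaviour (hypothetical `step` type and `proof_depth` function, just to illustrate the point; the real logic lives in `Clause`/`Proof.Step`):

```ocaml
(* Hypothetical step classification, mirroring the [kind] type above. *)
type step = Inference | Simplification | Esa | Trivial

(* Depth counts only Inference/Simplification steps, so an Esa step
   such as an Avatar split contributes nothing to the proof depth. *)
let rec proof_depth (steps : step list) : int =
  match steps with
  | [] -> 0
  | (Inference | Simplification) :: rest -> 1 + proof_depth rest
  | (Esa | Trivial) :: rest -> proof_depth rest
```

So a clause obtained through a chain of Esa steps reports the same depth as its ancestor, which is what heuristics relying on depth would not expect.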