Here are two ideas for improving LU and LUniv suggested by Joseph in this thread: https://groups.google.com/forum/?fromgroups#!topic/skeptik-dev/knoIZHwfKyY%5B1-25%5D
Idea 1:
Joseph - For LUniv and LUnivRPI, the number of literals whose dual is not in the clause of lowered pivots is computed. For a node to be lowered, it has to be exactly one. But if it is zero, we can stop collecting/fixing anything and just reintroduce the lowered nodes right away, with this "zero-dual" node as the "root" node.
Bruno - This idea seems interesting. Do you have any idea whether such cases occur often in practice?
Joseph - No idea.
Bruno - Might there be proofs where this idea would perform badly though?
Joseph - It depends on how the idea is implemented. For instance, an exception might be thrown when a "zero-dual" node is encountered. In this case the algorithm might be much faster. But a small proof of false might exist below the found node, and then the compression ratio would be worse than with the original algorithm. On the other hand, a more careful (and slower) implementation might exist where such drawbacks are avoided.
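A minimal Scala sketch of the "zero-dual" check in Idea 1, assuming a toy clause representation. `Literal`, `Clause`, `missingDuals` and `decide` are hypothetical placeholders, not Skeptik's actual classes or API; the sketch only shows the counting logic and the three possible outcomes (lower the node, stop and use it as the new "root", or skip it).

```scala
object ZeroDualCheck {
  // Hypothetical minimal types; not Skeptik's actual data structures.
  final case class Literal(atom: String, positive: Boolean) {
    def dual: Literal = copy(positive = !positive)
  }
  type Clause = Set[Literal]

  // Number of literals of `clause` whose dual is NOT among the
  // already-lowered pivots.
  def missingDuals(clause: Clause, loweredPivots: Set[Literal]): Int =
    clause.count(lit => !loweredPivots.contains(lit.dual))

  sealed trait Decision
  case object Lower        extends Decision // count == 1: node is univalent, lower it
  case object ZeroDualRoot extends Decision // count == 0: stop collecting, reintroduce from here
  case object Skip         extends Decision // count  > 1: leave the node where it is

  def decide(clause: Clause, loweredPivots: Set[Literal]): Decision =
    missingDuals(clause, loweredPivots) match {
      case 0 => ZeroDualRoot
      case 1 => Lower
      case _ => Skip
    }
}
```

The exception-based variant Joseph mentions would throw a control-flow exception when `decide` returns `ZeroDualRoot` and catch it in the traversal; how the slower, more careful variant would avoid the drawback is left open in the thread.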
Idea 2:
Joseph - The same applies to LU: if the "dual" of an already collected Unit is collected, we can stop collecting and resolve these units together. But in this case, we have to take care of the unit order (one Unit might be an indirect premise of the other).
Bruno - Also interesting, but in this case I can easily imagine a proof where it would not be a good idea to do this.
Joseph - I have imagined two possible implementations for this idea. The first one is to traverse the proof top-down to collect units. The other one is to check only when reintroducing nodes. I believe this second solution is much more interesting because we could also test for duplication, for tautologies, and, in 3pass, even for a proof of false.
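For Idea 2, here is a minimal sketch of the first implementation Joseph mentions (top-down collection that stops as soon as the dual of an already collected unit appears). `Node`, `Literal`, `dependsOn` and `collectUnits` are hypothetical names rather than Skeptik's API, and returning the pair with the (indirect) premise first is just one possible reading of the remark about unit order.

```scala
import scala.collection.mutable

object DualUnitShortCircuit {
  // Hypothetical minimal proof representation; not Skeptik's actual classes.
  final case class Literal(atom: String, positive: Boolean) {
    def dual: Literal = copy(positive = !positive)
  }
  final case class Node(conclusion: Set[Literal], premises: List[Node] = Nil) {
    def isUnit: Boolean = conclusion.size == 1
    def unitLiteral: Literal = conclusion.head
    // True if `other` occurs (possibly indirectly) among this node's premises.
    def dependsOn(other: Node): Boolean =
      premises.exists(p => (p eq other) || p.dependsOn(other))
  }

  // Collects unit nodes top-down. If the dual of an already collected unit
  // shows up, collection stops: the two units resolve to the empty clause,
  // so everything derived below them becomes redundant.
  def collectUnits(nodesTopDown: Seq[Node]): Either[(Node, Node), Seq[Node]] = {
    val collected = mutable.LinkedHashMap.empty[Literal, Node]
    var clash: Option[(Node, Node)] = None
    val it = nodesTopDown.iterator
    while (clash.isEmpty && it.hasNext) {
      val n = it.next()
      if (n.isUnit) {
        collected.get(n.unitLiteral.dual) match {
          case Some(other) =>
            // Order the pair so that a unit that is an (indirect) premise
            // of the other comes first.
            clash = Some(if (n.dependsOn(other)) (other, n) else (n, other))
          case None =>
            collected(n.unitLiteral) = n
        }
      }
    }
    // Left  -> two dual units to resolve together (a direct proof of false);
    // Right -> the collected units for the usual LU fixing/reintroduction phase.
    clash.toLeft(collected.values.toSeq)
  }
}
```

The second implementation, checking only at reintroduction time, would run the same dual test while the lowered units are being resolved back at the bottom of the proof, which is also where the duplication and tautology checks Joseph mentions would naturally fit.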