Even with small predicates, a simple BV language already scales to ~150,000 programs at depth 3, so enumerating these terms exhaustively is prohibitively expensive. I think we need to leverage some core insight we have as humans about what good rewrite rules look like.
As a start, I think we can enumerate rewrites that capture the basic axioms (commutativity, associativity, etc.) over a set of variables, all the operators, and a small set of constants (0, 1, 2). This initial space is small enough that the enumeration can be complete.
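As a rough sketch of what I mean (the operator set, variable names, and tuple encoding below are placeholders I made up, not a fixed design): enumerate small terms bottom-up, then group terms that agree on a handful of random inputs as candidate rewrites. Random testing is just a cheap stand-in here for a real equivalence check.

```python
# Minimal sketch: bottom-up enumeration of a toy BV language and grouping of
# behaviorally-identical terms into candidate rewrite buckets.
import itertools
import random

OPS = {
    "add": lambda a, b: (a + b) & 0xFF,
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
    "xor": lambda a, b: a ^ b,
}
LEAVES = ["x", "y", 0, 1, 2]

def enumerate_terms(depth):
    """All terms up to the given depth, built bottom-up from the leaves."""
    terms = list(LEAVES)
    for _ in range(depth):
        terms += [(op, l, r) for op in OPS
                  for l, r in itertools.product(terms, repeat=2)]
    return terms

def evaluate(term, env):
    """Evaluate a term under an assignment of variables to 8-bit values."""
    if isinstance(term, tuple):
        op, l, r = term
        return OPS[op](evaluate(l, env), evaluate(r, env))
    return env[term] if isinstance(term, str) else term

def candidate_rewrites(depth=1, samples=16):
    """Bucket terms by behavior on random inputs; same bucket => candidate rule."""
    envs = [{"x": random.randrange(256), "y": random.randrange(256)}
            for _ in range(samples)]
    buckets = {}
    for t in enumerate_terms(depth):
        fingerprint = tuple(evaluate(t, env) for env in envs)
        buckets.setdefault(fingerprint, []).append(t)
    return [bucket for bucket in buckets.values() if len(bucket) > 1]

if __name__ == "__main__":
    for bucket in candidate_rewrites()[:5]:
        # e.g. [('add', 'x', 'y'), ('add', 'y', 'x')] suggests commutativity
        print(bucket)
```

Any pair of terms in the same bucket is a candidate rule; the buckets containing things like `('add', 'x', 'y')` and `('add', 'y', 'x')` recover commutativity directly from the complete low-depth enumeration.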
Then, for higher depths, we should ditch completeness and instead look at existing rule sets for inspiration on what good rewrite rules look like. For example, one common pattern is that the operators on one side of a rewrite often show up on the other.
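That observation could become a cheap filter on candidate rules. A hedged sketch (same made-up tuple encoding as above; the "operator overlap" criterion is my guess at how to operationalize the pattern):

```python
# Keep a candidate rule only if its two sides share at least one operator,
# or one side has no operators at all (e.g. a constant-folding rule).
def operators(term):
    """Set of operator names appearing anywhere in a term."""
    if isinstance(term, tuple):
        op, l, r = term
        return {op} | operators(l) | operators(r)
    return set()

def plausible(lhs, rhs):
    lhs_ops, rhs_ops = operators(lhs), operators(rhs)
    return not lhs_ops or not rhs_ops or bool(lhs_ops & rhs_ops)

# ('add', 'x', 'y') -> ('add', 'y', 'x') passes; an unrelated pair like
# ('and', 'x', 'y') -> ('or', 'x', 'y') gets filtered out.
```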
Implementation map:
[ ] 1. Implement bottom-up "axiom" enumeration
[x] 2. Implement constant/term ditching at higher depths -- limit leaves to "forall" variables which could represent any term (see the sketch after this list)
[ ] 3. Study the above paper (and others) closely, try to get a generalized sense of what a "good" rewrite rule is
[ ] 4. Refine higher-depth enumeration -- set timeouts, implement random search, add ML angle, etc.
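For item 2, a minimal sketch of what "limit leaves to forall variables" could look like, under the same toy-language assumptions as above (the `?a`/`?b` naming is just illustrative):

```python
# At higher depths, drop concrete constants and enumerate pattern terms whose
# leaves are only "forall" variables, each standing for an arbitrary subterm
# when the rule is later applied.
import itertools

OPS = ["add", "and", "or", "xor"]
FORALL_VARS = ["?a", "?b"]

def enumerate_patterns(depth):
    """Pattern terms up to the given depth; leaves are forall variables only."""
    patterns = list(FORALL_VARS)
    for _ in range(depth):
        patterns += [(op, l, r) for op in OPS
                     for l, r in itertools.product(patterns, repeat=2)]
    return patterns

# With two forall variables and four operators this stays far smaller than the
# concrete-term space, since constants like 0, 1, 2 no longer multiply the leaves.
print(len(enumerate_patterns(2)))
```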