I believe our framework can be extended relatively easily to learn rule weights. I feel this is low-hanging fruit and may lead to better results without us having to worry about where to get the rules from. If @tdmeeste or @rockt have any cycles, this might be something for them to look at. If we already have the datasets prepared, it's just a matter of extending the TF loss. I have some ideas about what the loss would look like; maybe I'll find some time to hack this in as well.
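To make this concrete, here's a minimal sketch of how I imagine the learnable weights entering the loss, assuming per-clause loss terms (and their negated counterparts) are already available from the framework; all names here are placeholders:

```python
import tensorflow as tf

num_clauses = 3  # illustrative

# One learnable logit per clause; sigmoid squashes it into (0, 1).
rule_logits = tf.Variable(tf.zeros([num_clauses]))

def weighted_rule_loss(clause_loss, negated_clause_loss):
    # clause_loss / negated_clause_loss: [num_clauses] tensors with the
    # per-clause loss terms the framework already computes (placeholder names).
    w = tf.sigmoid(rule_logits)
    # w * L alone would be trivially minimised by w -> 0, so the negated
    # clause pulls in the other direction: w is pushed towards 1 where the
    # rule fits the data and towards 0 where it doesn't.
    return tf.reduce_sum(w * clause_loss + (1.0 - w) * negated_clause_loss)

loss = weighted_rule_loss(tf.constant([0.1, 0.9, 0.4]),
                          tf.constant([0.8, 0.2, 0.5]))
```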
Generally, I am looking for low-hanging fruit that adds heft to the paper and makes us less reliant on improvements from rule injection (which may or may not materialise).
Todo:
[X] adapt the clause parser syntax to define rule weights and learnable rule weights (hypothetical syntax sketched after this list)
[X] implement weighted loss (and negated weighted loss), as in the sketch above
[x] provide a way to easily print out weights per clause (might need a dictionary from clauses to variables; see the sketch after this list)
[ ] run on a synthetic dataset with partial transitivity to validate whether a non-0.5 weight is learned (toy data sketch after this list)
[ ] "dynamic weights" based on relation representations