uclnlp / inferbeddings

Injecting Background Knowledge in Neural Models via Adversarial Set Regularisation
MIT License

Learning "Rule Weights" #4

Open riedelcastro opened 7 years ago

riedelcastro commented 7 years ago

I believe that our framework can be extended relatively easily to learn rule weights. I feel this is a low-hanging fruit, and it may lead to better results without our having to worry about where to get the rules from. If @tdmeeste or @rockt have any cycles, this may be something for them to look at. If we already have the datasets prepared, it's just a matter of extending the TF loss. I have some ideas about what the loss would look like. Maybe I'll find some time to hack this in as well.
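One way the extended loss could look (a minimal sketch, not the actual implementation — the per-rule losses, the `softplus` parameterisation, and all names here are assumptions) is to attach a learnable non-negative weight to each rule's adversarial inconsistency term:

```python
import numpy as np

def softplus(x):
    # Keeps the learned rule weights non-negative: w_r = softplus(theta_r).
    return np.log1p(np.exp(x))

def weighted_rule_loss(rule_losses, theta):
    # Total regulariser: sum over rules r of softplus(theta_r) * L_r,
    # where L_r is the (hypothetical) adversarial loss for rule r.
    return float(np.sum(softplus(theta) * rule_losses))

# Made-up per-rule adversarial losses and unconstrained weight parameters.
rule_losses = np.array([0.5, 0.1, 0.9])
theta = np.zeros(3)  # softplus(0) = log 2, so all rules start equally weighted

loss = weighted_rule_loss(rule_losses, theta)
```

The `theta` parameters would then be trained jointly with the model by gradient descent, so rules that consistently hurt the fact loss can be down-weighted instead of being trusted uniformly.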

Generally, I am looking for low-hanging fruit that adds heft to the paper and makes us less reliant on improvements from rule injection (which may or may not materialise).

Todo: