uclnlp / inferbeddings

Injecting Background Knowledge in Neural Models via Adversarial Set Regularisation
MIT License

Closed form adversaries - next steps #17

Closed: tdmeeste closed this issue 7 years ago

tdmeeste commented 7 years ago

Next steps:

tdmeeste commented 7 years ago

@pminervini for the closed-form expressions for the other clauses, I think I'm up to speed on writing them down (and I'm having fun doing so - thanks @riedelcastro!), but I won't have much time next week. Here's my proposal: you start implementing the expressions we already have; as soon as the first ones are in place, I'll set up some synthetic dataset experiments, and after that I'll write down the expressions for the ones you plan to implement next. What do you think? In any case, I'm eager to see some numerical results!!

pminervini commented 7 years ago

@tdmeeste sounds great, thank you! I went through the calculations in `closed_form.tex` over the last few days and they are really bullet-proof - that's amazing!

I'll be adding the datasets @rockt obtained and the new regularisers over the next few days, starting immediately.

riedelcastro commented 7 years ago

Great progress here!

When testing this empirically, we need to validate that the objective values of the closed-form solution are always at least as high as those of the iterative solution, not (just) the final accuracies (which might well not be higher at all).
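
As a toy illustration of that check, here is a minimal numpy sketch, assuming a DistMult-style score and unit-norm adversarial embeddings for a hypothetical clause p(X, Y) => q(X, Y); the relation embeddings, dimension, and step size below are made up, and the real closed-form expressions are the ones in `closed_form.tex`:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10            # embedding dimension (hypothetical)
lr, steps = 0.1, 500

# Hypothetical relation embeddings for a clause p(X, Y) => q(X, Y),
# assuming a DistMult-style score: score_r(x, y) = sum(r * x * y).
r_p = rng.normal(size=k)
r_q = rng.normal(size=k)
d = r_p - r_q     # the clause is violated when score_p > score_q

def violation(x, y):
    # clause violation loss: [score_p(x, y) - score_q(x, y)]_+
    return max(0.0, float(np.sum(d * x * y)))

# Closed-form maximum over unit-norm x, y (via Cauchy-Schwarz):
#   max_{||x||=||y||=1} sum_i d_i * x_i * y_i = max_i |d_i|,
# attained at x = e_i, y = sign(d_i) * e_i for i = argmax_i |d_i|.
closed_form = float(np.max(np.abs(d)))

# Iterative adversary: projected gradient ascent on the unit sphere.
x = rng.normal(size=k); x /= np.linalg.norm(x)
y = rng.normal(size=k); y /= np.linalg.norm(y)
for _ in range(steps):
    x = x + lr * d * y          # gradient of sum(d * x * y) w.r.t. x
    x /= np.linalg.norm(x)      # project back onto the unit sphere
    y = y + lr * d * x
    y /= np.linalg.norm(y)

print(f"closed form: {closed_form:.4f}, iterative: {violation(x, y):.4f}")
assert closed_form >= violation(x, y) - 1e-8  # the invariant to validate
```

Since the closed form is the exact maximiser of the inner objective here, the assertion should hold regardless of how the iterative adversary is initialised.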

tdmeeste commented 7 years ago

Agreed, since the max approach was mostly, but not always, better than the sum approach. Maybe we can test whether the violation loss of the `--adv-pooling='max'` method approaches the closed-form loss as `--adv-batch-size` increases (in the case where `--adv-init-ground` is not set).
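
Roughly, such a check could look like the sketch below, reusing the toy setup from the previous comment. It uses plain random sampling on the unit sphere in place of the repo's gradient-based adversaries (mimicking random init when `--adv-init-ground` is not set), so the pooled loss approaches the closed-form bound only slowly with batch size, but it should never exceed it:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 10                                  # embedding dimension (hypothetical)
d = rng.normal(size=k)                  # stands in for r_p - r_q, as above
closed_form = float(np.max(np.abs(d)))

def sample_unit(n):
    # n random embeddings on the unit sphere (random initialisation,
    # i.e. no --adv-init-ground)
    v = rng.normal(size=(n, k))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

for batch_size in (10, 100, 1_000, 10_000):
    x, y = sample_unit(batch_size), sample_unit(batch_size)
    # 'max' pooling of the violation loss over the adversarial batch
    pooled = float(np.maximum(0.0, np.sum(d * x * y, axis=1)).max())
    print(f"adv batch size {batch_size:>6}: pooled loss {pooled:.4f} "
          f"(closed-form bound: {closed_form:.4f})")
```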

pminervini commented 7 years ago

Closing this one now - my feeling is that the closed-form solutions in the Appendix might suffice.