Closed Victor0118 closed 5 years ago
==> decatt_sick_5e-4_0.001_0.5_0.1.log <==
INFO - pearson_r spearman_r mse KL-divergence loss
INFO - test 0.80094564 0.7184082390455326 0.3711671233177185 0.6171465432980202
Logic seems fine, just nitpicks. Do we have results for *QA now?
I haven't run *QA. Might do that later.
@daemon All of your comments are fixed.
With only one trial, the results of DecAtt on
Cool. There are still some extraneous comments, but I think it's in good enough shape.
Reference:
code: https://github.com/lanwuwei/SPM_toolkit/tree/master/DecAtt
paper: A Decomposable Attention Model for Natural Language Inference
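For context, the "attend" step of the referenced paper can be sketched roughly as below. This is a minimal illustration, not the PR's implementation: the feed-forward network F is replaced by the identity, and there are no learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_attention(a, b):
    """Soft-align two sentences as in the attend step of decomposable attention.

    a: (la, d) embeddings of sentence 1; b: (lb, d) embeddings of sentence 2.
    For brevity F is the identity here; the real model applies a learned
    MLP to each token before taking dot products.
    """
    e = a @ b.T                        # (la, lb) unnormalized alignment scores
    beta = softmax(e, axis=1) @ b      # (la, d): soft-aligned subphrase of b for each a_i
    alpha = softmax(e, axis=0).T @ a   # (lb, d): soft-aligned subphrase of a for each b_j
    return beta, alpha
```

Each output row is a convex combination of the other sentence's token vectors; the full model then feeds (a_i, beta_i) and (b_j, alpha_j) to compare-and-aggregate layers.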
@likicode @daemon Could you take a look at this PR when you are available?