Open jeremytanjianle opened 3 years ago

Love the paper. I've tried it on my own closed-domain dataset and achieved poor recall. I believe the low recall is due to imbalanced labels, but I value recall over precision. Is there some way to tune the model to increase recall at the cost of precision?

Unfortunately, I can't think of any straightforward way to increase recall, since the model is trained for generation using a token-level cross-entropy loss. Perhaps you can try lowering the probability of producing the `<arg>` token?
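A minimal sketch of the suggestion above — applying a negative bias to the `<arg>` token's logit at decoding time so the model emits it less often. The helper and the token id are hypothetical; the real id would come from your tokenizer's vocabulary, and the bias magnitude is a knob to tune on a dev set:

```python
def bias_token_logits(logits, token_id, bias):
    """Return a copy of `logits` with `bias` added to `token_id`'s logit.

    A negative `bias` lowers that token's probability after softmax,
    nudging decoding away from it. `token_id` here is a stand-in for
    whatever id your tokenizer assigns to <arg>.
    """
    adjusted = list(logits)
    adjusted[token_id] += bias  # bias < 0 suppresses the token
    return adjusted


# Toy 4-token vocabulary where <arg> is (hypothetically) id 2.
ARG_TOKEN_ID = 2
logits = [1.0, 0.5, 3.0, 0.2]
adjusted = bias_token_logits(logits, ARG_TOKEN_ID, bias=-2.0)
print(adjusted)  # the <arg> logit drops from 3.0 to 1.0; others unchanged
```

In practice you would hook this into your decoding loop (e.g. via a logits-processor step before sampling or beam search, if your generation library exposes one) and sweep the bias value to trade precision for recall.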