When training a translation model with VQ outputs, perplexity is not a great fit for dev-set evaluation, and BLEU and the other metrics are not a great fit either. What I really want to use is `accuracy` - ideally computed over all predicted factors, but even accuracy on just the main token would be great.
In `constants`, a metric named `accuracy` is declared and set up, but it is never reported anywhere. Selecting it as the early-stopping metric fails with:

```
AssertionError: Early stopping metric accuracy not found in validation metrics.
```
I would love to see `accuracy` supported again rather than removed.
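For context, the metric I have in mind is plain token-level accuracy over non-padding positions of the validation targets. A minimal NumPy sketch (not Sockeye's actual implementation; the function name `token_accuracy`, the `pad_id` default, and the `(batch, length, vocab)` logits layout are assumptions for illustration):

```python
import numpy as np

def token_accuracy(logits: np.ndarray, labels: np.ndarray, pad_id: int = 0) -> float:
    """Fraction of non-padding target tokens predicted correctly.

    logits: (batch, length, vocab) unnormalized scores
    labels: (batch, length) integer token ids, with pad_id marking padding
    """
    preds = logits.argmax(axis=-1)          # greedy token prediction per position
    mask = labels != pad_id                 # ignore padding positions
    correct = (preds == labels) & mask
    return float(correct.sum()) / float(mask.sum())

# Toy example: 3 positions, last one is padding; first prediction right, second wrong.
logits = np.array([[[0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]]])
labels = np.array([[1, 2, 0]])
print(token_accuracy(logits, labels))  # 0.5
```

Extending this to all predicted factors would just mean averaging (or reporting separately) the same quantity per output factor.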