Thanks for the PR! In general (as Matt Gardner once told me, and I agree), this flag was deliberately left out of the example configurations, since it is not good practice to look at test results while developing or hyperparameter-tuning a model. However, since this configuration is mainly for reproducing results, I agree that it should be there.
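For a one-off test run after development is finished, the usual alternative is AllenNLP's separate `evaluate` command. A sketch (the archive and data paths are placeholders, not files from this PR):

```sh
# Evaluate an already-trained model on held-out data without retraining;
# writes the resulting metrics to the given output file.
allennlp evaluate $OUTPUT_DIR/model.tar.gz /path/to/test.conllu \
    --output-file test_metrics.json
```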
Oh, that makes sense. Thanks for merging it anyway!
SUMMARY: The current config does not output `test_UAS`, `test_LAS`, `test_UEM`, `test_LEM`, or `test_loss` to standard output or to `metrics.json`, even though `test_data_path` is specified in the config file. Following another example config in AllenNLP, the solution is to simply add `"evaluate_on_test": true` to the config.
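For reference, a minimal sketch of where the flag sits in a training config (the data paths are placeholders and the other sections are omitted; this is not the actual config from this PR):

```jsonnet
{
  // dataset_reader, model, iterator, and trainer sections omitted for brevity
  "train_data_path": "/path/to/train.conllu",
  "validation_data_path": "/path/to/dev.conllu",
  "test_data_path": "/path/to/test.conllu",
  // Without this flag the test set is never evaluated, so no test_* keys
  // are written to metrics.json.
  "evaluate_on_test": true
}
```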
TEST: Ran `allennlp train $CONFIG -s $OUTPUT_DIR`, where `$CONFIG` is borrowed and slightly modified from the test config in AllenNLP.
Example output of `metrics.json` without `"evaluate_on_test": true` (i.e., the current config):

Example output of `metrics.json` with `"evaluate_on_test": true`: