philipcori closed this issue 4 years ago
Oh, that function was a utility that I quickly wrote to evaluate a model. It shouldn't be there. I've removed it.
To evaluate the perplexity of a model, you can pass 'perplexity' as a KERAS_METRIC in config.py. This is now done by default:
https://github.com/lvapeab/nmt-keras/blob/5a29099098e47ff84579b5e8c28c7769c952918c/config.py#L32
PS: it requires updating Keras.
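For reference, the relevant setting looks roughly like this (a sketch; the exact variable name and line are in the linked commit):

# config.py: metrics computed by Keras during training
KERAS_METRICS = ['perplexity']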
Got it, thanks. However, it only computes perplexity during training. Is there a way to compute perplexity on the validation set? I would like the metric_name here to be 'perplexity' rather than 'sacrebleu':
callbacks.append(PrintPerformanceMetricOnEpochEndOrEachNUpdates(
    nmt_model,
    dataset,
    gt_id='target_text',
    metric_name=['sacrebleu'],
    set_name=['val'],
    batch_size=256,
    each_n_epochs=20,
    extra_vars=search_params,
    reload_epoch=0,
    is_text=True,
    input_text_id=input_text_id,
    index2word_y=vocab,
    sampling_type='max_likelihood',
    beam_search=True,
    save_path=nmt_model.model_path,
    start_eval_on_epoch=0,
    write_samples=True,
    write_type='list',
    verbose=True))
Thanks again for the help.
Added (https://github.com/MarcBS/multimodal_keras_wrapper/commit/22d10a950cec72fc64ea727bbfa1c02c2463edfa).
You can now evaluate with perplexity as any other metric, e.g.:
METRICS = ['sacrebleu', 'perplexity']
You'll need to update multimodal-keras-wrapper and nmt-keras.
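For reference, corpus-level perplexity is just the exponential of the average per-token negative log-likelihood. A minimal standalone sketch of the metric itself (not the wrapper's exact implementation):

import numpy as np

def corpus_perplexity(token_log_probs):
    # Perplexity = exp(mean negative log-likelihood per token).
    # token_log_probs: natural-log probabilities the model assigns
    # to the reference tokens.
    log_probs = np.asarray(token_log_probs, dtype=np.float64)
    return float(np.exp(-log_probs.mean()))

# A model assigning probability 0.25 to every token has perplexity 4:
print(corpus_perplexity(np.log([0.25, 0.25, 0.25, 0.25])))  # -> 4.0

With the updated packages, the callback from the earlier snippet can list it alongside BLEU, e.g. metric_name=['sacrebleu', 'perplexity'].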
Great, thank you.
Hello, I am trying to evaluate my model using the perplexity metric provided in keras-wrapper. However, it seems that nmt-keras was not integrated with it, as I get the following error: compute_perplexity() expects the arguments y_pred and y_true, and does not accept extra_vars or pred_list. Is there a way I can still evaluate perplexity?
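In case it helps before a proper fix: the two calling conventions could be bridged with a thin adapter. Everything below is hypothetical; the import path and the layout of extra_vars are assumptions based only on the error described above.

# Hypothetical adapter: the evaluation loop passes pred_list/extra_vars,
# while compute_perplexity() expects y_pred/y_true (per the error above).
# The import path and the extra_vars key layout are assumptions.
from keras_wrapper.extra.evaluation import compute_perplexity

def eval_perplexity(pred_list, extra_vars, split='val'):
    y_true = extra_vars[split]['references']  # assumed location of the references
    return compute_perplexity(y_pred=pred_list, y_true=y_true)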