I am also trying to use the pretrained model to evaluate on the test set of the CNN/Daily Mail summarization dataset. However, I keep getting an error:
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value SentSelector/sent_encoder/bidirectional_rnn/bw/gru_cell/candidate/biases
[[Node: SentSelector/sent_encoder/bidirectional_rnn/bw/gru_cell/candidate/biases/read = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]]]
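For context: in TensorFlow 1.x this error means a variable is read before being initialized or restored from a checkpoint. A minimal standalone sketch, unrelated to this repo's code, that triggers the same class of error:

import tensorflow as tf

# A variable that is never initialized or restored.
v = tf.get_variable("biases", shape=[4], dtype=tf.float32)

with tf.Session() as sess:
    # Missing step: sess.run(tf.global_variables_initializer()) or
    # tf.train.Saver().restore(sess, checkpoint_path)
    sess.run(v)  # raises FailedPreconditionError: Attempting to use uninitialized value biases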
I am running main.py using the following parameters:
python main.py --mode evalall --model end2end --vocab_path ./data/finished_files/vocab --data_path ./data/finished_files/test.bin --decode_method greedy --eval_method loss --log_root log --single_pass 1 --exp_name exp_sample --load_best_eval_model True --eval_ckpt_path ./log/end2end/exp_sample/train/bestmodel-51000
I am wondering whether these are the correct parameters for running the pretrained model.
If you want to load the pretrained model, please set load_best_eval_model to False, since that flag makes the script load the model from the evaluation directory (eval(_${EVAL_METHOD})) instead of the checkpoint you pass via --eval_ckpt_path.
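For reference, your command with only that flag flipped (a sketch, assuming all the other paths stay as in your invocation above) would be:

python main.py --mode evalall --model end2end --vocab_path ./data/finished_files/vocab --data_path ./data/finished_files/test.bin --decode_method greedy --eval_method loss --log_root log --single_pass 1 --exp_name exp_sample --load_best_eval_model False --eval_ckpt_path ./log/end2end/exp_sample/train/bestmodel-51000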
Thanks for the great work!