Hi @PVinay737 ,
in fact, you were doing nothing wrong. Your report uncovered a recently introduced bug, which I've just fixed (https://github.com/lvapeab/multimodal_keras_wrapper/commit/f6d87f7607f263cd8517107202e1561aec2ac72a). You should update keras_wrapper to the latest version.
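For reference, how to pull in the fix depends on how the wrapper was installed; the paths in the traceback below suggest an editable checkout under src/keras-wrapper, so one of these should work (adjust to your setup):

# if installed directly from git/PyPI:
pip install --upgrade git+https://github.com/lvapeab/multimodal_keras_wrapper.git

# if running from an editable checkout:
cd src/keras-wrapper && git pull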
Cheers!
@lvapeab, that doesn't seem to be the fix:
python sample_ensemble.py --models trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_250_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_250_Adam_0.001/epoch_1 trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_250_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_250_Adam_0.001/epoch_2 --dataset datasets/Dataset_EuTrans_esen.pkl --text examples/EuTrans/test.en
[02/11/2018 17:13:05] Reading parameters from config.py
[02/11/2018 17:13:05] <<< Cupy not available. Using numpy. >>>
Using Theano backend.
[02/11/2018 17:13:06] <<< Cupy not available. Using numpy. >>>
[02/11/2018 17:13:06] Using an ensemble of 2 models
[02/11/2018 17:13:06] <<< Loading model from trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_250_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_250_Adam_0.001/epoch_1_Model_Wrapper.pkl ... >>>
[02/11/2018 17:13:06] <<< Loading model from trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_250_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_250_Adam_0.001/epoch_1.h5 ... >>>
[02/11/2018 17:13:10] <<< Loading optimized model... >>>
[02/11/2018 17:13:13] <<< Optimized model loaded. >>>
[02/11/2018 17:13:13] <<< Model loaded in 7.5471 seconds. >>>
[02/11/2018 17:13:13] <<< Loading model from trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_250_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_250_Adam_0.001/epoch_2_Model_Wrapper.pkl ... >>>
[02/11/2018 17:13:13] <<< Loading model from trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_250_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_250_Adam_0.001/epoch_2.h5 ... >>>
[02/11/2018 17:13:15] <<< Loading optimized model... >>>
[02/11/2018 17:13:18] <<< Optimized model loaded. >>>
[02/11/2018 17:13:18] <<< Model loaded in 5.2335 seconds. >>>
[02/11/2018 17:13:18] <<< Loading Dataset instance from datasets/Dataset_EuTrans_esen.pkl ... >>>
[02/11/2018 17:13:19] <<< Dataset instance loaded >>>
[02/11/2018 17:13:19] Removed "val" set outputs of type "text" with id "target_text".
[02/11/2018 17:13:19] Applying tokenization function: "tokenize_none".
[02/11/2018 17:13:19] Loaded "val" set inputs of data_type "text" with data_id "source_text" and length 2996.
[02/11/2018 17:13:19] Loaded "val" set inputs of data_type "ghost" with data_id "state_below" and length 2996.
[02/11/2018 17:13:19] Loaded "val" set inputs of type "file-name" with id "raw_source_text".
[02/11/2018 17:13:19] <<< Predicting outputs of val set >>>
[02/11/2018 17:13:26] We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
[02/11/2018 17:13:27] We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
Total cost of the translations: 25239.669452 Average cost of the translations: 8.424456
The sampling took: 104.785528 secs (Speed: 0.034975 sec/sample)
Traceback (most recent call last):
  File "sample_ensemble.py", line 61, in <module>
    sample_ensemble(args, params)
  File "/home/Vinay737/nmt-keras/nmt_keras/apply_model.py", line 113, in sample_ensemble
    verbose=args.verbose)
  File "/home/Vinay737/nmt-keras/src/keras-wrapper/keras_wrapper/utils.py", line 998, in decode_predictions_beam_search
    x_text))
  File "/home/Vinay737/nmt-keras/src/keras-wrapper/keras_wrapper/utils.py", line 997, in <lambda>
    map(lambda alignment, x_sentence: np.argmax(alignment[:, :max(1, len(x_sentence))], axis=1), alphas,
IndexError: too many indices for array
Just pushed (another) fix. Please try again.
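For what it's worth, the failing line slices the alignment matrix along two axes (target position x source position), so this IndexError means the alphas coming back from beam search were 1-D instead of 2-D and the second index had nothing to address. A minimal numpy illustration of just the indexing behaviour (not the actual decoder code):

import numpy as np

alignment = np.zeros((5, 7))          # expected shape: (target_len, source_len)
np.argmax(alignment[:, :3], axis=1)   # fine on a 2-D array

alignment = np.zeros(5)               # a 1-D array, as in the failing run
alignment[:, :3]                      # IndexError: too many indices for array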
I'd rather keep the sampling process independent of the evaluation. After sampling, you can run your desired evaluator. There is a script (utils/evaluate_from_file.py) for computing several metrics according to the coco evaluation package. You can use this (or any other) to compute BLEU.
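If you only need BLEU, any off-the-shelf scorer works on the plain-text translations that sample_ensemble.py writes out. A minimal sketch with nltk's corpus_bleu (just an illustration; the file names are placeholders, and this is not what evaluate_from_file.py does internally):

from nltk.translate.bleu_score import corpus_bleu

# hypotheses.txt: one sampled translation per line
# references.txt: the matching references, in the same line order
with open('hypotheses.txt') as f:
    hyps = [line.split() for line in f]
with open('references.txt') as f:
    refs = [[line.split()] for line in f]   # a single reference per sentence

print('BLEU: %.4f' % corpus_bleu(refs, hyps))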
After training the model, I tried decoding it with two iterations and I am getting the following error. Any fixes?