lvapeab / nmt-keras

Neural Machine Translation with Keras
http://nmt-keras.readthedocs.io
MIT License

IOError: cannot write mode RGBA as JPEG #84

Closed. VP007-py closed this issue 6 years ago.

VP007-py commented 6 years ago

After running main.py, I get the following error:

Using Theano backend.
[26/10/2018 19:19:01] <<< Cupy not available. Using numpy. >>>
[26/10/2018 19:19:01] Running training.
[26/10/2018 19:19:01] Building EuTrans_esen dataset
[26/10/2018 19:19:01]   Applying tokenization function: "tokenize_none".
[26/10/2018 19:19:01] Creating vocabulary for data with data_id 'target_text'.
[26/10/2018 19:19:01]    Total: 513 unique words in 9900 sentences with a total of 98304 words.
[26/10/2018 19:19:01] Creating dictionary of all words
[26/10/2018 19:19:01] Loaded "train" set outputs of data_type "text" with data_id "target_text" and length 9900.
[26/10/2018 19:19:01] Loaded "train" set inputs of type "file-name" with id "raw_target_text".
[26/10/2018 19:19:01]   Applying tokenization function: "tokenize_none".
[26/10/2018 19:19:01] Loaded "val" set outputs of data_type "text" with data_id "target_text" and length 100.
[26/10/2018 19:19:01] Loaded "val" set inputs of type "file-name" with id "raw_target_text".
[26/10/2018 19:19:01]   Applying tokenization function: "tokenize_none".
[26/10/2018 19:19:01] Loaded "test" set outputs of data_type "text" with data_id "target_text" and length 2996.
[26/10/2018 19:19:01] Loaded "test" set inputs of type "file-name" with id "raw_target_text".
[26/10/2018 19:19:01]   Applying tokenization function: "tokenize_none".
[26/10/2018 19:19:01] Creating vocabulary for data with data_id 'source_text'.
[26/10/2018 19:19:01]    Total: 686 unique words in 9900 sentences with a total of 96172 words.
[26/10/2018 19:19:01] Creating dictionary of all words
[26/10/2018 19:19:01] Loaded "train" set inputs of data_type "text" with data_id "source_text" and length 9900.
[26/10/2018 19:19:01]   Applying tokenization function: "tokenize_none".
[26/10/2018 19:19:01] Creating vocabulary for data with data_id 'state_below'.
[26/10/2018 19:19:01]    Total: 513 unique words in 9900 sentences with a total of 98304 words.
[26/10/2018 19:19:01] Creating dictionary of all words
[26/10/2018 19:19:01] Loaded "train" set inputs of data_type "text" with data_id "state_below" and length 9900.
[26/10/2018 19:19:01] Loaded "train" set inputs of type "file-name" with id "raw_source_text".
[26/10/2018 19:19:01]   Applying tokenization function: "tokenize_none".
[26/10/2018 19:19:01] Loaded "val" set inputs of data_type "text" with data_id "source_text" and length 100.
[26/10/2018 19:19:01] Loaded "val" set inputs of data_type "ghost" with data_id "state_below" and length 100.
[26/10/2018 19:19:01] Loaded "val" set inputs of type "file-name" with id "raw_source_text".
[26/10/2018 19:19:01]   Applying tokenization function: "tokenize_none".
[26/10/2018 19:19:01] Loaded "test" set inputs of data_type "text" with data_id "source_text" and length 2996.
[26/10/2018 19:19:01] Loaded "test" set inputs of data_type "ghost" with data_id "state_below" and length 2996.
[26/10/2018 19:19:01] Loaded "test" set inputs of type "file-name" with id "raw_source_text".
[26/10/2018 19:19:01] Keeping 1 captions per input on the val set.
[26/10/2018 19:19:01] Samples reduced to 100 in val set.
[26/10/2018 19:19:01] <<< Saving Dataset instance to datasets//Dataset_EuTrans_esen.pkl ... >>>
[26/10/2018 19:19:01] <<< Dataset instance saved >>>
[26/10/2018 19:19:01] <<< Building AttentionRNNEncoderDecoder Translation_Model >>>
-----------------------------------------------------------------------------------
        TranslationModel instance
-----------------------------------------------------------------------------------
_model_type: AttentionRNNEncoderDecoder
name: EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_32_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_32_Adam_0.001
model_path: trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_32_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_32_Adam_0.001/
verbose: 1

Params:
    ACCUMULATE_GRADIENTS: 1
    ADDITIONAL_OUTPUT_MERGE_MODE: Add
    ALIGN_FROM_RAW: True
    ALPHA_FACTOR: 0.6
    AMSGRAD: False
    APPLY_DETOKENIZATION: False
    ATTENTION_DROPOUT_P: 0.0
    ATTENTION_MODE: add
    ATTENTION_SIZE: 32
    BATCH_NORMALIZATION_MODE: 1
    BATCH_SIZE: 50
    BEAM_SEARCH: True
    BEAM_SIZE: 6
    BETA_1: 0.9
    BETA_2: 0.999
    BIDIRECTIONAL_DEEP_ENCODER: True
    BIDIRECTIONAL_ENCODER: True
    BIDIRECTIONAL_MERGE_MODE: concat
    BPE_CODES_PATH: examples/EuTrans//training_codes.joint
    CLASSIFIER_ACTIVATION: softmax
    CLIP_C: 5.0
    CLIP_V: 0.0
    COVERAGE_NORM_FACTOR: 0.2
    COVERAGE_PENALTY: False
    DATASET_NAME: EuTrans
    DATASET_STORE_PATH: datasets/
    DATA_AUGMENTATION: False
    DATA_ROOT_PATH: examples/EuTrans/
    DECODER_HIDDEN_SIZE: 32
    DECODER_RNN_TYPE: ConditionalLSTM
    DEEP_OUTPUT_LAYERS: [('linear', 32)]
    DETOKENIZATION_METHOD: detokenize_none
    DOUBLE_STOCHASTIC_ATTENTION_REG: 0.0
    DROPOUT_P: 0.0
    EARLY_STOP: True
    EMBEDDINGS_FREQ: 1
    EMBEDDINGS_LAYER_NAMES: ['source_word_embedding', 'target_word_embedding']
    EMBEDDINGS_METADATA: None
    ENCODER_HIDDEN_SIZE: 32
    ENCODER_RNN_TYPE: LSTM
    EPOCHS_FOR_SAVE: 1
    EPSILON: 1e-07
    EVAL_EACH: 1
    EVAL_EACH_EPOCHS: True
    EVAL_ON_SETS: ['val']
    EVAL_ON_SETS_KERAS: []
    EXTRA_NAME: 
    FF_SIZE: 128
    FILL: end
    FORCE_RELOAD_VOCABULARY: False
    HEURISTIC: 0
    HOMOGENEOUS_BATCHES: False
    INIT_ATT: glorot_uniform
    INIT_FUNCTION: glorot_uniform
    INIT_LAYERS: ['tanh']
    INNER_INIT: orthogonal
    INPUTS_IDS_DATASET: ['source_text', 'state_below']
    INPUTS_IDS_MODEL: ['source_text', 'state_below']
    INPUT_VOCABULARY_SIZE: 689
    JOINT_BATCHES: 4
    LABEL_SMOOTHING: 0.0
    LABEL_WORD_EMBEDDINGS_WITH_VOCAB: True
    LENGTH_NORM_FACTOR: 0.2
    LENGTH_PENALTY: False
    LOG_DIR: tensorboard_logs
    LOSS: categorical_crossentropy
    LR: 0.001
    LR_DECAY: None
    LR_GAMMA: 0.8
    LR_HALF_LIFE: 100
    LR_REDUCER_EXP_BASE: -0.5
    LR_REDUCER_TYPE: exponential
    LR_REDUCE_EACH_EPOCHS: False
    LR_START_REDUCTION_ON_EPOCH: 0
    MAPPING: examples/EuTrans//mapping.es_en.pkl
    MAXLEN_GIVEN_X: True
    MAXLEN_GIVEN_X_FACTOR: 2
    MAX_EPOCH: 500
    MAX_INPUT_TEXT_LEN: 50
    MAX_OUTPUT_TEXT_LEN: 50
    MAX_OUTPUT_TEXT_LEN_TEST: 150
    METRICS: ['coco']
    MINLEN_GIVEN_X: True
    MINLEN_GIVEN_X_FACTOR: 3
    MIN_OCCURRENCES_INPUT_VOCAB: 0
    MIN_OCCURRENCES_OUTPUT_VOCAB: 0
    MODE: training
    MODEL_NAME: EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_32_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_32_Adam_0.001
    MODEL_SIZE: 32
    MODEL_TYPE: AttentionRNNEncoderDecoder
    MOMENTUM: 0.0
    MULTIHEAD_ATTENTION_ACTIVATION: linear
    NESTEROV_MOMENTUM: False
    NOISE_AMOUNT: 0.01
    NORMALIZE_SAMPLING: False
    N_GPUS: 1
    N_HEADS: 8
    N_LAYERS_DECODER: 1
    N_LAYERS_ENCODER: 1
    N_SAMPLES: 5
    OPTIMIZED_SEARCH: True
    OPTIMIZER: Adam
    OUTPUTS_IDS_DATASET: ['target_text']
    OUTPUTS_IDS_MODEL: ['target_text']
    OUTPUT_VOCABULARY_SIZE: 516
    PAD_ON_BATCH: True
    PARALLEL_LOADERS: 1
    PATIENCE: 10
    POS_UNK: True
    REBUILD_DATASET: True
    RECURRENT_DROPOUT_P: 0.0
    RECURRENT_INPUT_DROPOUT_P: 0.0
    RECURRENT_WEIGHT_DECAY: 0.0
    REGULARIZATION_FN: L2
    RELOAD: 0
    RELOAD_EPOCH: True
    RHO: 0.9
    SAMPLE_EACH_UPDATES: 300
    SAMPLE_ON_SETS: ['train', 'val']
    SAMPLE_WEIGHTS: True
    SAMPLING: max_likelihood
    SAMPLING_SAVE_MODE: list
    SAVE_EACH_EVALUATION: True
    SCALE_SOURCE_WORD_EMBEDDINGS: False
    SCALE_TARGET_WORD_EMBEDDINGS: False
    SEARCH_PRUNING: False
    SKIP_VECTORS_HIDDEN_SIZE: 32
    SKIP_VECTORS_SHARED_ACTIVATION: tanh
    SOURCE_TEXT_EMBEDDING_SIZE: 32
    SRC_LAN: es
    SRC_PRETRAINED_VECTORS: None
    SRC_PRETRAINED_VECTORS_TRAINABLE: True
    START_EVAL_ON_EPOCH: 1
    START_SAMPLING_ON_EPOCH: 1
    STOP_METRIC: Bleu_4
    STORE_PATH: trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_32_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_32_Adam_0.001/
    TARGET_TEXT_EMBEDDING_SIZE: 32
    TASK_NAME: EuTrans
    TEMPERATURE: 1
    TENSORBOARD: True
    TEXT_FILES: {'test': 'test.', 'train': 'training.', 'val': 'dev.'}
    TIE_EMBEDDINGS: False
    TOKENIZATION_METHOD: tokenize_none
    TOKENIZE_HYPOTHESES: True
    TOKENIZE_REFERENCES: True
    TRAINABLE_DECODER: True
    TRAINABLE_ENCODER: True
    TRAIN_ON_TRAINVAL: False
    TRG_LAN: en
    TRG_PRETRAINED_VECTORS: None
    TRG_PRETRAINED_VECTORS_TRAINABLE: True
    USE_BATCH_NORMALIZATION: True
    USE_CUDNN: True
    USE_L1: False
    USE_L2: False
    USE_NOISE: False
    USE_PRELU: False
    USE_TF_OPTIMIZER: True
    VERBOSE: 1
    WARMUP_EXP: -1.5
    WEIGHT_DECAY: 0.0001
    WORD_EMBEDDINGS_LABELS: ['source_text', 'target_text']
    WRITE_VALID_SAMPLES: True
    layer: ('linear', 32)
-----------------------------------------------------------------------------------
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
source_text (InputLayer)        (None, None)         0                                            
__________________________________________________________________________________________________
source_word_embedding (Embeddin (None, None, 32)     22048       source_text[0][0]                
__________________________________________________________________________________________________
src_embedding_batch_normalizati (None, None, 32)     128         source_word_embedding[0][0]      
__________________________________________________________________________________________________
remove_mask_1 (RemoveMask)      (None, None, 32)     0           src_embedding_batch_normalization
__________________________________________________________________________________________________
bidirectional_encoder_LSTM (Bid (None, None, 64)     16640       remove_mask_1[0][0]              
__________________________________________________________________________________________________
annotations_batch_normalization (None, None, 64)     256         bidirectional_encoder_LSTM[0][0] 
__________________________________________________________________________________________________
source_text_mask (GetMask)      (None, None, 32)     0           src_embedding_batch_normalization
__________________________________________________________________________________________________
annotations (ApplyMask)         (None, None, 64)     0           annotations_batch_normalization[0
                                                                 source_text_mask[0][0]           
__________________________________________________________________________________________________
state_below (InputLayer)        (None, None)         0                                            
__________________________________________________________________________________________________
ctx_mean (MaskedMean)           (None, 64)           0           annotations[0][0]                
__________________________________________________________________________________________________
target_word_embedding (Embeddin (None, None, 32)     16512       state_below[0][0]                
__________________________________________________________________________________________________
initial_state (Dense)           (None, 32)           2080        ctx_mean[0][0]                   
__________________________________________________________________________________________________
initial_memory (Dense)          (None, 32)           2080        ctx_mean[0][0]                   
__________________________________________________________________________________________________
state_below_batch_normalization (None, None, 32)     128         target_word_embedding[0][0]      
__________________________________________________________________________________________________
initial_state_batch_normalizati (None, 32)           128         initial_state[0][0]              
__________________________________________________________________________________________________
initial_memory_batch_normalizat (None, 32)           128         initial_memory[0][0]             
__________________________________________________________________________________________________
decoder_AttConditionalLSTMCond  [(None, None, 32), ( 23873       state_below_batch_normalization[0
                                                                 annotations[0][0]                
                                                                 initial_state_batch_normalization
                                                                 initial_memory_batch_normalizatio
__________________________________________________________________________________________________
proj_h0_batch_normalization (Ba (None, None, 32)     128         decoder_AttConditionalLSTMCond[0]
__________________________________________________________________________________________________
logit_ctx (TimeDistributed)     (None, None, 32)     2080        decoder_AttConditionalLSTMCond[0]
__________________________________________________________________________________________________
logit_lstm (TimeDistributed)    (None, None, 32)     1056        proj_h0_batch_normalization[0][0]
__________________________________________________________________________________________________
permute_general_1 (PermuteGener (None, None, 32)     0           logit_ctx[0][0]                  
__________________________________________________________________________________________________
logit_emb (TimeDistributed)     (None, None, 32)     1056        state_below_batch_normalization[0
__________________________________________________________________________________________________
out_layer_mlp_batch_normalizati (None, None, 32)     128         logit_lstm[0][0]                 
__________________________________________________________________________________________________
out_layer_ctx_batch_normalizati (None, None, 32)     128         permute_general_1[0][0]          
__________________________________________________________________________________________________
out_layer_emb_batch_normalizati (None, None, 32)     128         logit_emb[0][0]                  
__________________________________________________________________________________________________
additional_input (Add)          (None, None, 32)     0           out_layer_mlp_batch_normalization
                                                                 out_layer_ctx_batch_normalization
                                                                 out_layer_emb_batch_normalization
__________________________________________________________________________________________________
activation_1 (Activation)       (None, None, 32)     0           additional_input[0][0]           
__________________________________________________________________________________________________
linear_0 (TimeDistributed)      (None, None, 32)     1056        activation_1[0][0]               
__________________________________________________________________________________________________
out_layer_linear_0_batch_normal (None, None, 32)     128         linear_0[0][0]                   
__________________________________________________________________________________________________
target_text (TimeDistributed)   (None, None, 516)    17028       out_layer_linear_0_batch_normaliz
==================================================================================================
Total params: 106,917
Trainable params: 106,213
Non-trainable params: 704
__________________________________________________________________________________________________
[26/10/2018 19:19:06] Preparing optimizer: Adam [LR: 0.001 - LOSS: categorical_crossentropy - CLIP_C 5.0 - CLIP_V  0.0 - LR_OPTIMIZER_DECAY 0.0 - ACCUMULATE_GRADIENTS 1] and compiling.
[26/10/2018 19:19:06] Starting training!
[26/10/2018 19:19:06] <<< Training model >>>
[26/10/2018 19:19:06] Training parameters: {'reload_epoch': 0, 'wo_da_patch_type': 'whole', 'shuffle': True, 'verbose': 1, 'data_augmentation': False, 'class_weights': None, 'epochs_for_save': 1, 'lr_reducer_exp_base': -0.5, 'lr_gamma': 0.8, 'start_eval_on_epoch': 1, 'n_epochs': 500, 'patience': 10, 'tensorboard': True, 'epoch_offset': 0, 'da_patch_type': 'resize_and_rndcrop', 'mean_substraction': False, 'normalize': True, 'n_gpus': 1, 'n_parallel_loaders': 1, 'each_n_epochs': 1, 'patience_check_split': 'val', 'da_enhance_list': [], 'reduce_each_epochs': False, 'lr_warmup_exp': -1.5, 'lr_reducer_type': 'exponential', 'tensorboard_params': {'batch_size': 50, 'write_grads': False, 'write_images': False, 'label_word_embeddings_with_vocab': True, 'embeddings_metadata': None, 'write_graph': True, 'embeddings_layer_names': ['source_word_embedding', 'target_word_embedding'], 'word_embeddings_labels': ['source_text', 'target_text'], 'embeddings_freq': 1, 'log_dir': 'tensorboard_logs', 'histogram_freq': 0}, 'metric_check': 'Bleu_4', 'start_reduction_on_epoch': 0, 'homogeneous_batches': False, 'batch_size': 50, 'num_iterations_val': None, 'normalization_type': '(-1)-1', 'initial_lr': 0.001, 'lr_half_life': 100, 'lr_decay': None, 'joint_batches': 4, 'maxlen': 50, 'eval_on_epochs': True, 'eval_on_sets': [], 'extra_callbacks': [<keras_wrapper.extra.callbacks.EvalPerformance object at 0x7f9963398e10>, <keras_wrapper.extra.callbacks.Sample object at 0x7f98f3e3f050>]}
[26/10/2018 19:19:09] SeqOptimizer apply <theano.scan_module.scan_opt.PushOutScanOutput object at 0x7f9925cac810>
[26/10/2018 19:19:09] Traceback:
[26/10/2018 19:19:09] Traceback (most recent call last):
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 251, in apply
    sub_prof = optimizer.optimize(fgraph)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/gof/opt.py", line 97, in optimize
    ret = self.apply(fgraph, *args, **kwargs)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 685, in apply
    node = self.process_node(fgraph, node)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 745, in process_node
    node, args)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 854, in push_out_inner_vars
    add_as_nitsots)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 906, in add_nitsot_outputs
    reason='scanOp_pushout_output')
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/gof/toolbox.py", line 569, in replace_all_validate_remove
    chk = fgraph.replace_all_validate(replacements, reason)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/gof/toolbox.py", line 543, in replace_all_validate
    fgraph.validate()
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/gof/toolbox.py", line 434, in validate_
    ret = fgraph.execute_callbacks('validate')
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/gof/fg.py", line 594, in execute_callbacks
    fn(self, *args, **kwargs)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/theano/gof/toolbox.py", line 600, in validate
    raise theano.gof.InconsistencyError("Trying to reintroduce a removed node")
InconsistencyError: Trying to reintroduce a removed node

[26/10/2018 19:19:36] We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
[26/10/2018 19:19:38] We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
Epoch 1/500
198/198 [==============================] - 126s 636ms/step - loss: 6.5383

[26/10/2018 19:21:48] <<< Saving model to trained_models/EuTrans_esen_AttentionRNNEncoderDecoder_src_emb_32_bidir_True_enc_LSTM_32_dec_ConditionalLSTM_32_deepout_linear_trg_emb_32_Adam_0.001//epoch_1 ... >>>
[26/10/2018 19:21:48] <<< Model saved >>>

[26/10/2018 19:21:48] <<< Predicting outputs of val set >>>
Sampling 100/100  -  ETA: 0s 
 Total cost of the translations: 868.602650      Average cost of the translations: 8.686026
The sampling took: 7.614410 secs (Speed: 0.076144 sec/sample)

[26/10/2018 19:21:56] Prediction output 0: target_text (text)
[26/10/2018 19:21:56] Decoding beam search prediction ...
[26/10/2018 19:21:56] Using heuristic 0
[26/10/2018 19:21:56] Evaluating on metric coco
[26/10/2018 19:22:03] Computing coco scores on the val split...
[26/10/2018 19:22:03] Bleu_1: 0.367207533767
[26/10/2018 19:22:03] Bleu_2: 0.279575331521
[26/10/2018 19:22:03] Bleu_3: 0.218195356364
[26/10/2018 19:22:03] Bleu_4: 0.169279634999
[26/10/2018 19:22:03] CIDEr: 1.329316007286107
[26/10/2018 19:22:03] METEOR: 0.172640741365
[26/10/2018 19:22:03] ROUGE_L: 0.4674874118613199
[26/10/2018 19:22:03] TER: 0.62044534413
[26/10/2018 19:22:03] Done evaluating on metric coco
Traceback (most recent call last):
  File "main.py", line 53, in <module>
    train_model(parameters, args.dataset)
  File "/home/Vinay737/nmt-keras/nmt_keras/training.py", line 159, in train_model
    nmt_model.trainNet(dataset, training_params)
  File "/home/Vinay737/nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 915, in trainNet
    self.__train(ds, params)
  File "/home/Vinay737/nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 1140, in __train
    initial_epoch=params['epoch_offset'])
  File "/home/Vinay737/nmt-keras/src/keras/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/Vinay737/nmt-keras/src/keras/keras/engine/training.py", line 1480, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/Vinay737/nmt-keras/src/keras/keras/engine/training_generator.py", line 251, in fit_generator
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/home/Vinay737/nmt-keras/src/keras/keras/callbacks.py", line 78, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/home/Vinay737/nmt-keras/src/keras-wrapper/keras_wrapper/extra/callbacks.py", line 280, in on_epoch_end
    self.evaluate(epoch, counter_name='epoch')
  File "/home/Vinay737/nmt-keras/src/keras-wrapper/keras_wrapper/extra/callbacks.py", line 554, in evaluate
    self.set_name, upperbound=self.max_plot)
  File "/home/Vinay737/nmt-keras/src/keras-wrapper/keras_wrapper/cnn_model.py", line 2479, in plot
    plt.savefig(plot_file)
  File "/usr/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 688, in savefig
    res = fig.savefig(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/matplotlib/figure.py", line 1565, in savefig
    self.canvas.print_figure(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 2232, in print_figure
    **kwargs)
  File "/usr/lib/python2.7/dist-packages/matplotlib/backends/backend_agg.py", line 583, in print_jpg
    return image.save(filename_or_obj, format='jpeg', **options)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/PIL/Image.py", line 1969, in save
    save_handler(self, fp, filename)
  File "/home/Vinay737/.local/lib/python2.7/site-packages/PIL/JpegImagePlugin.py", line 622, in _save
    raise IOError("cannot write mode %s as JPEG" % im.mode)
IOError: cannot write mode RGBA as JPEG

Any suggestions for fixing it?
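
For reference, the failure reproduces outside of nmt-keras with Pillow alone. A minimal sketch (the filename is made up; JPEG has no alpha channel, so recent Pillow versions refuse to encode RGBA images):

# Standalone reproduction of the error, independent of nmt-keras.
from PIL import Image

im = Image.new("RGBA", (10, 10))  # an image with an alpha channel
try:
    im.save("test.jpg", format="JPEG")
except IOError as e:  # IOError is an alias of OSError on Python 3
    print(e)  # cannot write mode RGBA as JPEG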

lvapeab commented 6 years ago

Hi @VP007-py,

Do you have the Pillow package up to date? This seems to have been an old bug in that package.

In any case, I've just made skipping the performance plots the default option. Do a git pull and everything should work fine.
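
If you still want the plots, a generic workaround (a sketch with dummy data and hypothetical filenames, not the exact patch applied here) is to save the figure as PNG, which supports an alpha channel, or to strip the alpha channel before encoding to JPEG:

# Generic workaround sketch, not the actual nmt-keras patch.
import matplotlib
matplotlib.use("Agg")  # headless backend, as on a training server
import matplotlib.pyplot as plt
from PIL import Image

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [0.37, 0.42, 0.45], label="Bleu_4")  # dummy data
ax.legend()

# Option 1: PNG supports RGBA, so this always works.
fig.savefig("epoch_1.png")

# Option 2: if a JPEG is really needed, drop the alpha channel first.
Image.open("epoch_1.png").convert("RGB").save("epoch_1.jpg", "JPEG")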

Cheers

VP007-py commented 6 years ago

Working!