Closed — srtarun closed this issue 6 years ago
The error is unreadable. Has it been solved? If not, please paste your exact error on Pastebin so that I can help.
Regards, Aman
On Fri, May 4, 2018 at 6:24 PM, S R Tarun notifications@github.com wrote:
I got the following error while trying to run the command "sh train.sh data/monument_300 120000":
output:

Job id 0
Loading hparams from ../data/monument_300_model/hparams
Updating hparams.test_prefix: None -> ../data/monument_300/test
Updating hparams.num_train_steps: 12000 -> 120000
saving hparams to ../data/monument_300_model/hparams
saving hparams to ../data/monument_300_model/best_bleu/hparams
attention= attention_architecture=standard batch_size=128 beam_width=0 best_bleu=0 best_bleu_dir=../data/monument_300_model/best_bleu bpe_delimiter=None colocate_gradients_with_ops=True decay_factor=0.98 decay_steps=10000 dev_prefix=../data/monument_300/dev dropout=0.2 encoder_type=uni eos=</s> epoch_step=0 forget_bias=1.0 infer_batch_size=32 init_op=uniform init_weight=0.1 learning_rate=1.0 length_penalty_weight=0.0 log_device_placement=False max_gradient_norm=5.0 max_train=0 metrics=[u'bleu'] num_buckets=5 num_embeddings_partitions=0 num_gpus=1 num_layers=2 num_residual_layers=0 num_train_steps=120000 num_units=128 optimizer=sgd out_dir=../data/monument_300_model pass_hidden_state=True random_seed=None residual=False share_vocab=False sos=<s> source_reverse=False src=en src_max_len=50 src_max_len_infer=None src_vocab_file=../data/monument_300_model/vocab.en src_vocab_size=2228 start_decay_step=0 steps_per_external_eval=None steps_per_stats=100 test_prefix=../data/monument_300/test tgt=sparql tgt_max_len=50 tgt_max_len_infer=None tgt_vocab_file=../data/monument_300_model/vocab.sparql tgt_vocab_size=1763 time_major=True train_prefix=../data/monument_300/train unit_type=lstm vocab_prefix=../data/monument_300/vocab

Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/tarun/dbpedia/NSpM/nmt/nmt/nmt.py", line 495, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/tarun/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "/home/tarun/dbpedia/NSpM/nmt/nmt/nmt.py", line 488, in main
    run_main(FLAGS, default_hparams, train_fn, inference_fn)
  File "/home/tarun/dbpedia/NSpM/nmt/nmt/nmt.py", line 481, in run_main
    train_fn(hparams, target_session=target_session)
  File "nmt/train.py", line 171, in train
    train_model = model_helper.create_train_model(model_creator, hparams, scope)
  File "nmt/model_helper.py", line 69, in create_train_model
    src_dataset = tf.contrib.data.TextLineDataset(src_file)
AttributeError: 'module' object has no attribute 'TextLineDataset'
— View it on GitHub: https://github.com/dbpedia/neural-qa/issues/1
I think he is referring to the last line.
AttributeError: 'module' object has no attribute 'TextLineDataset'
The error is probably due to a different version of TF. Training works fine for me, and my TF version is 1.3.0 (for Python 2). What's yours?
My version of TF is 1.8.0 (Python 2). I'm not sure if this is the real problem, but I think even this counts as a bug: the model should be compatible with the latest versions as well.
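For context, `tf.contrib.data.TextLineDataset` was moved to `tf.data.TextLineDataset` around TF 1.4, which is why the call in `model_helper.py` fails on TF 1.8 but works on 1.3.0. A version-tolerant lookup could be sketched like this (the helper below is hypothetical, not part of the NSpM repo):

```python
def textline_dataset_cls(tf_module):
    """Return the TextLineDataset class from tf.data when available
    (TF >= 1.4), falling back to the old tf.contrib.data location.
    Hypothetical compatibility helper for illustration only."""
    data_api = getattr(tf_module, "data", None)
    if data_api is not None and hasattr(data_api, "TextLineDataset"):
        return data_api.TextLineDataset
    return tf_module.contrib.data.TextLineDataset

# The failing line in model_helper.py could then become, e.g.:
# src_dataset = textline_dataset_cls(tf)(src_file)
```

On TF 1.4+ alone, simply replacing `tf.contrib.data.TextLineDataset` with `tf.data.TextLineDataset` at nmt/model_helper.py line 69 should be enough.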