tensorflow / tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.

AttributeError: 'HParams' object has no attribute 'problem_hparams' #1177

Closed: h-karami closed this issue 5 years ago

h-karami commented 6 years ago

Dear all, after training my model, I ran this command:

```
t2t-decoder \
  --data_dir=Language_modeling/t2t_data \
  --problems=languagemodel_ptb10k \
  --model=universal_transformer \
  --hparams_set=universal_transformer_tiny \
  --hparams='sampling_method=random' \
  --output_dir=Language_modeling/t2t_train \
  --decode_hparams="beam_size=4,alpha=0.6" \
  --decode_from_file=Language_modeling/ref-translation.de \
  --decode_to_file=Language_modeling/translation.en
```

But I get this error:

```
INFO:tensorflow:Overriding hparams in transformer_tiny with sampling_method=random
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/trainer_lib.py:198: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=1
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0']
INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7ff712bfceb8>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_eval_distribute': None, '_device_fn': None, '_tf_config': gpu_options { per_process_gpu_memory_fraction: 1.0 }, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': None, '_log_step_count_steps': 100, '_protocol': None, '_session_config': gpu_options { per_process_gpu_memory_fraction: 0.95 } allow_soft_placement: true graph_options { optimizer_options { } }, '_save_checkpoints_steps': 1000, '_keep_checkpoint_max': 20, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': 'Language_modeling/t2t_train', 'use_tpu': False, 't2t_device_info': {'num_async_replicas': 1}, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7ff712bfcf28>}
WARNING:tensorflow:Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x7ff712389268>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:decode_hp.batch_size not specified; default=32
Traceback (most recent call last):
  File "/usr/local/bin/t2t-decoder", line 17, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/usr/local/bin/t2t-decoder", line 12, in main
    t2t_decoder.main(argv)
  File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/bin/t2t_decoder.py", line 190, in main
    decode(estimator, hp, decode_hp)
  File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/bin/t2t_decoder.py", line 90, in decode
    checkpoint_path=FLAGS.checkpoint_path)
  File "/usr/local/lib/python3.6/dist-packages/tensor2tensor/utils/decoding.py", line 333, in decode_from_file
    p_hp = hparams.problem_hparams
AttributeError: 'HParams' object has no attribute 'problem_hparams'
```

Can anyone help?

lkluo commented 6 years ago

Which t2t version are you using? Try replacing --problems with --problem.
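
Something like this (a sketch only: your paths are kept as-is, and the flag rename is the only change):

```
# Check which tensor2tensor release is installed (assumes a pip install).
pip show tensor2tensor

# The original invocation with --problems renamed to --problem,
# which is the spelling newer t2t releases expect.
t2t-decoder \
  --data_dir=Language_modeling/t2t_data \
  --problem=languagemodel_ptb10k \
  --model=universal_transformer \
  --hparams_set=universal_transformer_tiny \
  --hparams='sampling_method=random' \
  --output_dir=Language_modeling/t2t_train \
  --decode_hparams="beam_size=4,alpha=0.6" \
  --decode_from_file=Language_modeling/ref-translation.de \
  --decode_to_file=Language_modeling/translation.en
```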

h-karami commented 6 years ago

@lkluo Thank you for the reply. I replaced --problems with --problem and that error was fixed, but I then encountered some other errors. To fix them all, I had to downgrade t2t to version 1.8 and change "hparams_set=universal_transformer_tiny" to "hparams_set=universal_transformer_base".
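
In case it helps others, roughly what I ended up running (a sketch: 1.8.0 is an assumed point release, and the problem flag spelling may differ between t2t versions):

```
# Downgrade to the 1.8 release line (1.8.0 is an assumed point release;
# the thread only says "version 1.8").
pip install tensor2tensor==1.8.0

# Rerun decoding with the base hparams set instead of tiny.
# Note: the problem flag spelling (--problem vs. --problems) changed
# between t2t releases, so adjust it to match the installed version.
t2t-decoder \
  --data_dir=Language_modeling/t2t_data \
  --problem=languagemodel_ptb10k \
  --model=universal_transformer \
  --hparams_set=universal_transformer_base \
  --hparams='sampling_method=random' \
  --output_dir=Language_modeling/t2t_train \
  --decode_hparams="beam_size=4,alpha=0.6" \
  --decode_from_file=Language_modeling/ref-translation.de \
  --decode_to_file=Language_modeling/translation.en
```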