Unity-Technologies / ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
https://unity.com/products/machine-learning-agents

Continuing training with --load throws 'Key not Found' error in checkpoint #2678

Closed. Phantomb closed this issue 4 years ago.

Phantomb commented 5 years ago

Description: When trying to continue training with --load, TensorFlow throws a NotFoundError for a key that is missing from the checkpoint, specifically Key beta1_power_1. What causes this, and is it fixable?

Console logs / stack traces

(ml-agents) C:\Users\user\path\to\ml-agents>mlagents-learn trainer_config.yaml --run-id=exploreSkill_0924D --train --load

                        ▄▄▄▓▓▓▓
                   ╓▓▓▓▓▓▓█▓▓▓▓▓
              ,▄▄▄m▀▀▀'  ,▓▓▓▀▓▓▄                           ▓▓▓  ▓▓▌
            ▄▓▓▓▀'      ▄▓▓▀  ▓▓▓      ▄▄     ▄▄ ,▄▄ ▄▄▄▄   ,▄▄ ▄▓▓▌▄ ▄▄▄    ,▄▄
          ▄▓▓▓▀        ▄▓▓▀   ▐▓▓▌     ▓▓▌   ▐▓▓ ▐▓▓▓▀▀▀▓▓▌ ▓▓▓ ▀▓▓▌▀ ^▓▓▌  ╒▓▓▌
        ▄▓▓▓▓▓▄▄▄▄▄▄▄▄▓▓▓      ▓▀      ▓▓▌   ▐▓▓ ▐▓▓    ▓▓▓ ▓▓▓  ▓▓▌   ▐▓▓▄ ▓▓▌
        ▀▓▓▓▓▀▀▀▀▀▀▀▀▀▀▓▓▄     ▓▓      ▓▓▌   ▐▓▓ ▐▓▓    ▓▓▓ ▓▓▓  ▓▓▌    ▐▓▓▐▓▓
          ^█▓▓▓        ▀▓▓▄   ▐▓▓▌     ▓▓▓▓▄▓▓▓▓ ▐▓▓    ▓▓▓ ▓▓▓  ▓▓▓▄    ▓▓▓▓`
            '▀▓▓▓▄      ^▓▓▓  ▓▓▓       └▀▀▀▀ ▀▀ ^▀▀    `▀▀ `▀▀   '▀▀    ▐▓▓▌
               ▀▀▀▀▓▄▄▄   ▓▓▓▓▓▓,                                      ▓▓▓▓▀
                   `▀█▓▓▓▓▓▓▓▓▓▌
                        ¬`▀▀▀█▓

INFO:mlagents.trainers:{'--base-port': '5005',
 '--curriculum': 'None',
 '--debug': False,
 '--docker-target-name': 'None',
 '--env': 'None',
 '--help': False,
 '--keep-checkpoints': '5',
 '--lesson': '0',
 '--load': True,
 '--no-graphics': False,
 '--num-envs': '1',
 '--num-runs': '1',
 '--run-id': 'exploreSkill_0924D',
 '--sampler': 'None',
 '--save-freq': '50000',
 '--seed': '-1',
 '--slow': False,
 '--train': True,
 '<trainer-config-path>': 'trainer_config.yaml'}
INFO:mlagents.envs:Start training by pressing the Play button in the Unity Editor.
INFO:mlagents.envs:
'Academy' started successfully!
Unity Academy name: Academy
        Number of Brains: 1
        Number of Training Brains : 1
        Reset Parameters :

Unity brain name: Explore_LearningBrain
        Number of Visual Observations (per agent): 0
        Vector Observation space size (per agent): 11
        Number of stacked Vector Observation: 1
        Vector Action space type: discrete
        Vector Action space size (per agent): [5]
        Vector Action descriptions:
2019-10-04 11:49:46.384233: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:mlagents.trainers:Loading Model for brain Explore_LearningBrain
INFO:tensorflow:Restoring parameters from ./models/exploreSkill_0924D-0/Explore_LearningBrain\model-1803501.cptk
2019-10-04 11:49:47.341394: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1273] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key beta1_power_1 not found in checkpoint
Traceback (most recent call last):
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
    return fn(*args)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
    status, run_metadata)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Key beta1_power_1 not found in checkpoint
         [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\user\AppData\Local\Continuum\miniconda3\envs\ml-agents\Scripts\mlagents-learn.exe\__main__.py", line 9, in <module>
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\learn.py", line 319, in main
    run_training(0, run_seed, options, Queue())
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\learn.py", line 118, in run_training
    tc.start_learning(env, trainer_config)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\trainer_controller.py", line 283, in start_learning
    self.initialize_trainers(trainer_config, env_manager.external_brains)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\trainer_controller.py", line 206, in initialize_trainers
    run_id=self.run_id,
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\ppo\trainer.py", line 68, in __init__
    self.policy = PPOPolicy(seed, brain, trainer_parameters, self.is_training, load)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\ppo\policy.py", line 74, in __init__
    self._load_graph()
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\tf_policy.py", line 98, in _load_graph
    self.saver.restore(self.sess, ckpt.model_checkpoint_path)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\training\saver.py", line 1775, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
    feed_dict_tensor, options, run_metadata)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
    run_metadata)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key beta1_power_1 not found in checkpoint
         [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\user\AppData\Local\Continuum\miniconda3\envs\ml-agents\Scripts\mlagents-learn.exe\__main__.py", line 9, in <module>
    sys.exit(main())
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\learn.py", line 319, in main
    run_training(0, run_seed, options, Queue())
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\learn.py", line 118, in run_training
    tc.start_learning(env, trainer_config)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\trainer_controller.py", line 283, in start_learning
    self.initialize_trainers(trainer_config, env_manager.external_brains)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\trainer_controller.py", line 206, in initialize_trainers
    run_id=self.run_id,
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\ppo\trainer.py", line 68, in __init__
    self.policy = PPOPolicy(seed, brain, trainer_parameters, self.is_training, load)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\ppo\policy.py", line 74, in __init__
    self._load_graph()
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\mlagents\trainers\tf_policy.py", line 89, in _load_graph
    self.saver = tf.train.Saver(max_to_keep=self.keep_checkpoints)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\training\saver.py", line 1311, in __init__
    self.build()
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\training\saver.py", line 1320, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\training\saver.py", line 1357, in _build
    build_save=build_save, build_restore=build_restore)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\training\saver.py", line 809, in _build_internal
    restore_sequentially, reshape)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\training\saver.py", line 448, in _AddRestoreOps
    restore_sequentially)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\training\saver.py", line 860, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1541, in restore_v2
    shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
    op_def=op_def)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

NotFoundError (see above for traceback): Key beta1_power_1 not found in checkpoint
         [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

UnityEnvironment worker: environment stopping.
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\multiprocessing\util.py", line 319, in _exit_function
    p.join()
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\multiprocessing\process.py", line 124, in join
    res = self._popen.wait(timeout)
  File "c:\users\user\appdata\local\continuum\miniconda3\envs\ml-agents\lib\multiprocessing\popen_spawn_win32.py", line 80, in wait
    res = _winapi.WaitForSingleObject(int(self._handle), msecs)
KeyboardInterrupt

Environment:

smshehryar commented 5 years ago

Is the run ID correct? And has anything changed in the environment?
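One quick way to sanity-check the run ID is to confirm that the model directory for that run exists and see which checkpoint --load would resolve to. A minimal sketch, assuming the TensorFlow 1.x package installed alongside this ML-Agents version and the ./models/<run-id>-0/<brain_name> layout shown in the restore line of the log above (run_id and brain_name are copied from that log; adjust them to your own run):

import os
import tensorflow as tf  # TF 1.x, as installed with this version of ML-Agents

# Values taken from the log above; change them to match your own run.
run_id = "exploreSkill_0924D"
brain_name = "Explore_LearningBrain"
model_dir = os.path.join(".", "models", run_id + "-0", brain_name)

print("Model directory exists:", os.path.isdir(model_dir))
# Reads the 'checkpoint' bookkeeping file in that directory, if present,
# and prints the checkpoint prefix that a restore would use.
print("Latest checkpoint:", tf.train.latest_checkpoint(model_dir))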

ervteng commented 5 years ago

As @smshehryar suggested, this error usually means that something in your config file, or something in the environment, changed between the initial training run and the --load.
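One way to check for such a mismatch, sketched below, is to list the keys the checkpoint actually contains and see whether the key from the error message (beta1_power_1 here) is among them. This uses TensorFlow's tf.train.list_variables rather than any ML-Agents API, and the checkpoint path is copied from the restore line in the log above:

import tensorflow as tf  # TF 1.x

# Checkpoint prefix taken from the restore line in the log above;
# adjust it to your own run.
ckpt = "./models/exploreSkill_0924D-0/Explore_LearningBrain/model-1803501.cptk"

# list_variables returns (name, shape) pairs for every tensor saved in the checkpoint.
names = [name for name, _ in tf.train.list_variables(ckpt)]
for name in sorted(names):
    print(name)

# If the key from the error message is absent, the graph built from the current
# trainer_config.yaml / environment expects variables the old checkpoint never
# saved, and a plain --load cannot restore it.
print("beta1_power_1 in checkpoint:", "beta1_power_1" in names)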

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had activity in the last 14 days. It will be closed in the next 14 days if no further activity occurs. Thank you for your contributions.

Phantomb commented 4 years ago

The run ID was definitely correct. But perhaps some data got corrupted, because when I tried again later (having switched between Git branches) it worked again. Thanks for your input! I'm closing this.

github-actions[bot] commented 3 years ago

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.